Keywords: Strategic Planning, ICT, PETIC, Taxonomy.

Abstract: Innovation in organizations requires better solutions for technology improvement, quality assurance, and customer satisfaction. At the same time, Strategic Planning (SP) and Information and Communication Technologies (ICT) must be integrated and coherent to ensure the survival of organizations. In this context, the Strategic Planning of ICT Methodology (PETIC) is an SP approach that helps managers identify the maturity of the ICT processes required for company management. The increasing number of PETIC applications in organizations has made it difficult to locate and classify the PETIC artifacts produced, while taxonomies have been applied successfully to classification and information retrieval. This paper proposes TAXOPETIC, a taxonomy to support the PETIC Methodology. The taxonomy is also used to implement a software tool called TAXOPETICWeb, which allows the storage and classification of PETIC artifacts and facilitates the process of searching for them.

1 INTRODUCTION

Advances in Information and Communication Technology (ICT) provide competitive advantages for companies. Organizations therefore rely on technological innovation to solve customer problems, ensure quality, and meet expectations. Strategic Planning (SP) is a "tool" that guides the direction and actions of an organization in its external and internal environments. It can be characterized as an ongoing process through which goals and capabilities are defined. SP strives for better resource management and thus reduces the chance of wrong decisions in a highly competitive market with little margin for error (Palmeira et al., 2012).

According to Cassidy (1998), strategic planning activities offer several benefits: improved communication between companies and organizations, better design of information and processes, more effective use of technologies, more careful ICT investment and expenditure, reduced strategic risk in projects, and competitive advantage with better results. It is also essential to use automated tools when preparing a Strategic Plan, to support the strategies and actions established (Palmeira et al., 2012).

In this context, the PETIC Methodology proposes a set of standards and guidelines for designing SP focused on the ICT processes of organizations (Marchi et al., 2010). The growing volume of artifacts produced by applications of the PETIC Methodology created the need for a search engine that systematically classifies deployment results, so that information can be stored and retrieved logically through navigation. One approach that has received attention is the use of taxonomies for classification and information retrieval. Taxonomies are classificatory structures intended as tools for organizing and retrieving information in companies; they act as conceptual maps of the topics explored in an information retrieval service (Bailey, 1994).

This paper surveys existing taxonomies within the ICT domain and proposes a taxonomy to support the PETIC Methodology. Among the advantages identified, a taxonomy eases the location and classification of the PETIC artifacts produced by applying the PETIC Methodology in several organizations.
The proposed taxonomy, called TAXOPETIC, has been applied to real artifacts generated by the PETIC Methodology. The TAXOPETIC structure has also been used to create a software tool called TAXOPETICWeb, which supports the taxonomy by allowing the storage and classification of PETIC artifacts and by facilitating their localization.

2 RELATED WORK

This section presents related work obtained through a literature review. The review aimed to identify works that present taxonomies applied to the SP of ICT processes. Since no works matched these exact keywords, the search was expanded to taxonomies applied to SP in general. The review protocol asked: "What are the existing taxonomies for strategic planning?" To answer this question, we used the query string (taxonomy) AND (strategic planning) against the IEEE Xplore Digital Library, the ACM Digital Library, Springer, and Science Direct. The review returned (Svahnberg et al., 2010), (Pradhan and Akinci, 2012), and (Dukaric and Juric, 2013).

Svahnberg's work presented a systematic review of release-planning model proposals, their degree of empirical validation, factors for requirements selection, and the fate of those requirements. The twenty-four models identified were mapped against each other, and a taxonomy of requirement selection factors was constructed. The authors concluded that many models are related to each other and use similar techniques to solve the release-planning problem, and that although several requirement selection factors appear across different models, many methods fail to address factors such as stakeholder value or internal value (Svahnberg et al., 2010).

Pradhan and Akinci present a taxonomy of the spatial and temporal reasoning mechanisms needed to merge spatial and temporal data sources in support of construction monitoring. The work describes two approaches, interpolation and nearest neighbor, which can be applied to synchronize temporal and/or spatial data sources. The taxonomy was validated on representative queries with construction engineers and managers identified in previous research studies (Pradhan and Akinci, 2012).

Dukaric and Juric propose a unified taxonomy and an architectural framework for IaaS (Infrastructure as a Service). The taxonomy is structured in seven layers: core services, support, value-added services, control, management, security, and fundraising. The authors surveyed several IaaS systems and mapped them to the taxonomy to evaluate the classification. They then introduced an IaaS architectural framework based on the unified taxonomy, providing a detailed description of each layer and definitions of the dependencies between layers and components (Dukaric and Juric, 2013).

The three studies identified by the literature review, although they use taxonomies for purposes different from this paper's, proposed taxonomies that tangentially contributed to this research.

3 CONCEPTS AND TECHNOLOGIES

3.1 PETIC Methodology

The PETIC Methodology comprises the following components: the PETIC artifact, the ICT Process Catalog, importance-versus-cost graphics, and Gantt maps of the information systems pillars (Palmeira et al., 2015): Software, Hardware, Telecommunication, Data, and People.
The PETIC artifact is created during the implementation of the PETIC Methodology in an organization. It is generated from the ICT Process Catalog, the maturity analysis of ICT processes, and a compilation of the improvement actions suggested for the organization's ICT processes (Marchi et al., 2010). The steps to design a PETIC artifact are: (i) identifying or updating the objectives of the ICT unit in the organization; (ii) analyzing the PETIC ICT process catalog; (iii) defining the maturity levels of the organization's ICT processes; (iv) defining the relevance of the organization's ICT processes; (v) defining a catalog of actions for each critical or priority ICT process; (vi) analyzing importance-versus-cost graphics; (vii) discussing results with other stakeholders; (viii) designing Gantt charts; (ix) documenting the PETIC artifact; and (x) reviewing the PETIC artifact annually. In short, the PETIC Methodology assists in the preparation of SP in order to help managers in ICT decision making (Palmeira et al., 2012).

3.2 Taxonomy

A taxonomy is a system for classifying and facilitating access to information that aims to: (i) represent concepts through terms; (ii) streamline communication among specialists and between experts and other stakeholders; (iii) find consensus; (iv) propose ways to control the diversity of meaning; and (v) provide a map of an area that serves as a guide to process knowledge (Bailey, 1994). It is therefore a controlled vocabulary of a particular field of knowledge and, above all, an instrument or design element that allows one to logically allocate, retrieve, and communicate information within a system. Within the ICT domain, taxonomies can be compared to classificatory structures that aim to bring documents together in a logical, classified form. Taxonomies now gather all types of digital documents and enable search strategies and immediate access to information. Unlike classification tables, which provide an address (notation) that locates documents on shelves, a taxonomy dispenses with notation (Gilchrist, 2003).

Metadata is data that identifies and describes information. It can be used to reliably record characteristics such as where and when information was captured, and it may be associated with different types of media such as documents, videos, images, audio, books, and many other files (Linfoot, 2009). When implementing a taxonomy, tags are important tools in the categorization process: after the taxonomy is structured, tags and metadata can be applied to improve the accuracy of document searching (Linfoot, 2009).

3.3 Methods for Constructing a Taxonomy

According to Reamy (2007), creating a quality taxonomy requires following a defined development process. Like any process, taxonomy development requires a well-executed plan, a development cycle, and initial requirements; unlike typical processes, however, the development of a taxonomy never ends. (Delphi Group White Paper, 2002), (Dutra, 2003), (Woods, 2004), and (Kremer, 2005) propose practices and steps for building taxonomies. Unlike these authors, (Bayona-Oré et al., 2014) analyze the practices and steps proposed by several authors, including those mentioned above, and propose a taxonomy development method. Bayona-Oré's method was created from a literature review of methods and guidelines used to build taxonomies.
To create the method, the steps proposed by nine different authors for building taxonomies were analyzed (Bayona-Oré et al., 2014).

3.4 Bayona-Oré Method

The method proposed by (Bayona-Oré et al., 2014) consists of five stages and twenty-four activities. The five stages, their objectives, and the products they generate are:

1. Planning: establishes the work plan that defines the project activities needed to design and implement the taxonomy. Products of this stage: (i) the working plan and (ii) the working group for taxonomy development.

2. Identification and Information Extraction: aligns the working plan with the information needs of the organization; at this stage, the sources of information are identified. Products: (i) the inventory for the construction of the taxonomy, (ii) policies for the use of the taxonomy, (iii) the characteristics of the technology used, and (iv) representative lists of all areas involved.

3. Taxonomy Design and Construction: designs and builds the taxonomy using the terms extracted in the previous stage. Products: (i) the categorization of first-level terms, (ii) the general structure of the taxonomy, and (iii) the dictionary of categories and subcategories.

4. Testing and Validation: ensures that the designed taxonomy helps users achieve their goals. Products: (i) the validated taxonomy, (ii) the validated category dictionary, and (iii) the validated subcategory dictionary.

5. Taxonomy Implementation: ensures the implementation of the taxonomy in the organization, achieved by training staff in the taxonomy and making it available to users. Products: (i) users trained in the taxonomy and (ii) the taxonomy available to users.
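For illustration only, the five stages and their products can be encoded as a simple checklist data structure; the sketch below is our own summary of the method as described above, not an artifact of (Bayona-Oré et al., 2014).

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One stage of the Bayona-Ore taxonomy development method."""
    name: str
    products: list[str] = field(default_factory=list)
    done: bool = False  # tracked while executing the method

# Illustrative encoding of the five stages and their products.
METHOD = [
    Stage("Planning", ["working plan", "working group"]),
    Stage("Identification and Information Extraction",
          ["inventory", "usage policies", "technology characteristics",
           "representative lists"]),
    Stage("Taxonomy Design and Construction",
          ["first-level categories", "general structure",
           "category/subcategory dictionary"]),
    Stage("Testing and Validation",
          ["validated taxonomy", "validated category dictionary",
           "validated subcategory dictionary"]),
    Stage("Taxonomy Implementation",
          ["trained users", "available taxonomy"]),
]

# Example: report which products are still pending for unfinished stages.
for stage in METHOD:
    if not stage.done:
        print(f"{stage.name}: pending -> {', '.join(stage.products)}")
```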
4 TAXOPETIC PROCESS DESIGN

The growing volume of PETIC Methodology artifacts created the need for measures to store, classify, and locate these artifacts. The taxonomy called TAXOPETIC therefore serves for the storage, classification, and easy location of artifacts obtained by applying the PETIC Methodology in several organizations. To create TAXOPETIC, we analyzed methods for building taxonomies and decided to follow the method of (Bayona-Oré et al., 2014) for three reasons: it is the most recent method described in the literature; it proposes a stage dedicated to testing and validation, aimed at improving the taxonomy with test results; and its taxonomy implementation stage includes activities concerned not only with availability but also with usability, management, and maintenance of the taxonomy. Following the (Bayona-Oré et al., 2014) method, the construction of TAXOPETIC went through the five stages listed below.

4.1 Planning

In this first stage, the following products were obtained:

i. The roadmap containing the working plan. The roadmap set the duration of the TAXOPETIC construction at four months, from August to November 2015. Within this period, the five phases to be followed for TAXOPETIC construction were sequenced; each phase and its products are depicted in Figure 1.

ii. The working group for the construction of TAXOPETIC, formed by the coordinator and members of GPES, the Software Engineering Research Group at UFS (Federal University of Sergipe).

4.2 Identification and Information Extraction

In this stage, the following products were obtained:

i. Inventory for the construction of TAXOPETIC. PETIC Methodology artifacts were found distributed across several storage media, which demanded considerable operational work to locate and catalog them. They were distributed across folders on Dropbox, Web links, and e-mail attachments.

ii. Policies for TAXOPETIC use. Two access levels were defined: registration and query. GPES members have full access to insert, update, and delete PETIC artifacts; once included, artifacts are available on the Web for unrestricted access by other organizations.

iii. Characteristics of the technology used. The software used to create TAXOPETICWeb was Drupal (version 7.41) and MySQL (version 5.0.11). Drupal is a content management system that provides easy content creation, reliable performance, and strong security (Drupal, 2015), and it supports taxonomies, tags, and metadata for content classification (Drupal, 2016). MySQL is an open-source database; because of its proven performance, reliability, and ease of use, it has become a leading database choice for Web-based applications (MySQL, 2015).

iv. Representative list of all areas involved: the coordinator and members of GPES.

4.3 Taxonomy Design and Construction

In this third stage of TAXOPETIC construction, the following products were designed:

i. First-level categorization of terms. Analysis of the artifacts, needs reported by GPES members, and a survey applied to organizations helped define the first-level TAXOPETIC categories: (i) Service Organizations, (ii) Public Organizations, (iii) Mutual Benefit Associations, and (iv) Commercial Stakeholder Organizations. Each category has a number of subcategories associated according to its purpose; other categories and subcategories may be added as new needs are identified after TAXOPETIC's implementation.

ii. General TAXOPETIC structure. Figure 2 shows all TAXOPETIC categories and subcategories.

iii. TAXOPETIC category and subcategory dictionary, constructed as follows:

- The default category, called SP dimensions of ICT in organizations, includes all TAXOPETIC categories and subcategories.
- Service Organizations: organizations whose main beneficiaries are their customers. Subcategories: (i) schools, (ii) universities, (iii) religious organizations, (iv) social agencies, and (v) non-governmental organizations.
- Public Organizations: organizations whose main beneficiary is the public. Subcategories: (i) legal institutions, (ii) health institutions, (iii) public security, (iv) military, and (v) post office.
- Mutual Benefit Associations: organizations whose main beneficiaries are the members themselves. Subcategories: (i) trade unions, (ii) cooperatives, (iii) consortia, and (iv) professional associations.
- Commercial Stakeholder Organizations: organizations whose main beneficiaries are owners or shareholders. Subcategories: (i) private companies and (ii) joint-stock companies.

Figure 2: TAXOPETIC categories and subcategories.

Building a subcategory dictionary was not necessary, because each subcategory's purpose is directly conveyed by its name.
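The category and subcategory hierarchy above maps naturally onto a nested dictionary. The sketch below encodes Figure 2's structure in Python for illustration; the dictionary and the lookup helper are ours, not part of TAXOPETICWeb.

```python
# Illustrative encoding of the TAXOPETIC hierarchy described above.
# The root ("SP dimensions of ICT in organizations") is the default category.
TAXOPETIC = {
    "Service Organizations": [
        "schools", "universities", "religious organizations",
        "social agencies", "non-governmental organizations",
    ],
    "Public Organizations": [
        "legal institutions", "health institutions",
        "public security", "military", "post office",
    ],
    "Mutual Benefit Associations": [
        "trade unions", "cooperatives", "consortia",
        "professional associations",
    ],
    "Commercial Stakeholder Organizations": [
        "private companies", "joint-stock companies",
    ],
}

def classify(subcategory: str) -> str | None:
    """Return the first-level category that owns a given subcategory."""
    for category, subcategories in TAXOPETIC.items():
        if subcategory in subcategories:
            return category
    return None

print(classify("universities"))  # -> "Service Organizations"
```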
4.4 Testing and Validation

In this test and validation phase, the following products were generated:

i. Validated TAXOPETIC. TAXOPETIC was tested by the coordinator and members of GPES at UFS. The tests confirmed the viability of the proposed structure, enabling users to store, categorize, and locate artifacts easily.

ii. Validated category dictionary. The category dictionary was validated with TAXOPETIC users.

iii. Validated subcategory dictionary. There was no need to validate subcategory dictionaries, because each subcategory's name already defines its purpose.

4.5 Taxonomy Implementation

In this final phase of TAXOPETIC construction, the following products were generated:

i. TAXOPETICWeb user training. The training plan defined sessions in different shifts to cover all TAXOPETICWeb users. Training covered storage, categorization, and artifact location practices to optimize learning, and users received an operating manual for future reference.

ii. TAXOPETICWeb availability. TAXOPETICWeb was made available on GPES's internal network using the infrastructure of the Computer Science Department.

5 TAXOPETICWeb TOOL

TAXOPETICWeb was built with the Drupal content management framework and the MySQL database, chosen because both are open-source and meet the research needs. Figure 3 shows an artifact being located through the TAXOPETIC category and subcategory classification or through tags. Tag blocks containing elements of the PETIC Methodology are presented vertically on the TAXOPETICWeb homepage. For example, the Service Organizations category and Universities subcategory are used to access the UFS PETIC artifact; in Figure 3B, the artifact of the Computer Science Department at UFS is retrieved. The artifact can also be located using tags: PETIC Methodology areas, sub-areas, processes, and improvement actions.

To store and catalog a new artifact in TAXOPETICWeb, the following metadata were defined: (i) organization name, (ii) organization logo, (iii) artifact description, (iv) category and subcategory of the organization that applied the PETIC Methodology, (v) the organization's Federative Unit, (vi) the year the artifact was generated, (vii) the document containing the artifact, and (viii) the artifact version number, as depicted in Figure 3.

Figure 3: PETIC artifact categorized in TAXOPETICWeb.

Figure 4 also shows the tags designed to facilitate artifact location: (i) PETIC areas, (ii) PETIC sub-areas, (iii) PETIC processes, and (iv) the improvement actions belonging to the artifact.
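As a sketch of how such a record might look, the snippet below models the eight metadata fields and the tag sets as a Python dataclass. The field names paraphrase the list above, and all sample values (paths, year, tags) are hypothetical; TAXOPETICWeb itself stores this information through Drupal.

```python
from dataclasses import dataclass, field

@dataclass
class PeticArtifact:
    """One cataloged PETIC artifact, mirroring the metadata listed above."""
    organization_name: str
    organization_logo: str          # path or URL to the logo image
    description: str
    category: str                   # first-level TAXOPETIC category
    subcategory: str
    federative_unit: str            # Brazilian state, e.g. "SE"
    year: int
    document: str                   # path or URL to the artifact file
    version: str
    # Tags drawn from the PETIC process catalog (areas, sub-areas,
    # processes, improvement actions) to support tag-based search.
    tags: set[str] = field(default_factory=set)

# Hypothetical example record for the UFS artifact mentioned above.
ufs = PeticArtifact(
    organization_name="UFS - Computer Science Department",
    organization_logo="logos/ufs.png",
    description="PETIC artifact for the Computer Science Department",
    category="Service Organizations",
    subcategory="universities",
    federative_unit="SE",
    year=2015,
    document="artifacts/ufs-dcomp.pdf",
    version="1.0",
    tags={"Software", "Hardware", "Data"},
)
```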
6 DISCUSSION AND ANALYSIS

Faced with many needs to be supported, including storing, classifying, and finding the artifacts produced during applications of the PETIC Methodology in organizations, we identified the need for a taxonomy to support PETIC. TAXOPETIC was constructed by following the method created by (Bayona-Oré et al., 2014), which comprises five stages and twenty-four activities; by obtaining the products of each stage, it was possible to conceive TAXOPETIC.

For Shaw, good research requires not only a result but also clear and convincing evidence for it. Several validation techniques are used in software engineering research: analysis, experience, example, evaluation, persuasion, and assertion (Shaw, 2002). In this context, we use an example to validate TAXOPETIC. To illustrate it in this article, we explore the Service Organizations category and its Universities subcategory.

The TAXOPETIC structure allowed us to create an application named TAXOPETICWeb, whose creation let us analyze an example, as proposed by (Shaw, 2002). It was necessary to manually catalog all PETIC Methodology artifacts before storing them in TAXOPETICWeb. This procedure required a costly search, because the artifacts were distributed across several different storage media. TAXOPETICWeb then made it possible to find, store, and catalog the PETIC Methodology artifacts spread across organizations in the Northeast, Southeast, and North regions of the country. After the PETIC artifacts were cataloged, they were stored in TAXOPETICWeb using metadata and tags. Basic data about the organization and the file containing the PETIC artifact were used as metadata; the PETIC Methodology process catalog (areas, sub-areas, processes, and improvement actions) was used for tags. Tags facilitate the search for an artifact once a tag has been selected on a stored file. Thus, TAXOPETICWeb enables artifact location through the TAXOPETIC structure and through the tags defined when artifacts are stored.

7 CONTRIBUTIONS AND FUTURE WORK

The PETIC Methodology proposes a set of standards and policies for designing an organization's Strategic Planning of ICT. Applying PETIC in organizations made it difficult to locate and classify the PETIC artifacts produced. The main contributions of this article are: (i) a literature search to identify and present existing taxonomies within the ICT domain; (ii) an analysis of taxonomy construction proposals leading to the selection of a construction method; (iii) the application of the Bayona-Oré method's steps (Bayona-Oré et al., 2014) in the construction of TAXOPETIC and the TAXOPETICWeb tool; (iv) the selection of a content management framework and a database for the TAXOPETICWeb implementation; (v) the creation of a tool to validate the TAXOPETIC structure; and (vi) the creation and analysis of an example exploring a TAXOPETIC category and subcategory.

In future work we intend to evaluate the usability of TAXOPETICWeb using the usability heuristics for interface inspection proposed by (Nielsen, 1995). We also plan to integrate TAXOPETIC and TAXOPETICWeb with the PETIC Methodology Knowledge Portal.

REFERENCES

MySQL. About MySQL. Retrieved November 12, 2015, from <https://www.mysql.com/about/>.
{"Source-Url": "http://www.scitepress.org/Papers/2017/63243/63243.pdf", "len_cl100k_base": 4798, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 30141, "total-output-tokens": 6390, "length": "2e12", "weborganizer": {"__label__adult": 0.0003995895385742187, "__label__art_design": 0.0010747909545898438, "__label__crime_law": 0.0008320808410644531, "__label__education_jobs": 0.0176849365234375, "__label__entertainment": 0.0002498626708984375, "__label__fashion_beauty": 0.0003101825714111328, "__label__finance_business": 0.04144287109375, "__label__food_dining": 0.0005125999450683594, "__label__games": 0.0008296966552734375, "__label__hardware": 0.0021228790283203125, "__label__health": 0.0010128021240234375, "__label__history": 0.0010595321655273438, "__label__home_hobbies": 0.0003466606140136719, "__label__industrial": 0.00135040283203125, "__label__literature": 0.0011615753173828125, "__label__politics": 0.0008597373962402344, "__label__religion": 0.0005707740783691406, "__label__science_tech": 0.255859375, "__label__social_life": 0.00033354759216308594, "__label__software": 0.1534423828125, "__label__software_dev": 0.51708984375, "__label__sports_fitness": 0.0003037452697753906, "__label__transportation": 0.00078582763671875, "__label__travel": 0.00041961669921875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26799, 0.02729]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26799, 0.34916]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26799, 0.91102]], "google_gemma-3-12b-it_contains_pii": [[0, 3340, false], [3340, 7885, null], [7885, 12274, null], [12274, 14198, null], [14198, 17759, null], [17759, 20657, null], [20657, 22679, null], [22679, 26799, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3340, true], [3340, 7885, null], [7885, 12274, null], [12274, 14198, null], [14198, 17759, null], [17759, 20657, null], [20657, 22679, null], [22679, 26799, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26799, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26799, null]], "pdf_page_numbers": [[0, 3340, 1], [3340, 7885, 2], [7885, 12274, 3], [12274, 14198, 4], [14198, 17759, 5], [17759, 20657, 6], [20657, 22679, 7], [22679, 26799, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26799, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
2c28dd34409e0c5235652a9b8b547dd94043465d
Automatically locating relevant programming help online

Oleksii Kononenko, David Dietrich, Rahul Sharma, and Reid Holmes
School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada
Email: {okononen, d4dietri, r64sharm, rtholmes}@uwaterloo.ca

Abstract—While maintaining software systems, developers often encounter compilation errors and runtime exceptions that they do not know how to solve. Solutions to these errors can often be found through discussions with other developers on the Internet. Unfortunately, many of these online discussions do not contain relevant answers. We have developed an approach to automatically query and analyze online discussions to locate relevant solutions to programming problems. Our tool, called Dora, is integrated into Visual Studio and allows developers to query and evaluate solutions within their development environment, reducing context switching between their development tasks and their search sessions. We have performed a semi-controlled experiment with 18 tasks to validate the utility of our search approach, finding that it provides 55% more relevant results than traditional web searching approaches.

I. INTRODUCTION

Developers frequently wonder, "How did this runtime state occur?" [1]; this question often arises as a result of an exception or other programming error. Given the vast nature of software systems, developers sometimes encounter errors that are new to them and that they do not know how to solve. Determining how to resolve these errors often involves investigating documentation specific to the software being worked on [2], discussing the error with fellow developers [3], or searching the Internet and investigating various online resources in the hope of finding a suitable solution or explanation [4], [5].

Identifying helpful programming information online can be challenging; while developers often post their problems online, many of these posts have no solution at all, or the provided solution is incorrect. The volume of unhelpful information can cause a developer to investigate many unsuitable posts before finding the correct answer. Time spent investigating unhelpful posts increases the duration of search sessions, as well as the likelihood that the developer abandons the search task altogether [6].

We have designed an approach to help developers identify posts from discussion forums that are contextually relevant to the developer's problem. Once a search is initiated from our tool (called Dora) within the developer's Integrated Development Environment (IDE), several common programming-related online resources are searched. The results are evaluated according to our model, which captures the properties of helpful programming posts. The most relevant results are displayed to developers within their IDE, enabling easy result investigation without losing the context of their current development tasks.

We have evaluated both our model of helpful programming posts and the complete Dora tool. The model was trained with 255 results returned from 14 queries. As a result of this training evaluation, we identified six characteristics that capture relevant aspects of a helpful programming post. Dora was evaluated through a semi-controlled experiment using 18 search tasks. The tool was compared to equivalent queries performed with Google¹ and Stack Overflow², a frequently-used source of development questions [7].

¹http://google.com
²http://stackoverflow.com
We found that Dora provides a higher proportion of relevant results (47%) than Google (30%) or Stack Overflow (26%). Dora also returned relevant results earlier in the result list (median 1.0) than Google (2.83) or Stack Overflow (2.0).

The contributions of our paper are:

1) A model that identifies online posts likely to be useful for answering programming questions.
2) A set of characteristics that evaluates whether textual material matches our model, along with a validation of the utility of these properties.
3) A tool that embodies the matching characteristics, along with a validation of its effectiveness.

The remainder of the paper is as follows: a motivating scenario is provided in Section II; related work is covered in Section III; the model, its weighting, and its relevance levels are presented in Section IV; Dora and its evaluation are described in Section V; discussion is included in Section VI; Section VII concludes.

II. SCENARIO

In this section we give a brief overview of how a developer would use Dora in practice. John is a software developer who uses the Visual Studio IDE to maintain a system written in C#. While developing a new feature, he encounters a compilation error that states "Bad array declarator" in the Error view (Figure 1(A)). John has never seen this error before, and the error message alone is not enough for him to figure out what he needs to do to fix the error and get back to his task. To solve this problem, John switches from the IDE to his web browser and manually enters a query into a search engine to see how other developers resolved this problem. Unfortunately for John, several of the returned results have developers asking about this problem but do not contain solutions.

In contrast, when using Dora, John initiates the query within the IDE (Figure 1(B)). He selects the error that he would like to find solutions for (Figure 1(C)); Dora then generates a query automatically and searches relevant online programming forums. Dora evaluates the results returned from these forums to identify posts that are likely to help resolve John's problem. The results of this query are displayed within Visual Studio (Figure 2); each result includes a page title, a short description, and the link to the page (Figure 2(A)). The results are sorted according to their utility for John's query; irrelevant posts, or those that do not contain solutions, are elided. If John chooses to modify his query, he can manually edit the search terms and perform the search again (Figure 2(B)).

After John has resolved his compilation errors, he is able to run his code. Unfortunately, he encounters a runtime exception (AssemblyResolutionException) when calling an external library. He again initiates a Dora search and is able to quickly find a solution without leaving the IDE or having to investigate any dead-end forum posts. By clicking on the link of an appropriate result, John can view the full thread within the Dora IDE view.

III. RELATED WORK

Understanding programming errors (and how to fix them) is a fundamental software development skill. Compilation errors are one kind of programming problem developers must resolve. Unfortunately, novice developers often have trouble understanding compilation errors [8], [9].
Nienaltowski et al. [9] also found that experienced developers are able to solve compilation problems more quickly, indicating that novice developers could benefit from learning from others.

Hartmann et al. [10] developed the HelpMeOut system to assist developers by automatically suggesting solutions to compiler errors. HelpMeOut provided a structured environment that stored and provided suggestions on compilation errors for developers. While the approach considered solutions gathered from other users, it did not query the Internet for these resources. In contrast, Dora is a lighter-weight approach that does not maintain its own database, although HelpMeOut results are more structured and succinct than those returned by Dora.

Experienced developers often leverage the past work of other developers while developing their systems (e.g., [11]); this behaviour has been demonstrated to be effective in practice [12]. In particular, a number of software recommendation systems have been created to help developers accomplish tasks by looking at examples from existing systems (e.g., CodeFinder [13], Jungloid mining [14], and Strathcona [15]).

Previous work by Parnin and Treude [16] on using crowd documentation for application programming interfaces (APIs) has shown the benefits of using social media as a means of documenting APIs. Their work focused on API documentation for the jQuery library and found that Stack Overflow alone covered 84.4% of the API, while official forums contained documentation for only 37.0% of it. A more recent report by Parnin et al. [17] explored three additional APIs: Google Web Toolkit, Java, and Android. This report confirms the previous work, showing that in the case of the Android API, 87% of the API is covered by at least one post on Stack Overflow. Although this work did not directly look at errors, the results show that collaborative online resources can contain valuable programming information.

Bacchelli et al. [18] developed Seahawk, an Eclipse plugin that integrates Stack Overflow with the Eclipse IDE. Seahawk tightly integrates the IDE and Stack Overflow, enabling developers to query, view, link, and incorporate examples from Stack Overflow into their system. Dora is complementary to Seahawk; the primary focus of our tool is improving the relevance of the results returned from multiple online resources.

IV. MODEL

Developers frequently post questions about programming problems online. Conversational threads are composed of posts that contain questions, answers, discussion, code snippets, and other conversation. When searching for help online, developers want to find threads about their problem containing posts that answer their questions as quickly as possible. Unfortunately, developers frequently encounter online discussion threads that contain only questions (and no answers), or questions that have incorrect answers. We have identified several characteristics that model posts that are likely to solve programming problems; these characteristics have been implemented in Dora.

A. Characteristics of helpful posts

The primary challenge is to take a textual document (e.g., a post on a web page) and determine whether it provides a potentially-successful solution to a given query. To address this problem, we identified eight characteristics, or properties, that we hypothesize will model potentially useful posts.
Although each of these characteristics is weighted equally in the unweighted model (with a score of one), some of them can score more than once (these are clearly identified below).

1) Marked solved/answered: Online development resources often let developers flag threads as having correct solutions (and flag the specific post providing the right solution). This marker, usually displayed prominently near the top of the thread and near the correct solution, helps to quickly identify whether a post is helpful: from a developer's point of view, its presence suggests that there should be a helpful result within the included comments (as long as the post is relevant to their query).

2) No replies: It is common to encounter threads with no replies to the original question. Such dead threads provide no benefit to the developer. While the date of a post could be considered (e.g., it is unsurprising that a post made only five minutes ago has no solution yet), we treat all posts with no replies equivalently.

3) N replies: For threads with one or more replies, each reply is counted, so a thread with N replies is assigned a value of N. The justification is that more replies may indicate increased discussion about the problem or the presence of alternative solutions to the error.

4) N replies contain source code: Since compilation errors generally arise from source code problems, responses that provide code snippets are a positive sign; runtime exceptions, too, often occur because of logical errors in source code or incorrect object initialization. Source code in these kinds of discussion forums is predominantly placed in easily-identifiable HTML markup, which we search for to locate code snippets in replies. The presence of source code in a reply can indicate that a developer is trying to provide a concrete solution to the original question; we assign a value of 1 to each reply that contains a source code snippet.

5) N author replies: When developers reply to their own threads, they are actively encouraging greater discussion of their problem. Authors also reply to their own posts when other developers request additional information or clarification that could help resolve the original problem. We assign a value of 1 for each reply made by the thread's original author.

6) Author made last reply: We also identify threads where the author made the final comment on their question. This measure is included because developers often close their posts by thanking another developer on the thread for providing a solution.

7) Finding positive themes using Sentiment Analysis: The positive or negative attitudes present in a thread can give insight into the influence of the solutions provided by other developers. This method of analyzing attitude is called Sentiment Analysis [19]; we use the Alchemy API Sentiment Analysis tool (http://www.alchemyapi.com/api/sentiment/) to analyze the sentiment of replies to the question asked. We split sentiment analysis into two characteristics: replies from the author, and replies from other developers. Our justification is that positive replies from the author may indicate gratitude or confirmation that everything now works, whereas negative replies from the author may indicate that the posted solutions have not worked. Other developers may add that the provided solution worked for them, or chime in that they are also unable to solve the problem.
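To make the counting concrete, the sketch below extracts these characteristics from a minimal thread representation. The Thread and Post types are hypothetical stand-ins for Dora's actual parsed data, and sentiment is omitted since it was later dropped from the model (see Section IV-B).

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    has_code: bool   # true if the post body contains code markup

@dataclass
class Thread:
    author: str      # who asked the original question
    solved: bool     # carries the site's solved/answered marker
    replies: list[Post]

def extract_features(t: Thread) -> dict[str, int]:
    """Count the model characteristics described above for one thread."""
    return {
        "solved": int(t.solved),
        "no_replies": int(len(t.replies) == 0),
        "n_replies": len(t.replies),
        "n_code_replies": sum(p.has_code for p in t.replies),
        "n_author_replies": sum(p.author == t.author for p in t.replies),
        "author_last": int(bool(t.replies) and t.replies[-1].author == t.author),
    }
```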
B. Weighting the model

The model characteristics above were identified by the authors based on their experience with online help sources. While we do not claim that they are comprehensive, we believe they capture many salient properties of helpful programming threads. It is natural to assume that these characteristics are not equally valuable: while our unweighted model applies equal weights to each characteristic (except those that can be counted more than once), the characteristics must be weighted to assess the utility of any given result.

1) Experimental approach: We performed a semi-controlled experiment to identify a candidate set of weights for our characteristics. During the experiment, we performed 14 searches with our tool, which returned 264 search results (threads). The searches used compilation errors and runtime exceptions chosen by the authors, based on their experience, to represent a reasonably diverse set of exceptions. The first three authors (the participants) independently evaluated these search results, assigning each a relevance value on a scale from 1 to 5, where 1 indicates that the thread contains no useful information and 5 means that the thread provides an exact solution to the query. 9 of the 264 results were discarded because the participants did not sufficiently agree on their relevance (e.g., the difference between at least two assigned values was more than 2). For the remaining 255 results, we averaged the assigned values to compensate for possible minor differences in how we evaluated thread relevance.

We used WEKA [20], a machine learning tool, to apply multiple linear regression to the 255 data points and derive candidate weights. We chose multiple linear regression because we are interested in approximating relevance rather than classifying each result into a specific bin. The data for all experiments described in the paper are available online.⁴

⁴http://dora.googlecode.com

2) Identifying model weights: The resulting weights from the linear regression are shown in Table I. The baseline value is 1.93; every query result starts with this value, and the scores of each matching characteristic are added to it. A positive weight means a characteristic correlates with a positive impact on overall relevance, while a negative weight means the opposite. The greater the score, the more relevant the result.

| Characteristic                | Weight |
|-------------------------------|--------|
| Marked solved/answered        |  0.71  |
| No replies                    | -0.81  |
| N replies                     |  0.06  |
| N replies contain source code |  0.01  |
| N author replies              | -0.06  |
| Author made last reply        |  0.20  |

TABLE I: Final weights associated with each model characteristic.

The negative weight for the N author replies characteristic conflicts with our initial expectation that the more the author replied, the more useful the thread would be.
Qualitatively reexamining the threads we scored, we believe the negative correlation arises because (a) authors often reply to say that the solutions suggested by others did not work, (b) authors reply to clarify an initially incomplete question, and (c) some threads contain only the author's replies (authors often 'bump' their question to make it appear at the top of the question queue).

We do not report weights for Sentiment Analysis because we found that these characteristics create more noise than meaningful contribution to the overall model. For example, Sentiment Analysis often returned false negatives when an author naturally used "negative" words while describing the problem (e.g., fatal execution or memory corruption), or when a poster used phrases like "understood my error" or "I see what was wrong"; in such cases, Sentiment Analysis punished the search result even though the question was answered successfully. There were also many false positives when an author simply thanked a developer for a reply, or when others ended an answer with phrases like "hope it will help you". Code snippets also caused problems; for instance, boolean success = run(); would be scored positively (because of the word "success"), even though the word has no bearing on the effectiveness of the snippet. For these reasons, we removed Sentiment Analysis from our model.

The two largest factors determining the relevance of a post are whether the post has been marked solved (+0.71) and whether the thread is devoid of any answers at all (-0.81). Interestingly, the author making the last reply has a positive weight (+0.20) even though an author being highly active in a thread has a negative weight (-0.06 for each reply). Including source code in a post is a fairly minor factor (+0.01); we believe this may be because source code tends to be present in most posts, whether they are helpful or unhelpful.

C. Determining result thresholds

Since our model assesses the relevance of a result, we want to return only relevant discussion threads to the developer. To do this, we classify the returned results as "good", "ok", and "poor"; "poor" results are never returned to the developer. In Figure 2, "good" results have a green background and a star in the first column, while "ok" results have a plain white background. By differentiating "good" and "ok" results, developers gain a greater sense of Dora's confidence in a result, beyond its rank in the result view.

To determine the thresholds, we re-ran all of the queries from Section IV-B with our derived weights and compared the weighted scores returned by our model to the relevance values assigned by the participants. The participants did not have to re-evaluate any result; since they had evaluated every result of the unweighted model, we could apply the weights retroactively. Analyzing all 255 results as one set, ordered by their returned score, we found a clear split between "ok" and "poor" results at a score of 2.7, where the data showed a knee effect. Because the main contributor to this effect was the solved/answered characteristic, we analyzed the search results below this threshold to learn how the bias might affect sites that do not support this kind of metadata. We observed that 197 of the 255 search results (77%) were from sites that support a Solved/Answered mark, indicating that heavily weighting this characteristic is unlikely to cause an unfair bias.

The distinction between "good" and "ok" results was more subjective; however, since both "good" and "ok" results are returned to the developer, and their order is not affected by this choice, the split was less critical. The data also exhibited a knee effect around 3.25, which led us to select that value as the cutoff between "good" and "ok" results. With these weights applied, 8 of the 14 training queries had at least one "good" result, while 13 of the 14 queries had one or more "ok" results (minimum 1, average 4.4); one query had only one "good" result and no "ok" results. The number of results in each quality category for our 255 results is shown in Figure 3.
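Putting the pieces together, the following sketch applies the published baseline, weights, and thresholds to the feature counts extracted earlier. It is our illustration of the scoring described above, not Dora's actual code.

```python
# Baseline and weights from Table I; thresholds from Section IV-C.
BASELINE = 1.93
WEIGHTS = {
    "solved": 0.71,
    "no_replies": -0.81,
    "n_replies": 0.06,
    "n_code_replies": 0.01,
    "n_author_replies": -0.06,
    "author_last": 0.20,
}

def score(features: dict[str, int]) -> float:
    """Weighted relevance score: baseline plus weighted feature counts."""
    return BASELINE + sum(WEIGHTS[name] * value
                          for name, value in features.items())

def quality(s: float) -> str:
    """Map a score onto the good/ok/poor bands; 'poor' is never shown."""
    if s >= 3.25:
        return "good"
    if s >= 2.7:
        return "ok"
    return "poor"

# Example: a solved thread with 4 replies, 2 containing code,
# where the author replied once and made the last reply.
features = {"solved": 1, "no_replies": 0, "n_replies": 4,
            "n_code_replies": 2, "n_author_replies": 1, "author_last": 1}
s = score(features)   # 1.93 + 0.71 + 0.24 + 0.02 - 0.06 + 0.20 = 3.04
print(s, quality(s))  # -> 3.04 ok
```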
V. DORA SOLUTION SEARCH TOOL

We built the Dora search tool to help developers identify relevant programming posts. Dora was built as a Visual Studio add-in, enabling developers to maintain their task context by forming queries and investigating results without using external search tools. This keeps them focused on their tasks and relieves them of constant task switching whenever they need to perform a search [21]. To improve performance, we implemented the search logic on the server side, using the client side only to generate the query string and to provide a GUI for examining results. Dora is available for download.⁵

⁵http://dora.googlecode.com

The current Dora prototype searches five specific web sites for solutions:

- Stack Overflow [http://stackoverflow.com]
- Daniweb [http://daniweb.com]
- Bytes [http://bytes.com]
- Codeguru [http://codeguru.com]
- Dev Shed [http://forums.devshed.com]

These five sites were chosen based on the authors' programming experience: three of the authors have more than one year of industrial programming experience and found these sites to be the most commonly used when searching for programming help for the C# language. While our model is not tied to any specific online resource or programming language, common properties of these sites may have influenced the helpful characteristics we identified. For example, the selected sites use a thread-based format that enables developers to collaborate, comment, and add metadata to the provided solutions. This contrasts with more static web resources where commenting is not enabled, or blog-like posts where comments appear at the bottom but are often not an integral part of the post itself.

We use a set of site-specific regular expressions to extract data from the results returned for each site. For each thread, we delineate all of the posts and extract each author and their message (deleting any quoted material), and we check for the presence of the Solved/Answered metadata and of code snippets. Adding new sites is generally straightforward; for the five sites we support, none requires more than 12 lines of parsing code. While users cannot add new sites directly to the tool, once sites are added on the server side they are automatically available to all clients.
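As a sketch of what a site-specific parser might look like, the snippet below pulls posts out of a simplified, hypothetical forum HTML layout. The markup and class names are invented for illustration and do not come from any of the five sites.

```python
import re

# Hypothetical thread markup: each post is a <div class="post"> with an
# author attribute; an "accepted" marker flags the solved state.
SAMPLE_HTML = """
<div class="thread accepted">
  <div class="post" data-author="alice">Why does this throw? <code>x[0]</code></div>
  <div class="post" data-author="bob">Check the bounds. <code>if (i &lt; n)</code></div>
</div>
"""

POST_RE = re.compile(
    r'<div class="post" data-author="(?P<author>[^"]+)">(?P<body>.*?)</div>',
    re.DOTALL)

def parse_thread(html: str) -> dict:
    """Extract the solved flag, authors, and code presence from one thread."""
    posts = [{"author": m["author"],
              "has_code": "<code>" in m["body"]}
             for m in POST_RE.finditer(html)]
    return {"solved": 'class="thread accepted"' in html, "posts": posts}

print(parse_thread(SAMPLE_HTML))
```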
A. IDE integration

Visual Studio provides a view that lists compilation errors whenever they occur; Dora uses only the description field from this view when formulating its queries. Retrieving runtime exceptions is more problematic: we subscribe to Visual Studio's event stream to capture AppDomain.UnhandledException events, which occur whenever a runtime exception is encountered. We cast all runtime exceptions to their base class System.Exception and take the value of its Message field; again, only the exception description is used to formulate the query. To initiate the query, the user selects the Dora entry from the Tools menu of Visual Studio; queries are initiated in the same manner for both compilation errors and runtime exceptions.

After querying the five sites listed above, the results are analyzed and displayed in another Visual Studio view; an example is shown in Figure 2. Each returned result includes the title of the matched post, a short snippet of the description, an indication of relevance as assessed by our model, and a link to the post itself. To help developers stay focused on their tasks, Dora can show the complete text of a query result within the IDE: clicking a result's link opens the result in a new tab of the results view.

B. Performing searches

Dora automatically generates a query for each of the five programming help sites based on the element the developer queried upon. We manually identified an effective query string through trial and error. While we initially created very specific queries, we noticed that including too much contextual information (e.g., information specific to the developer's system) overly restricted results. Ultimately, we settled on using only the error message, with all machine-specific portions (file names, variables, etc.) removed. The query string is quoted so that the entire string is searched for rather than each individual word.

We search each of the five sites separately using the Google Custom Search API⁶ rather than querying the sites directly, and Dora examines only the first 10 results for each site. Queries to the sites are performed in parallel to increase performance. Dora's most time-consuming action is downloading the HTML of each result for further analysis; the analysis itself is quick enough to be insignificant. Since the current prototype waits for all results before showing them to the developer, the overall waiting time depends on the slowest site. While performance could be improved by evaluating results as they are identified, we have not implemented this streaming behavior because we want to maintain a stable ordering within the result view.

⁶http://code.google.com/apis/customsearch/
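The query construction just described might look roughly like the following. The regular expressions are our own guesses at how machine-specific fragments could be stripped, not Dora's actual rules.

```python
import re

def build_query(error_message: str) -> str:
    """Strip machine-specific parts of an error message and quote it."""
    msg = error_message
    # Drop Windows-style file paths (hypothetical pattern).
    msg = re.sub(r"[A-Za-z]:\\\S+", "", msg)
    # Drop single-quoted identifiers such as variable or type names.
    msg = re.sub(r"'[^']*'", "", msg)
    # Collapse leftover whitespace and quote the whole string.
    msg = re.sub(r"\s+", " ", msg).strip()
    return f'"{msg}"'

print(build_query("The type or namespace name 'Foo' could not be found"))
# -> "The type or namespace name could not be found" (as a quoted phrase)
```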
C. Evaluating the model and tool

Software developers currently use several different approaches to search for programming help on the Internet. Ultimately, the main question is whether Dora can identify relevant results more effectively than a traditional Internet search. We performed an evaluation to answer two specific questions:

RQ-1: Does Dora provide more relevant results than traditional search approaches?

RQ-2: Is the index of the first relevant result better with Dora than with traditional search approaches?

The first three authors were the participants in this evaluation. We compared Dora to a Google web search and a search of Stack Overflow using its built-in search engine. Google was chosen as it is the most popular general-purpose search engine; Stack Overflow was selected as it is currently the most popular and active general programming help forum.

D. Method

We set out to answer RQ-1 and RQ-2 by performing a set of searches for "representative" C# exceptions. To identify this set of exceptions, we analyzed the 10 most popular projects on Codeplex⁷, a popular hosting site for open-source C# projects. By analyzing the source code of these 10 projects, we identified the 18 most commonly-caught runtime exceptions. We decided to focus on runtime exceptions because we believe they are harder to resolve than compile-time errors.

⁷http://codeplex.com

We split the experiment into three phases. During the first phase, we performed a Dora search for each of the 18 exceptions. Each participant independently examined the first five results for each query to determine whether each result was relevant; we always looked at the top five results, regardless of Dora's thresholds (which would normally exclude poor results). The thresholds were ignored to increase fairness to Google and Stack Overflow, both of which favour returning results over relevance. The participants determined relevance based on a result's ability to help solve the queried exception. Based on previous research showing that developers only examine the first few results [22], we chose to investigate only the first five returned items.

During the second phase, the Google search was performed. Because Google web search results can depend on past search history, we performed all our searches on one computer that was not logged into Google and had all search cookies deleted. We constructed each Google query as we would when performing the same search ourselves, combining exception class names, parts of the exception message, and keywords describing the context. The results from these queries were saved, and each participant independently evaluated the relevance of the first five results; here we considered all returned results (as a developer may need to in practice). The third phase repeated the steps of the second phase against the Stack Overflow website, using the same queries created for the Google search, and we independently evaluated the relevance of the returned results.

E. Results

We averaged the relevance results from each participant. Figure 4 shows the differences in the proportion of relevant results among the first five for Dora, Google, and Stack Overflow. Dora returned only one result for one query; for all other queries five results were returned. The boxplot directly supports RQ-1: Dora improves the proportion of relevant results developers need to investigate for queries relating to specific programming errors, providing 55% more relevant results than a Google web search and 75% more than a Stack Overflow search. Dora's median relevance (for the first five results) was 0.47, while Google's was 0.30 and Stack Overflow's was 0.27. Google seemed to perform better than Stack Overflow by returning results from multiple help forums (in addition to Stack Overflow).

In terms of the index of the first relevant result, the data also supports RQ-2: the first relevant Dora result was usually ranked better than with both Google and Stack Overflow (this is true for 14 of the 18 queries); these results are shown in Figure 5.
E. Results

We averaged the relevance results from each participant. Figure 4 shows the differences in the proportion of relevant results among the first five for Dora, Google, and Stack Overflow. Dora returned only one result for one query; for all other queries five results were returned. The boxplot directly supports RQ-1: Dora improves the proportion of relevant results developers need to investigate for queries relating to specific programming errors. Dora provides 55% more relevant results than a Google web search and 75% more relevant results than a Stack Overflow search. Dora's median relevance (for the first five results) was 0.47, while Google's was 0.30 and Stack Overflow's was 0.27. Google seemed to perform better than Stack Overflow by returning results from multiple help forums (in addition to Stack Overflow). In terms of the index of the first relevant result, the data also supports RQ-2: the first relevant Dora result was usually ranked better than both Google's and Stack Overflow's (this is true for 14 of the 18 queries); these results are shown in Figure 5. The median index of the first relevant result for Dora was 1.0, while for Google it was 2.83 and for Stack Overflow it was 2.0. Interestingly, Stack Overflow performed better than Google here; this may be because the first two results from Google searches were often from the Microsoft MSDN Library and MSDN Social Forums web sites, which were deemed to be less relevant. Although MSDN Social Forums is an official place for community communication, it is not very popular and did not contain relevant information for any of the 18 queries.

We also evaluated the proportion of results in each of the three relevance categories. Dora returned at least one "good" result for 7 of the 18 queries; all 18 queries had a minimum of one "ok" result, with an average of 5.6 "ok" results. As shown in Figure 6, the proportion of results in each of the three categories is relatively constant between the queries from Section IV-B and Section V-C. Again, the results in the "poor" category are not shown to the developer in practice.

Fig. 5. Index of the first relevant result. The box spans the first and third quartiles; the heavy line marks the median. Lower values are better.

Fig. 6. Proportion of results found in each of the quality categories for both data sets. Dora filters the result list to display only "good" and "ok" results to the developer.

F. Assessing the validity of the predictions based on the model

To get a sense of the accuracy of the model, we validated our pre-determined weights (set in Section IV-B) with an independent set of queries and relevance assessments. We revisited the 18 queries performed with Dora in Section V-D and evaluated all of the results Dora returned (instead of just the top 5); this produced 462 search results. The three participants individually went through the results and assigned values in the same way they did before. There were no major disagreements in the relevance assessments among the participants; the assessments were averaged so that each of the 462 results had a single relevance value. Comparing the relevance values we assigned to those generated by our model, the mean absolute error was 0.37. This means that the score returned by our model was, on average, only 0.37 away from our manually identified scores, where scores were assigned a value between 1 and 5. The $R^2$ value, which measures goodness of fit, was 0.55; $R^2$ gives the proportion of variation among the assigned values that is explained by the characteristics of our model. The correlation coefficient $R$ between the values assigned by our model and by the participants was 0.74. From these results we conclude that although the model cannot predict the true values with perfect accuracy, the model's values and the manually assigned ones are connected by a statistically significant relationship.
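The three fit statistics in the preceding paragraph follow from standard definitions, so they can be recomputed from any set of paired scores. A plain-Python sketch; the example numbers are invented, and only the 1 to 5 scale comes from the paper:

```python
import math

def fit_statistics(model, human):
    """Mean absolute error, Pearson correlation R, and R^2 for paired scores."""
    n = len(model)
    mae = sum(abs(m - h) for m, h in zip(model, human)) / n
    mean_m = sum(model) / n
    mean_h = sum(human) / n
    cov = sum((m - mean_m) * (h - mean_h) for m, h in zip(model, human))
    var_m = sum((m - mean_m) ** 2 for m in model)
    var_h = sum((h - mean_h) ** 2 for h in human)
    r = cov / math.sqrt(var_m * var_h)
    return mae, r, r * r

# Hypothetical model scores vs. averaged human scores on the 1-5 scale.
model = [3.1, 4.0, 2.2, 4.8, 1.9]
human = [3.0, 4.5, 2.0, 5.0, 2.5]
mae, r, r2 = fit_statistics(model, human)
print(round(mae, 2), round(r, 2), round(r2, 2))
```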
VI. Discussion

In this section we discuss some threats to the validity of our approach and evaluation, note some limitations of our approach, and provide some recommendations for future work.

A. Threats to validity

In terms of internal validity, the personal experience of the experimental participants could have biased their selection of relevant results. Importantly, determining whether a result is relevant for a given task is a subjective measure. While the weighting we applied works to identify results we deem relevant, it is possible that a different rubric for determining relevance would result in different model weights. We also measured only the relevance of a result; we did not further investigate how the knowledge gained from an irrelevant result could be used. For example, if developers learn more about their problem by examining irrelevant results, focusing solely on relevance may not be the right approach.

In terms of external validity, our prototype tool was geared towards queries targeting the C# language from within the Visual Studio IDE. Alternate languages, particularly rarer languages like TXL or Alloy, would likely require support for a different set of search sites. We have also only examined exceptional circumstances; our approach may not be applicable to higher-level knowledge (e.g., design knowledge).

B. Tool limitations and future work

The greatest limitation of our tool is that we require site-specific parsing rules to analyze a given site's posts and determine how they fit our model. This means that for general-purpose queries, or for queries for which the programming language of the result is not important, our approach may not be general enough without querying a larger corpus of sites. In addition, our model may not translate to non-discussion-forum styles of resource (e.g., blogs). For instance, we identified the marked solved/answered characteristic of a post as the most useful in determining relevance; this characteristic does not translate well to blogs.

Further integration could enable new collaborative filtering possibilities (for instance, code being adopted from a specific response post could indicate an implicit acknowledgment of the value of the post). Further research could also be conducted into transferring knowledge gained from online resources directly into the developer's system; this seems particularly tractable for compilation errors, as the correct locality for the fix is well defined (e.g., where the error is identified by the compiler).

Finally, launching queries within the IDE enables us to generate a more contextually relevant query than a traditional keyword search. While we do not currently do this, a query could include logging or other feature-related information (for example, Query-Feature Graphs [23]). By experimenting with various levels of contextual information, Dora could start a search session with a very specific query and gradually relax the query until results are returned. In this way, Dora could automatically find the most descriptive query that returns results; this could potentially find results that are more contextually relevant than those of more general queries.
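The relaxation strategy sketched in the last paragraph amounts to a simple loop: try the most specific query first and drop contextual terms until some results come back. A hedged sketch of that idea; `search` stands for any function mapping a query string to a result list, and the term-ordering heuristic is our assumption:

```python
def find_descriptive_query(terms, search, min_results=1):
    """Return the most descriptive query (longest term prefix) that
    yields at least `min_results` results, together with those results.

    terms: query terms ordered from most to least essential; the least
    essential (contextual) terms are dropped first.
    """
    for cutoff in range(len(terms), 0, -1):
        query = " ".join(terms[:cutoff])
        results = search(query)
        if len(results) >= min_results:
            return query, results
    return None, []

# Hypothetical use with a site-search helper like those sketched earlier:
# query, hits = find_descriptive_query(
#     ["CS0103", "name does not exist", "WPF", "data binding"], my_search)
```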
VII. Conclusion

Software developers frequently encounter compilation errors and runtime exceptions as they work on their systems. One common source of help for understanding these errors and exceptions is archived developer conversations on the Internet. Unfortunately, finding relevant answers to specific programming problems online can be challenging due to the number of online resources that contain only questions or incorrect answers. We have built the Dora search tool to automatically identify helpful programming posts. Through a semi-controlled experiment, we found that Dora returned 55% more relevant results to developers than searching the web manually. We believe that Dora can help developers spend less time examining unhelpful online posts, enabling them to concentrate more effectively on their core development tasks.

ACKNOWLEDGMENTS

The authors would like to thank Olga Baysal for her assistance with WEKA and for providing valuable feedback on the work. We also thank the anonymous reviewers for their comments.
{"Source-Url": "https://cs.uwaterloo.ca/~okononen/vlhcc2012.pdf", "len_cl100k_base": 7575, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 25451, "total-output-tokens": 9443, "length": "2e12", "weborganizer": {"__label__adult": 0.0003056526184082031, "__label__art_design": 0.00021958351135253904, "__label__crime_law": 0.0002384185791015625, "__label__education_jobs": 0.0008878707885742188, "__label__entertainment": 3.814697265625e-05, "__label__fashion_beauty": 0.0001175999641418457, "__label__finance_business": 0.00017702579498291016, "__label__food_dining": 0.0002083778381347656, "__label__games": 0.00035762786865234375, "__label__hardware": 0.00046753883361816406, "__label__health": 0.0002157688140869141, "__label__history": 0.0001131892204284668, "__label__home_hobbies": 6.186962127685547e-05, "__label__industrial": 0.00015985965728759766, "__label__literature": 0.00015866756439208984, "__label__politics": 0.0001500844955444336, "__label__religion": 0.00028228759765625, "__label__science_tech": 0.001605987548828125, "__label__social_life": 7.706880569458008e-05, "__label__software": 0.005466461181640625, "__label__software_dev": 0.98828125, "__label__sports_fitness": 0.00019049644470214844, "__label__transportation": 0.0003020763397216797, "__label__travel": 0.00015532970428466797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43045, 0.01685]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43045, 0.26101]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43045, 0.93152]], "google_gemma-3-12b-it_contains_pii": [[0, 5050, false], [5050, 9349, null], [9349, 15521, null], [15521, 21513, null], [21513, 26726, null], [26726, 31525, null], [31525, 36068, null], [36068, 43045, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5050, true], [5050, 9349, null], [9349, 15521, null], [15521, 21513, null], [21513, 26726, null], [26726, 31525, null], [31525, 36068, null], [36068, 43045, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43045, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43045, null]], "pdf_page_numbers": [[0, 5050, 1], [5050, 9349, 2], [9349, 15521, 3], [15521, 21513, 4], [21513, 26726, 5], [26726, 31525, 6], [31525, 36068, 7], [36068, 43045, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43045, 0.05479]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
1be2abfc3fd802ece8130ba769ca132728e8e053
CS111 Jeopardy: The Home Version
The game that turns CS111 into CSfun11!
Spring 2018

This is intended to be a fun way to review some of the topics that will be on the CS111 final exam. These questions are not indicative of the style and difficulty of the questions that will be on the final.

QUESTIONS

---

List One-Liners

List One-Liners [1] This is an expression whose value is the number of elements in the list L.

List One-Liners [2] This is a one-line expression whose value is a list of all even numbers between 1 and 100 (inclusive).

List One-Liners [3] This is a one-line statement in the body of

```python
with open('essay.txt', 'r') as infile:
    # print the total number of words in the file.
```

List One-Liners [4] Given a list L of single-character digit strings, this is a one-line expression whose value is the integer that corresponds to concatenating the digits in reverse order. For example:

- if L is the list ['3', '4', '5'], the code should compute 543
- if L is the list ['5', '3', '7', '2'], the code should compute 2735

List One-Liners [5] Given a list L, this is a one-line statement that moves the last element of the list to the beginning of the list (shifting all other elements to the right). It doesn't create a new list; it mutates the original.

---

Iteration & Recursion

Iteration & Recursion [1] This is the definition of a function that satisfies the following contract:

```python
def sumLengths(listOfStrings):
    """Returns the sum of the lengths of all the strings
    in the given list of strings."""
```

Iteration & Recursion [2] This is the definition of a recursive function that satisfies the following contract:

```python
def sumRange(start, end):
    """Returns the sum of all integers in the range between the
    1st argument, inclusive, and the 2nd argument, exclusive."""
```

For example:

- `sumRange(5, 7)` returns 11
- `sumRange(-2, 4)` returns 3
- `sumRange(3, 3)` returns 0
- `sumRange(4, 3)` returns 0

Iteration & Recursion [3] Consider the following function.

```python
def mystery(x):
    print(x)
    if x < 5:
        mystery(3*x - 1)
    elif x > 5:
        mystery(x - 3)
```

This is printed by the invocation `mystery(10)`.

Iteration & Recursion [4] This is the definition of a recursive function that satisfies the following contract:

```python
def whichPowerOf2(n):
    """Returns the power for a given power of 2."""
```

For example:

- `whichPowerOf2(1)` returns 0 (because $2^0 = 1$)
- `whichPowerOf2(8)` returns 3 (because $2^3 = 8$)
- `whichPowerOf2(16)` returns 4 (because $2^4 = 16$)

Iteration & Recursion [5] These statements read the content of tweets retrieved from the Twitter API, and print this list of hashtags: ["ruhlman18", "purpleClass18", "wellesley", ...]. (The list may contain duplicates.)

```python
tweets = [
    {
        "source": "Come to PNE 249 for our Ruhlman! #ruhlman18",
        "entities": {
            "urls": [],
            "hashtags": ["ruhlman18"]
        }
    },
    {
        "source": "Congratulations to our seniors! #purpleClass18, #wellesley",
        "entities": {
            "urls": [],
            "hashtags": ["purpleClass18", "wellesley"]
        }
    }
]
```
---

Dictionaries & Sets

Dictionaries & Sets [1] This is the value of the expression `len(set('abracadabra'))`.

Dictionaries & Sets [2] These are all of the following types that cannot be keys in a dictionary:

- int
- bool
- string
- list
- tuple
- set
- dict

Dictionaries & Sets [3] These are all of the following boolean variables that are not necessarily True:

```python
bool1 = {'x':1, 'y':2, 'z':3} == dict([('x', 1), ('y', 2), ('z', 3)])
bool2 = {'x':1, 'y':2, 'z':3}.items() == [('x', 1), ('y', 2), ('z', 3)]
bool3 = set({'x':1, 'y':2, 'z':3}.items()) == {('x', 1), ('y', 2), ('z', 3)}
```

Dictionaries & Sets [4] This one variable has a value that is not equal to the others:

```python
d1 = {'a':3, 'b':2, 'c':1}
d2 = dict([('b', 2), ('c', 1), ('a', 3)])
d3 = {key: 'bacaba'.count(key) for key in 'cab'}
d4 = dict(zip('cba', [1, 2, 3]))
d5 = json.loads('{"b":2, "a":3, "c":1}')
d6 = {('a', 3), ('b', 2), ('c', 1)}
```

Dictionaries & Sets [5] This is a printed representation of the value returned by `mysteryDict('a bat ate an oval berry')` given the following definition:

```python
def mysteryDict(sentence):
    dct = {}
    for word in sentence.split():
        c = word[0]
        dct[c] = dct.get(c, 0) + len(word)
    return dct
```

---

Objects

Objects [1] A class declaration typically includes these entities, used to keep track of an object's state.

Objects [2] Consider the following classes:

```python
class Food:
    def info(self):
        print('Good to eat')

class Dessert(Food):
    def calories(self):
        print('Lots of calories')

class Cake(Dessert):
    def flavor(self):
        print('I like chocolate')
```

This is the number of user-defined methods that a `Cake` object has.

Objects [3] This will be printed by the following program.

```python
class A():
    def number(self):
        print(8)

class B(A):
    def number(self):
        A.number(self)
        print(9)

class C(B):
    def number(self):
        print(7)
        B.number(self)

C().number()  # C() invokes the default zero-argument constructor for a class
```

Objects [4] This will be printed by the following program.
```python
import math

class RightTriangle:
    def __init__(self, base, height):
        self.base, self.height = base, height

    def scale(self, factor):
        self.base, self.height = self.base*factor, self.height*factor

    def hypotenuse(self):
        return math.sqrt(self.base**2 + self.height**2)

    def perimeter(self):
        return self.base + self.height + self.hypotenuse()

tri = RightTriangle(6, 8)
tri.scale(0.5)
print(tri.perimeter())
```

Objects [5] These statements in a MinionYoga step method make its instances rotate by 90 degrees half the time:

```python
import random

class MinionCeiling(Minion):
    def step(self):
        # most of body omitted for space reasons
        else:
            self.minionLayer.move(self.deltax, -self.deltay)

class MinionYoga(MinionCeiling):
    '''Rotate the minion by 90 degrees 50% of the time'''
    def step(self):
        # FILL IN THE MISSING STATEMENTS
```

---

Bugs That Bite

Bugs That Bite [1] This is a bug in the following function definition:

```python
def compare(a, b):
    if a = b:
        return 'equal'
    else:
        return 'not equal'
```

Bugs That Bite [2] This is a bug in the following class definition:

```python
class Animal:
    def __init__(self, numLegs):
        numberOfLegs = numLegs

    def isBiped(self):
        return self.numberOfLegs == 2
```

Bugs That Bite [3] Recall that `random.randint(a, b)` returns a random integer i such that a <= i <= b. This is a bug in the following function definition:

```python
import random

def chooseRandom(aList):
    if len(aList) > 0:
        randomIndex = random.randint(0, len(aList))
        return aList[randomIndex]
```

Bugs That Bite [4] This is a bug in the following code:

```python
vowelDict = {}
for vowel in 'aeiou':
    vowelDict[vowel] = vowel.upper()
print(vowelDict['E'])
```

Bugs That Bite [5] The following definition of the `areAllPositive` function does not satisfy the contract specified in its comment. Show this by giving a sample input on which it returns an incorrect answer.

```python
def areAllPositive(numbers):
    '''Returns True if all elements in the list of numbers
    are positive, and False otherwise.'''
    if len(numbers) == 0:
        return True  # *correct* answer for the so-called "vacuously true" case
    else:
        for num in numbers:
            if num <= 0:
                return False
            else:
                return True
```

---

Potpourri

Potpourri [1] When the below program is executed, this will be printed.

```python
x = 3
y = 8

def f():
    x = 6
    y = 7

f()
print(x)
print(y)
```

Potpourri [2] This is the definition of a function `swap` that takes three arguments (a list `L` and two list indices `i` and `j`) and modifies `L` by swapping the contents of its slots at indices `i` and `j`.

Potpourri [3] Consider the function below:

```python
def appendages(L):
    if len(L) == 0:
        return []
    else:
        return L + appendages(L[1:])
```

This is the list returned by the invocation `appendages([1,2,3,4])`.

Potpourri [4] This is (1) the buggy expression and (2) the corrected expression in the following function definition:

```python
def frequenciesBuggy(strings):
    """Returns a dictionary mapping each string in the given list
    of strings to the number of times it appears in the list."""
    freqDict = {}
    for s in strings:
        freqDict[s] = 1 + freqDict[s]
    return freqDict
```

Potpourri [5] This is a function that satisfies the following contract:

```python
def countOfMostCommonCharacter(s):
    """Returns the number of times the most commonly occurring
    character in the string s occurs."""
```
For example:

- `countOfMostCommonCharacter('eerie')` returns 3
- `countOfMostCommonCharacter('Mississippi')` returns 4

ANSWERS

List One-Liners

List One-Liners [1] len(L)

List One-Liners [2] Here are some of many possible answers:

- [i for i in range(1, 101) if i%2==0]
- filter(lambda i: i%2==0, range(1,101))
- range(2,101)[::2]
- range(2,101,2)

List One-Liners [3] print(len(infile.read().split()))

List One-Liners [4] int(''.join(L)[::-1])

List One-Liners [5] L.insert(0, L.pop())

Iteration & Recursion

Iteration & Recursion [1] Here are some of many solutions:

Solution 1:

```python
def sumLengths(listOfStrings):
    sum = 0
    for s in listOfStrings:
        sum += len(s)
    return sum
```

Solution 2:

```python
def sumLengths(listOfStrings):
    return sum([len(s) for s in listOfStrings])
```

Solution 3:

```python
def sumLengths(listOfStrings):
    return sum(map(len, listOfStrings))
```

Iteration & Recursion [2]

```python
def sumRange(start, end):
    if start >= end:
        return 0
    else:
        return start + sumRange(start+1, end)
```

Iteration & Recursion [3]

10
7
4
11
8
5

Iteration & Recursion [4]

```python
def whichPowerOf2(n):
    if n == 1:
        return 0
    else:
        return 1 + whichPowerOf2(n/2)
```

Iteration & Recursion [5]

Solution 1:

```python
hashtags = []
for tweet in tweets:
    hashtags += tweet['entities']['hashtags']
print(hashtags)
```

Solution 2:

```python
hashtags = []
for tweet in tweets:
    hashtags.extend(tweet['entities']['hashtags'])
print(hashtags)
```

Dictionaries & Sets

Dictionaries & Sets [1] 5. set('abracadabra') is {'a', 'b', 'c', 'd', 'r'}, which has 5 elements.

Dictionaries & Sets [2] list, set, and dict. These are mutable object types that cannot be dictionary keys.

Dictionaries & Sets [3] bool2. Although bool2 could be True, it can also be False, because the ordering of pairs returned by .items() is unpredictable. bool1 and bool3 are necessarily True.

Dictionaries & Sets [4] d6. It is a set of tuples, not a dictionary. But dict(d6) would be a dictionary equal to the others.

Dictionaries & Sets [5] {'a': 6, 'b': 8, 'o': 4}. The order of the key/value pairs is arbitrary, as is the use of double or single quotes for the string keys.

Objects

Objects [1] instance variables, state variables, or data attributes. (Instance variables is standard across object-oriented programming languages; state variables is a more generic term that means the same thing in an object-oriented context. The term data attributes is specific to Python.)

Objects [2] 3 methods (info, calories, flavor). This does not include default object methods like __repr__, and does not include __init__ (which is used to create the instance, but not to operate on it after it has been created).

Objects [3]

7
8
9

(Why was 6 afraid of 7? Because 7 8 9!)

Objects [4] 12.0. The hypotenuse of a 3.0, 4.0 right triangle is 5.0, and 3.0 + 4.0 + 5.0 = 12.0. The result is necessarily a float, not an integer, both because of the multiplication by 0.5 (which returns a float) and the use of math.sqrt (which always returns a float).

Objects [5]

```python
def step(self):
    MinionCeiling.step(self)  # move up toward ceiling
    if random.randint(0, 1) == 0:
        self.minionLayer.rotate(90)
```

Bugs That Bite

Bugs That Bite [1] a = b should be a == b.

Bugs That Bite [2] In the __init__ method, numberOfLegs = numLegs assigns to the local variable numberOfLegs in the execution frame for the __init__ method, but does not create an instance variable in the new Animal instance. This can be fixed by changing this line to self.numberOfLegs = numLegs.

Bugs That Bite [3] random.randint(0, len(aList)) is inclusive on its second argument.
So in the case where `randomIndex` is `len(aList)`, the error `list index out of range` will be raised. The correct expression is `random.randint(0, len(aList)-1)`.

Bugs That Bite [4] `vowelDict['E']` raises `KeyError: 'E'` because 'E' is not a key in `vowelDict` (but 'e' is).

Bugs That Bite [5] `areAllPositive` is wrong because its return value is based only on the first element of the list. It returns an incorrect answer for any list with a positive first element and some later nonpositive element. For example, `areAllPositive([3, -2])` returns `True`.

Potpourri

Potpourri [1] The assignments to `x` and `y` in the body of the function `f` create local variables in the execution frame for `f` and do not change the values of the global variables `x` and `y`. So the answer is:

3
8

Potpourri [2]

Solution 1:

```python
def swap(L, i, j):
    ival = L[i]
    L[i] = L[j]
    L[j] = ival
```

Solution 2:

```python
def swap(L, i, j):
    ival, jval = L[i], L[j]  # simultaneous assignment
    L[i] = jval
    L[j] = ival
```

Solution 3:

```python
def swap(L, i, j):
    L[j], L[i] = L[i], L[j]  # simultaneous assignment to list slots!
```

Potpourri [3] [1, 2, 3, 4, 2, 3, 4, 3, 4, 4]

Potpourri [4]

1. `1 + freqDict[s]` is the buggy expression (when the string `s` is not yet in `freqDict`, there is a KeyError).
2. `1 + freqDict.get(s, 0)` is the corrected expression (it evaluates to 1 when `s` is not yet in `freqDict`).

Potpourri [5]

Solution 1:

```python
def countOfMostCommonCharacter(s):
    countDict = {}
    for ch in s:
        if ch not in countDict:
            countDict[ch] = 1
        else:
            countDict[ch] += 1
    return max(countDict.values())
```

Solution 2:

```python
def countOfMostCommonCharacter(s):
    countDict = {}
    for ch in s:
        countDict[ch] = countDict.get(ch, 0) + 1
    return max(countDict.values())
```

Solution 3:

```python
def countOfMostCommonCharacter(s):
    maxCount = 0
    for ch in s:
        maxCount = max(maxCount, s.count(ch))
    return maxCount
```

Solution 4:

```python
def countOfMostCommonCharacter(s):
    return max([s.count(ch) for ch in s])
```

Solution 5:

```python
def countOfMostCommonCharacter(s):
    return max(map(lambda ch: s.count(ch), s))
```
{"Source-Url": "http://cs111.wellesley.edu/content/review/files/cs111-jeopardy.pdf", "len_cl100k_base": 4420, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 23404, "total-output-tokens": 5225, "length": "2e12", "weborganizer": {"__label__adult": 0.0009169578552246094, "__label__art_design": 0.0008678436279296875, "__label__crime_law": 0.000598907470703125, "__label__education_jobs": 0.041656494140625, "__label__entertainment": 0.0004153251647949219, "__label__fashion_beauty": 0.0003848075866699219, "__label__finance_business": 0.0002722740173339844, "__label__food_dining": 0.0013103485107421875, "__label__games": 0.007904052734375, "__label__hardware": 0.0016021728515625, "__label__health": 0.0007052421569824219, "__label__history": 0.0007181167602539062, "__label__home_hobbies": 0.0003440380096435547, "__label__industrial": 0.0007405281066894531, "__label__literature": 0.00136566162109375, "__label__politics": 0.0004498958587646485, "__label__religion": 0.0010223388671875, "__label__science_tech": 0.014739990234375, "__label__social_life": 0.0004973411560058594, "__label__software": 0.0163116455078125, "__label__software_dev": 0.90478515625, "__label__sports_fitness": 0.0011377334594726562, "__label__transportation": 0.0006604194641113281, "__label__travel": 0.0004730224609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15049, 0.04316]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15049, 0.50976]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15049, 0.70612]], "google_gemma-3-12b-it_contains_pii": [[0, 1564, false], [1564, 3255, null], [3255, 5250, null], [5250, 6541, null], [6541, 8036, null], [8036, 9296, null], [9296, 10248, null], [10248, 11888, null], [11888, 13800, null], [13800, 15049, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1564, true], [1564, 3255, null], [3255, 5250, null], [5250, 6541, null], [6541, 8036, null], [8036, 9296, null], [9296, 10248, null], [10248, 11888, null], [11888, 13800, null], [13800, 15049, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 15049, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15049, null]], "pdf_page_numbers": [[0, 1564, 1], [1564, 3255, 2], [3255, 5250, 3], [5250, 6541, 4], [6541, 8036, 5], [8036, 9296, 6], [9296, 10248, 7], [10248, 11888, 8], [11888, 13800, 9], [13800, 15049, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15049, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
adcf708a65a71ae9e9a2a447d124081401c08642
Usability as a Key Quality Characteristic for Developing CAAD Tools and Environments

Burak Pak¹, Johan Verbeke²
¹,² Sint-Lucas School of Architecture, Faculty of Architecture and Arts, Association KU Leuven, Belgium
¹ http://blog.associatie.kuleuven.be/burakpak
¹ burak.pak@architectuur.sintlucas.wenk.be, ² johan.verbeke@architectuur.sintlucas.wenk.be

Abstract. In this paper, we stress the importance of usability as a key quality characteristic for Computer Aided Architectural Design (CAAD) software prototypes. We claim that usability evaluation practices can assist the integration of human factors and the accommodation of local differences. These practices are not limited to interface tests; they can also provide valuable information on the possible added values of CAAD software prototypes, increase overall product quality and thus contribute to the sustainable development of the CAAD research field. In this context, we aim to initiate a constructive discussion on this topic by reviewing various usability frameworks and highlighting possible opportunities and challenges of applicable evaluation methods. Following this discussion, we elaborate on our recent findings relating to the reliability and effectiveness of particular evaluation methods applied to a web-based geographic virtual environment prototype. In conclusion, we introduce a new "design usability" framework suitable for CAAD software development, which suggests a variety of design usability quality characteristics, cost-effective evaluation methods and possible influence factors in the evaluation process.

Keywords. Usability; Quality in Use; Evaluation; CAAD Software Development; Human Factors.

INTRODUCTION

Fifteen years ago, in his seminal paper "CAAD's Seven Deadly Sins", Professor Tom Maver (1995) presented a critical view of the direction of research and development in CAAD. He introduced seven topics of criticism: overestimating the short-term impacts and underestimating the longer-term impacts (macro-myopia), re-visiting ideas (déjà vu), absence of a core research discipline (xenophilia), discarding fitness-for-purpose, cost-effectiveness and environmental sustainability (unsustainability), generation of hypotheses without rudimentary testing (failure to validate), failure to criticize and, finally, insufficient investigation of usability and functionality in teaching or practice (failure to evaluate).

Since then, CAAD research has made significant progress. The last decade has witnessed widespread use of scripting languages, open source software and application programming interfaces, followed by an admirable number of researchers who developed, shared, tested and implemented new CAAD tools. Quite often, these tools have been evaluated and presented together with novel design products in the form of case studies, as "proofs of concept". These studies are essential because they demonstrate the utility of the tools by offering visible evidence of the integration of experimental CAAD research into design practices and/or the profession. Thus, they inspire future studies. On the other hand, the evaluation of usability still remains a difficult and ill-defined topic, especially when it comes to experimental CAAD tools and design environments.
USABILITY AND QUALITY IN USE CHARACTERISTICS IN VARIOUS QUALITY FRAMEWORKS AND MODELS

In this section, we critically review and compare definitions of usability characteristics in various frameworks, with the purpose of initiating a discussion on the applicability of these definitions to the CAAD field. Our comprehensive literature study on usability points to a proliferation of definitions and methods proposed for software usability evaluation. Authorities such as the International Standards Organization (ISO) have issued more than 50 standards related to software usability and Human Computer Interaction (HCI) (Bevan, 2006). Various other descriptive theoretical usability frameworks have also been suggested by Nielsen and Mack (1994), Norman (2004), Davis et al. (1989) and Venkatesh et al. (2003).

To begin with, HCI-related ISO standards are primarily shaped around four topics: quality in use, product quality, process quality and organizational capability. Concerning the usability evaluation of CAAD software prototypes, "quality in use" stands out as the most relevant quality characteristic described in the ISO/IEC 25000 Series (2006). In this framework (Figure 1), quality in use is defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO/IEC 25062, 2006).

Figure 1. A concept map (created by the authors) illustrating the Quality in Use Characteristics in the ISO/IEC 25000 Series Software Product Quality Requirements and Evaluation Standards.

Besides the international standards organizations, Jakob Nielsen is a leading figure who has produced a significant number of publications on usability evaluation and engineering. In 1994, Nielsen and Mack proposed "a framework of system acceptability" to describe a complete set of quality characteristics (Figure 2). They defined five usability attributes: learnability, use efficiency, memorability, freedom from errors and subjective pleasantness. The "system acceptability" in Nielsen and Mack's (1994) framework can be compared to "quality in the software life cycle" (ISO 25000 Series, 2006), whereas "practical acceptability" and "utility characteristics" cover topics similar to "product quality" in the ISO 25000 Series. Overall, it would not be wrong to state that these two frameworks are complementary in their interpretation of usability.

In Nielsen and Mack's framework, subjective pleasantness refers directly to user attitudes and satisfaction. It is suggested to be measured by post-task satisfaction surveys. The usability attribute efficiency is similar to productivity in the ISO 25000 Series and is described as "the time required to accomplish the designated tasks", whereas memorability refers to the efficiency of casual users who have been away from the system for a specified amount of time (Table 1). Learnability is related to the time needed for the users to perform the required number of tasks (the required number of tasks is a threshold value determined by the evaluators). Nielsen and Mack's (1994) final attribute, error, is defined as any action that does not reach the desired goal. The system's error rate is calculated by counting the number of such actions made by users while performing the designated tasks. There are various other usability frameworks, not directly comparable, that introduce different quality characteristics with different scopes.
For instance, in the User Experience Model (UX), Norman (2004) asserts emotions and aesthetics as separate (but dependent) quality characteristics besides utility and usability. On the other hand, the "Technology Acceptance Model" by Davis et al. (1989) suggests perceived usefulness as a primary indicator of system usability. "System usability" in this framework refers to acceptance and use; it is interpreted as a behavior that is predicted rather than observed. In the extended version of this theory, the "Unified Theory of Acceptance and Use of Technology", Venkatesh et al. (2003) added performance expectancy, effort expectancy, social influence, facilitating conditions, voluntariness of use, gender, age and experience to these criteria (Figure 3). This theory is important because it stresses the possible influences of social, individual and facilitating factors on system usability. This is in line with the ACCOLADE project (2001), where it was found that social and behavioral factors are utterly important when using software for collaborative action.

Figure 2. The framework of system acceptability quality characteristics (Nielsen and Mack, 1994).

<table>
<thead>
<tr>
<th>ISO 25000 Series "Quality in Use Characteristics"</th>
<th>Nielsen and Mack's (1994) framework "Usability Attributes"</th>
</tr>
</thead>
<tbody>
<tr>
<td>User Satisfaction</td>
<td>Subjective Pleasantness</td>
</tr>
<tr>
<td>Efficiency</td>
<td>Efficiency of use</td>
</tr>
<tr>
<td>No direct equivalence</td>
<td>Memorability (the change in efficiency after casual users have been away from the system for some time)</td>
</tr>
<tr>
<td>Effectiveness (task completion rates by time, freedom from errors, learnability, number of assists…)</td>
<td>~Learnability, ~Freedom from errors</td>
</tr>
</tbody>
</table>

Table 1. Comparison of the ISO 25000 Series quality in use characteristics and Nielsen and Mack's (1994) usability attributes.

Based on the theories and models on usability and quality in use characteristics reviewed above, it is possible to construct specific measures and metrics applicable in CAAD software development and to evaluate them using a variety of methods. In the following topic, we discuss the opportunities and challenges of these methods.

USABILITY OF USABILITY EVALUATION METHODS FOR THE DEVELOPMENT OF CAAD TOOLS AND ENVIRONMENTS: OPPORTUNITIES AND CHALLENGES

A comprehensive literature review reveals that there is a variety of usability evaluation methods. Each of these methods provides valuable opportunities but also presents challenges for usability evaluation (Table 2). Moreover, some of these methods may be considered more suitable for the early stages of CAAD software development (especially in the academic field), as they are relatively easy to conduct, efficient, effective and reliable, and thus satisfactory.

To begin with, task observation is an effective method for evaluating how well the software enables users to accomplish a number of tasks. In this method, the evaluators choose around ten vital tasks to be completed by representative users. These tasks are then given to the users in a preferably controlled space (such as a fixed laboratory or a conference room), and the users are observed by the evaluator and/or a video camera. The evaluator times and records the specific indicators either during the test or after it. This method can be performed with a limited number of participants.
In an experimental study, Lewis (1994) found that only eight evaluators are sufficient to detect 95% of the problems, given an individual detection rate of 0.45. In this context, task observation is an easy-to-use method for evaluating user and software performance, observing how the interactions relate to the relevant tasks, and prioritizing possible functionalities.

Logging users' interactions is another effective usability evaluation method. The strength of this method comes from the fact that it can be applied to a large number of actual users (although analyzing the logs may take some time). In this sense, through logging, it is possible to find usability issues which cannot be revealed through observation. Moreover, use logs are valuable sources, especially when combined with task observations and other collected data.

Figure 3. Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003), based on (Davis et al., 1989).

Questionnaires and surveys can be utilized for various purposes related to usability (ISO/IEC 25062, 2006). A common practice is user satisfaction assessment. In the last thirty years, numerous user satisfaction questionnaires have been developed and tested by established researchers. Most of these questionnaires are well documented and publicly available; therefore, conducting such studies is not very difficult. In addition, questionnaires are efficient tools for collecting information on user characteristics, which is essential for profiling the users and determining the possible influence factors. Furthermore, questionnaires can be conducted online, saving plenty of resources and making this method even more cost-effective.

Interviewing is another beneficial method, often used as a follow-up measurement tool in combination with other methods (Shuy et al., 2001). When performed rigorously, interviews are useful for collecting information on users' experiences and ideas. In particular, follow-up interviews are highly complementary to task observation and questionnaire methods. However, this method can be very time-consuming, especially when conducted with a high number of users.

Focus group studies are moderated roundtable discussions conducted with carefully selected participants to collect information on their ideas and experiences. They are useful for obtaining various perspectives on use case scenarios, functionalities and the design of the interfaces. In this sense, focus group discussions are valuable in the exploratory stages of CAAD software development, but leading the group in an efficient way and keeping the discussion on track, or "focused", is a challenging task. Through these methods, it is possible to obtain high quality feedback in a limited period of time.
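The sample-size figure attributed to Lewis (1994) above rests on the standard problem-discovery model, in which n independent evaluators, each detecting a given problem with probability p, are expected to uncover a fraction 1 - (1 - p)^n of all problems. A small sketch of that formula, taking the cited individual detection rate of 0.45 as input:

```python
def discovery_rate(p, n):
    """Expected fraction of usability problems found by n evaluators,
    each detecting any given problem independently with probability p."""
    return 1 - (1 - p) ** n

# Discovery climbs quickly with each added evaluator at p = 0.45.
for n in range(1, 9):
    print(n, round(discovery_rate(0.45, n), 3))
```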
<table>
<thead>
<tr>
<th>Method</th>
<th>Opportunities</th>
<th>Challenges</th>
</tr>
</thead>
<tbody>
<tr>
<td>Task Observation</td>
<td>Evaluates the extent to which the software facilitates users to accomplish their tasks</td>
<td>Sensitive to user profiles; requires careful setup and task definition</td>
</tr>
<tr>
<td>Logging use</td>
<td>Provides a valuable data record of the real use and interfaces; powerful when combined with the content</td>
<td>Possible decrease in efficiency; issues related to privacy may arise in certain cases</td>
</tr>
<tr>
<td>Questionnaires</td>
<td>Measure users' attitude towards the software</td>
<td>Bias should be minimized in the design</td>
</tr>
<tr>
<td>Interviews</td>
<td>Powerful as a follow-up tool; valuable for understanding user experiences</td>
<td>Provide incomparable information; possible interference with the outcomes</td>
</tr>
<tr>
<td>Focus groups</td>
<td>Obtaining various perspectives; valuable in the exploratory stages</td>
<td>Require efficient moderation; hard to keep a focus</td>
</tr>
<tr>
<td>Ethnography</td>
<td>Allows long-term information collection in a natural setting</td>
<td>Time- and resource-consuming; high observer bias involved</td>
</tr>
<tr>
<td>Cultural Probes</td>
<td>Reveal cultural aspects and values</td>
<td>Based on self-reports; difficult to interpret</td>
</tr>
<tr>
<td>Think-aloud method</td>
<td>Reveals users' thinking processes; misconceptions can be identified; provides comparable data</td>
<td>Negative effects on efficiency and performance; takes a long time to analyze</td>
</tr>
<tr>
<td>Eye Tracking</td>
<td>Provides objective, accurate, visual and time-based data on interface use</td>
<td>It is difficult to infer causality between the collected data and usability problems</td>
</tr>
<tr>
<td>Model Based Evaluation (KLM/GOMS)</td>
<td>Can be used in the early development phase, for formative and summative evaluation</td>
<td>Based on an expert user model, not real users; predicts mostly quantitative measures</td>
</tr>
<tr>
<td>Inspection: Heuristic Evaluation and Walkthroughs</td>
<td>When applied prior to other methods, can reduce the number of user errors</td>
<td>Fails to include real users; relies on expert knowledge; can possibly hinder innovation and creativity</td>
</tr>
</tbody>
</table>

Table 2. Opportunities and challenges of usability evaluation methods; based on (Nielsen and Mack, 1994), (ISO/IEC 25062, 2006), (Shneiderman and Plaisant, 2005) and the Usability Body of Knowledge [1].

In contrast, we can reference various other methods that are valuable but not so cost-effective in early CAAD software prototype development. Among these are ethnography and cultural probes, which require long-term commitment from the users (Gaver et al., 1999). Overall, these two methods give high-quality results, but they are suggested only when the CAAD software developers have enough time and resources to execute them. Similarly, think-aloud is a reliable but not very cost-effective research method. It can provide critical insight into users' thinking processes (Ericsson and Simon, 1993) and help evaluators identify misconceptions. On the other hand, it takes a lot of effort and time to run a pilot study, design the experiment and build a coding scheme, conduct the real experiment, transcribe, segment and codify the verbalizations, and perform statistical analysis.
Eye tracking is another valuable usability evaluation method that provides accurate and time-based data on interface use (Nielsen and Pernice, 2010). However, it is difficult to infer causality between the data collected from the experiments and usability problems. Furthermore, sophisticated equipment, software and training are required to conduct eye tracking experiments, which makes this method expensive. In the future, eye tracking is expected to become more affordable and accessible as eye tracking equipment and technologies develop further, which may increase the cost-effectiveness of this method.

Goals, Operators, Methods, and Selection rules (GOMS) and Keystroke Level Modeling (KLM) are predictive methods based on the human information processor model of human-computer interaction (Card et al., 1983). Relying on expert user models (not real users), these methods estimate metrics such as the time required to learn the system and to execute specific tasks. In this context, they can be highly useful for developing interfaces for users (such as pilots) who meet certain physical and mental requirements and follow guidelines to execute defined tasks. In contrast, the profiles of architectural designers are highly heterogeneous, which makes their performance difficult to predict.

As a final topic to be reviewed, Heuristic Evaluation is a method that relies heavily on "inspector" knowledge. This is a questionable approach to the evaluation of CAAD software prototypes because it insulates the users from the development process and replaces them with highly normative heuristic guidelines which are open to discussion. In this sense, the use of heuristic guidelines and creative design may be contradictory, because such guidelines limit the potentials and possibilities (Burmester and Machate, 2003). Another well-known negative aspect of this method is that it is costly to employ usability experts.

In conclusion, each usability evaluation method has different flaws and limitations, and each detects different usability problems. The best practice is to combine various evaluation methodologies (ISO/IEC 25062, 2006). Task observation, questionnaires, interviews, focus groups and logging use are efficient, effective and reliable, and thus satisfactory, methods for the early development of CAAD software, even at the concept generation stage (Pak and Verbeke, 2011). These methods can easily be customized to fit the problem area, depending on the characteristics of the software environment.

CASE STUDY: FOCUS GROUPS, QUESTIONNAIRES AND USE LOGGING AS USABILITY EVALUATION METHODS IN THE CAAD CONTEXT

In this section, we briefly discuss the effectiveness and reliability of various usability methods based on the usability studies that we recently conducted for the evaluation of a web-based geographic virtual environment prototype, primarily developed in the framework of a long-term research project (Pak and Verbeke, 2011). For evaluating the usability of this virtual environment prototype, we arranged focus group meetings and employed questionnaires for collecting user characteristics and determining user satisfaction levels and attitudes towards the system (task observation studies are in progress). Moreover, we logged the users' activities for further assessment. Focus group meetings significantly contributed to the early design development of our prototype.
During the two meetings conducted with two different groups of experts, we gained a clear insight into their expectations. Besides being established experts, the participants were also possible future users of the virtual environment that we are developing. In this sense, we were able to conceptualize numerous novel and critical ideas about possible improvements and new features. We believe that the characteristics of the participants played a positive role in these meetings. Moreover, these meetings were also useful for promotion and social networking.

In addition to the focus group meetings, we employed two types of questionnaires for usability evaluation: user attitude and user satisfaction. The user attitude questionnaire included eleven Likert-scale questions related to the goals of our study, an open-ended question for comments and three questions related to the computer and language skills of the students. The user satisfaction questionnaire was a standard After-Scenario Questionnaire (ASQ) developed by Lewis (1991). Both were answered online by 25 students who used the web-based geographic virtual environment prototype for eight weeks in an international design studio context. Overall, both questionnaires were valuable tools for gathering user feedback in a formal and holistic manner. Furthermore, through the open-ended questions we were able to receive constructive criticism. The responses to these questionnaires effectively illustrated the (positive) attitude towards our virtual environment prototype and the (high) user satisfaction levels.

In order to test the reliability of these questionnaires, we presented the same questions to the same students four months later, in print format. The comparative analysis indicates a high level of correlation between the two measurements (Figure 4). These results can be considered suggestive evidence for the reliability of the questionnaires as evaluation tools in the CAAD context.

Logging system use was an effective method for the usability evaluation of the virtual environment prototype. We were able to record various user activities and compare them with the results from other measurement methods. For instance, 87% of the users in the attitude questionnaire strongly, mostly or somewhat agreed that they had actively collaborated with each other. This result was consistent with the use logs, which indicated that 79% of the activities were collaborative (this study is also available in the proceedings book as an individual publication by the authors). Moreover, we detected strong correlations (r > .71 with 95% confidence) between user computer skills, language skills, design studio grades and satisfaction levels, which suggests that the individual characteristics of the users may influence the usability evaluation process.

A COMPREHENSIVE FRAMEWORK OF DESIGN USABILITY

Our observations and the comprehensive background study reported in the previous topics illustrate that usability, especially in the CAAD field, cannot be reduced to a "software quality". The individual characteristics of the designers and the design context should also be taken into account as major influence factors. With this motivation, we offer a novel framework for an extended understanding of usability in design (Figure 5).
In this framework, we describe design usability as a multivariate quality emerging from the interactions between the designer, the CAAD software and the design context ("designer" in this sense does not necessarily mean an architect; the definition can include all actors involved in the design process). The proposed design usability characteristics include (but are not limited to): designer satisfaction, effectiveness, efficiency, freedom from errors, learnability, memorability, use sustainability and sociability. The first six characteristics are derived from the ISO Standards (2006) and the framework of system acceptability quality characteristics by Nielsen and Mack (1994), whereas "use sustainability" and "sociability" are added as possible attributes of next-generation CAAD software. Use sustainability refers to the potential of the CAAD software for the long-term maintenance of design usability for a defined group of designers in a specified design context. Sociability can be described as the quality of the options offered by the CAAD software that allow designers to share their design progress through social networks and receive feedback.

CONCLUSIONS AND FUTURE DIRECTIONS

In this paper, we initiated a constructive discussion on design usability evaluation by reviewing various definitions of usability and highlighting possible opportunities and challenges of CAAD-applicable evaluation methods. Moreover, we reported our recent findings relating to the reliability and effectiveness of particular evaluation methods. As a result of this discussion, we introduced a new design usability framework suitable for CAAD software prototype development, which suggests a variety of design usability quality characteristics and possible influence factors in the evaluation process.

The framework proposed in the previous section represents design usability as a multivariate quality emerging from the interactions between the designer, the CAAD software and the design context. We claim that design should be a sustainable and social practice, and so should the CAAD software and its usability evaluation criteria. In this sense, the proposed framework includes two new and essential concepts, "use sustainability" and "sociability". With the rapid evolution of information and communication technologies, "use sustainability" is a critical issue to be addressed. Any software or script can ultimately become unmaintainable, inefficient, unreliable and dysfunctional in a relatively short time period (one or two years). Ensuring use sustainability is the responsibility of the developers as well as the design actors. Moreover, in the age of social media, "sociability" emerges as an essential quality for future software. There are unlimited opportunities for promoting conversation and interaction between the parties involved in design processes. Software designers can make use of social networking tools and strategies to ensure the sociability of CAAD software. We suggest the proposed framework and the suggested evaluation methods as a basis for conducting cost-effective, easy-to-set-up and reliable usability tests, which can improve the value of software products by accommodating local differences and integrating human factors.
In our usability study (presented in the preceding section), we observed that usability evaluation is not limited to interface testing; it can also provide valuable information on the possible added values of CAAD software prototypes and increase overall product quality. As a future recommendation, considering the complex and time-consuming nature of CAAD software prototype development and evaluation, we propose to employ open-source strategies and tools to enhance the sustainability of CAAD software. As an example, a web-based medium (or the "CUMINCAD of source code") could be created for sharing and collaboratively developing software prototypes under a Creative Commons license. This environment may also be designed to provide web tools for "community-based" or "crowdsourced" usability testing, which can assist the sustainable development of the CAAD research field and enhance its sociability.

ACKNOWLEDGEMENTS

This study is partially based on a post-doctoral research project supported by a grant from the Brussels Capital Regional Government, Institute for Research and Innovation, to dr. Burak Pak, promoted by Prof. dr. Johan Verbeke.

REFERENCES

Pak, B, Verbeke, J 2010, 'A Virtual Environment Model for Analysis and Evaluation of Future Develop-
{"Source-Url": "https://cumincad.architexturez.net/system/files/pdf/ecaade2011_110.content.pdf", "len_cl100k_base": 5453, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 29523, "total-output-tokens": 6317, "length": "2e12", "weborganizer": {"__label__adult": 0.000885009765625, "__label__art_design": 0.05108642578125, "__label__crime_law": 0.0008234977722167969, "__label__education_jobs": 0.01528167724609375, "__label__entertainment": 0.0003247261047363281, "__label__fashion_beauty": 0.0005450248718261719, "__label__finance_business": 0.000774383544921875, "__label__food_dining": 0.000827789306640625, "__label__games": 0.0015697479248046875, "__label__hardware": 0.001972198486328125, "__label__health": 0.0014944076538085938, "__label__history": 0.0014972686767578125, "__label__home_hobbies": 0.0003268718719482422, "__label__industrial": 0.0010576248168945312, "__label__literature": 0.0018291473388671875, "__label__politics": 0.0004668235778808594, "__label__religion": 0.0013751983642578125, "__label__science_tech": 0.1202392578125, "__label__social_life": 0.0002503395080566406, "__label__software": 0.029571533203125, "__label__software_dev": 0.765625, "__label__sports_fitness": 0.0005369186401367188, "__label__transportation": 0.0010385513305664062, "__label__travel": 0.0005373954772949219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30708, 0.01955]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30708, 0.39892]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30708, 0.91712]], "google_gemma-3-12b-it_contains_pii": [[0, 2459, false], [2459, 5410, null], [5410, 9021, null], [9021, 11517, null], [11517, 16384, null], [16384, 20288, null], [20288, 24382, null], [24382, 27691, null], [27691, 28812, null], [28812, 30708, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2459, true], [2459, 5410, null], [5410, 9021, null], [9021, 11517, null], [11517, 16384, null], [16384, 20288, null], [20288, 24382, null], [24382, 27691, null], [27691, 28812, null], [28812, 30708, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30708, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30708, null]], "pdf_page_numbers": [[0, 2459, 1], [2459, 5410, 2], [5410, 9021, 3], [9021, 11517, 4], [11517, 16384, 5], [16384, 20288, 6], [20288, 24382, 7], [24382, 27691, 8], [27691, 28812, 9], [28812, 30708, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30708, 0.18803]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
5fab19ab105475bf126bbb73859855777173e18b
Comparison of Pair and Solo Programming through Software Metrics in University Students' Projects

Ramón Ventura Roque Hernández, Universidad Autónoma de Tamaulipas, México. rvHernandez@uat.edu.mx, https://orcid.org/0000-0001-9727-2608
Jesús Cárdenas Domínguez, Universidad Autónoma de Tamaulipas, México. jesus.cardenas.d@gmail.com, https://orcid.org/0000-0001-9962-796X
Adán López Mendoza, Universidad Autónoma de Tamaulipas, México. aLopez@uat.edu.mx, https://orcid.org/0000-0003-4801-640X
Juan Antonio Herrera Izaguirre, Universidad Autónoma de Tamaulipas, México. jaHerrera@uat.edu.mx, https://orcid.org/0000-0002-4088-7772
Carlos Manuel Juárez Ibarra, Universidad Autónoma de Tamaulipas, México. cJuarez@docentes.uat.edu.mx, https://orcid.org/0000-0003-4310-8938

**Abstract**

Introduction: Pair programming is an agile practice that can be used both in business software development and in the teaching of programming in university courses. Objective: This paper presents research conducted to compare pair programming and solo programming in university courses from the perspective of the metrics of the programs created by freshmen enrolled in a Bachelor's Degree in Information Technologies. Method: The participants were divided into two groups: those who applied pair programming and those who programmed individually. Both developed the same program under the same conditions. The following metrics were analyzed in their programs: number of statements, percentage of comments, maximum depth, average depth, maximum complexity, number of methods per class, number of calls per method and number of statements per method. The values of the metrics were obtained with the SourceMonitor software. Then, Mann-Whitney tests were performed in SPSS. Results: Students who worked in pairs wrote code with more statements (p=0.038, U=17.00) and a higher level of depth (p=0.032, U=18.00) than solo programmers. Conclusions: This paper contributes to the teaching of software development by providing quantitative empirical evidence on the effectiveness of pair programming. It is concluded that pair programming can be an appropriate educational approach for beginner university software development courses.

Keywords: Computer Programming, Software, Higher Education, Measurement.
**Introduction**

In the early days of software engineering, methodologies emerged that brought order to the development process, but they were neither flexible nor adaptable to the needs of increasingly demanding users. They were unsuitable for environments with changing requirements and a high priority on quality and delivery time. Evolutionary, incremental and agile methodologies emerged later, with simplified, people-centered visions aimed at quickly obtaining good-quality programs that satisfy users' requirements.

Extreme Programming (XP) is an agile approach to building software that proposes a guiding development model. XP eliminates the need to spend time on tedious and rigorous tasks, such as the creation and extensive revision of documents and the handling of huge requirements lists. Among the practices of XP, pair programming stands out: two programmers always share the same computer, with defined and alternating roles. Opinions about this approach are divided: some authors have found positive results and recommend it, while other researchers prefer other ways of working.
It has been found that perceptions of the effectiveness of pair programming vary with the amount of time programmers have worked with it. For example, those with more experience working in pairs are convinced that the technique helps to reduce costs, while those who have used it less perceive the opposite (Sun, Marakas, & Aguirre-Urreta, 2016). In the university setting, previous research suggests that an agile approach can be useful in university courses and that working in pairs could benefit students. However, existing studies have not led to conclusive results. If pair programming is applied in beginner university programming courses, in what way will it be useful? How will it affect the programs developed by the students?

The objective of this research was to compare pair programming and solo programming in a beginner university programming course. For this comparison, the metrics of the projects developed by freshmen using both work methods were taken into account. The paper is organized as follows: the next section presents the background on software and its development, with emphasis on agility, Extreme Programming and pair programming. Then, the materials and methods used to carry out the investigation are explained. The results and their discussion are presented afterwards. Finally, the conclusions and possibilities for future work are explored.

**Background**

**Software Development**

Software is a basic component of computer systems; it includes the programs and data that make the hardware work. According to Forouzan (2003), software is divided into two categories: system and application. System software allows the computer to efficiently manage resources such as memory, storage and the processor. Application software provides features to perform a concrete task oriented to directly helping the final user. Regardless of the type of software in question, creating it means developing a program from instructions, or statements, written in a programming language; their purpose is to tell the hardware what to do (Sánchez-Montoya, 1995). The set of instructions written by the programmer is called source code. Software development means much more than writing these instructions; it also includes the participation of the work team in the different activities of program creation and the management of the process itself.

The software crisis is the phenomenon whose main characteristic is the failure of software development projects due to exceeded budgets, unmet requirements and deadlines, and work teams' lack of skills. Software development is risky and hard to control due to the multiple factors that intervene in the process. Brooks (1987) acknowledges that complexity is part of the essence of software, not an accident. In recent years, methodologies have evolved to ensure better process control, with the aim of solving the crisis described above. Software Engineering is the discipline that covers the processes, methods and tools used to produce computer programs. Thanks to this field of study, the activities of the work team can be efficiently organized, and repeatable approaches can be applied to ensure the quality of the development process and of the final software.
**Agility in software development**

Agility is a combination of philosophy and development guidelines (Pressman, 2014) in which change is accepted and perceived as a regular phenomenon, so it is possible to continuously provide an adequate response to it. The agile philosophy is specified in the Agile Manifesto for software development (Agile Alliance, 2018), where fast software delivery has a higher priority than documentation and people have greater value than processes. There are different agile approaches, and each one accentuates the philosophies of the manifesto to a greater or lesser degree; however, all of them offer different ways to achieve the same objective. Some of the agile methods emphasized by Martin (2011) are: Extreme Programming (XP), Adaptive Development, Scrum, the Dynamic Systems Development Method, and Crystal.

**Extreme Programming: an agile approach**

Extreme Programming is an agile approach to developing software (Fowler, 2018) that includes twelve practices aimed at obtaining working software in the shortest time. It is focused on the people who produce and use the software (Beck & Andres, 2004). One of its main advantages is that it reduces the cost of implementing changes during the entire life cycle of the system. Software starts on a small scale and becomes more functional as a result of feedback from the client, who is part of the work team. Effort is directed at obtaining what is needed, not wasted on developing additional features.

**Pair programming: a practice of Extreme Programming**

Pair programming is one of the fundamental practices of Extreme Programming. Two programmers work together in the same space and at the same computer with the purpose of producing software collaboratively through all the activities involved in the process. One of the programmers takes the keyboard and mouse and plays a guiding role; the other is the navigator, who is in charge of observing, making timely revisions, managing tasks, locating faults, and seeing beyond the source code (Beck & Andres, 2004). The two act as a single intelligent unit that takes responsibility for everything it does (Williams, Kessler, Cunningham, & Jeffries, 2000). The roles are periodically interchanged.

**Pair programming in university**

Agile practices for developing software are important nowadays in the business world, since they have been shown to have a positive effect on projects. Experts consider that their use can be encouraged from within programming courses (Kropp & Meier, 2013; von Wangenheim, Savi, & Ferreti Borgatto, 2013), where potential present and future software developers are trained. Smith, Giugliano, and DeOrio (2017) argue that encouraging pair programming in university produces relevant benefits for students. They studied the long-term effects of using pair programming in beginner courses and found a positive effect on academic performance in more advanced courses. Pair programming promotes confidence, course completion and pass rates; this approach can be beneficial for all students, especially for women, because it overcomes many factors that may prevent women's participation in computer science (Werner, Hanks, & McDowell, 2005).
Pair programming has been researched from different perspectives; however, its application within classrooms has not been studied enough (Prabu & Duraisamy, 2015), and there are still discrepancies in the results and opinions about its true effectiveness (Coman, Robillard, Silliti, & Succi, 2014; Salleh, Mendes, & Grundy, 2014). For this reason, adopting objective criteria when comparing pair programming against solo programming is relevant. Software metrics have been used previously for this purpose. For instance, Hulkko & Abrahamsson (2005) analyzed the defects in projects written in C++ and Java; the smallest number of defects was found in one of the projects developed in pairs. The works of Tsompanoudi, Satratzemi, & Xinogalos (2016), Zacharis (2011) and Mohd Zin, Idris, & Kumar Subramaniam (2006) have studied pair programming in university courses using online tools; they found positive results that recommend this approach as an alternative to solo programming. In the same way, McDowell, Werner, Bullock, & Fernald (2006) found favorable results for pair programming with regard to the performance, confidence, and collaborative learning developed by the students.

**Measurement and metric analysis**

Measurement is a process through which a value is assigned to a feature of a program with the purpose of obtaining useful references to evaluate software quality. Metrics are features of software that can be objectively measured. There are metrics oriented to the development process, for example the average effort to perform a task; there are also metrics oriented to the product, for example the number of lines of source code and the level of cyclomatic complexity. Sommerville (2015) illustrates this with a measurement performed in two different scenarios to determine the usefulness of a tool. Measurements performed on software are used in decision making oriented to resource optimization; they are also fundamental elements of empirical software engineering, an area of study that uses experimentation and data gathering to test hypotheses in the software development field.

**Method**

**Participants**

This study had the participation of 26 freshmen pursuing a Bachelor's Degree in Information Technologies at a Mexican university (note: identifying information was removed for confidentiality reasons). They were taking the course "Fundamentals of Computer Science and Methodology of Programming". The students were randomly distributed as follows: 12 students were assigned to work in 6 pairs and 14 students were assigned to work individually.

**Scenario**

The students worked in the programming lab at the university campus where this study was conducted. This lab was chosen because the students work there regularly. It has 30 computers with the following features: Intel i5 processor, 8 GB of RAM, 500 GB of hard-drive storage and a 21" flat-screen monitor. The students used Visual Studio 2013 as the Integrated Development Environment (IDE), with Visual Basic .NET and a console project.
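Two of the metrics analyzed in the next subsection, maximum and average depth, measure block nesting. As a toy illustration (ours, not part of the study; SourceMonitor parses each language properly, and Visual Basic .NET delimits blocks with keywords rather than braces), maximum depth can be approximated for brace-delimited code as follows:

```python
def max_depth(source: str) -> int:
    """Approximate the maximum nesting depth of brace-delimited code.

    A toy illustration only: a real tool such as SourceMonitor parses
    the language instead of counting brace characters.
    """
    depth = deepest = 0
    for ch in source:
        if ch == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "}":
            depth = max(depth - 1, 0)
    return deepest

print(max_depth("if (a) { while (b) { f(); } }"))  # prints 2
```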
**Instrument**

The instrument used to evaluate the differences between software developed through pair programming and solo programming was defined by the following metrics: number of statements, percentage of comment lines, maximum depth, average depth, maximum complexity, number of methods per class, number of calls per method and number of statements per method. These were obtained with the SourceMonitor software (Campwood, 2018) for each of the projects developed by the participants.

**Procedure**

The research was conducted in a single regular two-hour session of the course "Fundamentals of Computer Science and Methodology of Programming". Before starting, there was a waiting period of 15 minutes; students who arrived late were not allowed to participate. Details of the experiment were not disclosed in advance, so the students did not know they were going to be part of it. The participants were not given any compensation or incentive. First, an introductory twenty-minute talk was given in which the way of working was explained; the researchers avoided biasing the participants' perception of the studied approaches. Then, the students were organized to work in one of two modes, pairs or solo, by going through the list of attendees and assigning each student one of the two ways of working using the evenly distributed random numbers found in Coss Bu (1995). After that, they were informed of the programming problem they would have to solve (see Table 1). Everyone was asked to develop the same program under the same conditions.

**Table 1. Description of the program developed by students.**

Develop a program that requests a number from the user and performs the following operations with it:
1) Add the same number to it.
2) Multiply it by the same number.
3) Divide it by (the same number plus 1).
4) Subtract (the same number minus 1) from it.
The program must also provide the sum of all these results plus the number that the user entered. If this total sum is less than 30, it must print the message "the sum is too small". If the total sum is bigger than 50, it must print the message "the sum is too big". No other message should be printed otherwise. Finally, the total sum must be stored and listed in a log that contains all the operations performed so far; it must include the date and hour of execution as well.

Source: Own preparation

Those who worked in pairs were assigned a single computer at which both students had to program. Those who worked individually were assigned one computer each. Due to the design of the facilities where the experiment was performed, all the students used adjacent computers, but pairs and individual programmers were alternately distributed. The maximum time to develop the program was one hour. For those who worked in pairs, roles between guide and navigator were switched every five minutes; these periods were timed, publicly announced and supervised. Copying code from other students was not allowed. Participants had free internet access for browsing, but chats and social media were not allowed.
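For reference, the logic described in Table 1 fits in a few lines. The sketch below is ours, written in Python rather than the Visual Basic .NET the participants used; the log file name and format are assumptions:

```python
from datetime import datetime

LOG_FILE = "operations.log"  # hypothetical log location and format


def run(number: float) -> float:
    """Perform the operations from Table 1 and log the total sum."""
    results = [
        number + number,        # 1) add the same number to it
        number * number,        # 2) multiply it by the same number
        number / (number + 1),  # 3) divide it by (the number plus 1); assumes number != -1
        number - (number - 1),  # 4) subtract (the number minus 1) from it
    ]
    total = sum(results) + number
    if total < 30:
        print("the sum is too small")
    elif total > 50:
        print("the sum is too big")
    with open(LOG_FILE, "a") as log:  # running log with date and hour of execution
        log.write(f"{datetime.now():%Y-%m-%d %H:%M} total={total}\n")
    return total


if __name__ == "__main__":
    run(float(input("Enter a number: ")))
```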
The participants were also shown a descriptive illustration of the program's execution, which made the requirements more evident. Finally, they were asked to compress their projects into a single ZIP file and upload it to the Blackboard Learning System (UAT, 2018). The research team then downloaded and processed the projects with the SourceMonitor software, entered the results into a Microsoft Excel spreadsheet, and exported them to the SPSS software (Wagner, 2014), where the statistical analyses were performed.

**Type of study**

This study had a "posttest-only control group design", or "after-only with control design", as described in the book by Zikmund, Barry, Carr, & Griffin (2013).

**Conceptual and operational definition of variables**

The variables studied in this research, with their conceptual definitions, are presented in Table 2. Their operational definition is the measurement of each metric in each project according to the SourceMonitor software (Campwood, 2018).

**Table 2. Definition of variables in this research.**

<table>
  <thead>
    <tr>
      <th>Variable</th>
      <th>Conceptual definition according to the documentation in Campwood (2018)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Number of statements</td>
      <td>Counts the reserved words and statements of the language, such as if, foreach, do/until, for and while; assignments of values to variables; calls to methods; variable definitions (Dim and ReDim); methods; attributes; and the exception statements try, catch and finally.</td>
    </tr>
    <tr>
      <td>Percentage of comments</td>
      <td>The proportion of lines of the program marked as comments relative to the total number of lines in the program file.</td>
    </tr>
    <tr>
      <td>Maximum depth</td>
      <td>The maximum level of nesting in the source code (the deepest level of code blocks within other blocks).</td>
    </tr>
    <tr>
      <td>Average depth</td>
      <td>The weighted average of the block depth of all the statements in a program.</td>
    </tr>
    <tr>
      <td>Maximum complexity</td>
      <td>The largest complexity value observed among the methods of the analyzed project.</td>
    </tr>
    <tr>
      <td>Number of methods per class</td>
      <td>The total number of methods divided by the total number of classes, interfaces and structures.</td>
    </tr>
    <tr>
      <td>Number of calls per method</td>
      <td>The total number of calls to other methods divided by the number of methods in the project.</td>
    </tr>
    <tr>
      <td>Number of statements per method</td>
      <td>The total number of statements within all the methods of the project divided by the number of methods of the project.</td>
    </tr>
  </tbody>
</table>

Source: Own preparation

**Data Analysis**

The projects developed by the students were analyzed with the SourceMonitor software, and the metrics indicated in Table 2 were obtained for each project. The results were entered into a Microsoft Excel spreadsheet and then exported to SPSS, where a preliminary analysis of the data was performed. Then, Mann-Whitney tests were conducted to determine whether the arithmetic differences observed between the metrics of the groups were statistically significant at a 95% confidence level.
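This comparison is straightforward to reproduce outside SPSS. A minimal sketch, assuming SciPy is available and using hypothetical metric values in place of the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical "number of statements" values for illustration only;
# the real values came from SourceMonitor runs on the students' projects.
solo = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]  # 14 solo programmers
pairs = [24, 26, 28, 29, 31, 35]                                 # 6 pairs

u_stat, p_value = mannwhitneyu(solo, pairs, alternative="two-sided")
print(f"U = {u_stat:.2f}, p = {p_value:.3f}")
# The difference is considered significant at the 95% level when p < 0.05.
```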
**Results**

As a result of the analysis, the descriptive data of the metrics for each group were obtained first. This information is summarized in Table 3, which presents the median and inter-quartile range (the values preserved here correspond to the Solo Programming group).

**Table 3. Descriptive data of the analyzed metrics (median and inter-quartile range, Solo Programming group).**

<table>
  <thead>
    <tr><th>Metric</th><th>Median</th><th>Inter-quartile range</th></tr>
  </thead>
  <tbody>
    <tr><td>Number of statements</td><td>17.50</td><td>12</td></tr>
    <tr><td>Percentage of comments</td><td>0</td><td>1.5</td></tr>
    <tr><td>Maximum depth</td><td>2.00</td><td>2</td></tr>
    <tr><td>Average depth</td><td>1.86</td><td>0.30</td></tr>
    <tr><td>Maximum complexity</td><td>1.00</td><td>3</td></tr>
    <tr><td>Number of methods per class</td><td>0</td><td>0</td></tr>
    <tr><td>Number of calls per method</td><td>0</td><td>0</td></tr>
    <tr><td>Number of statements per method</td><td>0</td><td>0</td></tr>
  </tbody>
</table>

Source: Own preparation

The results of the Mann-Whitney tests, performed to find significant differences between the metrics of the two groups, are presented in Table 4.

**Table 4. Results of the Mann-Whitney test.**

<table>
  <thead>
    <tr><th>Metric</th><th>p-value</th><th>Mann-Whitney U</th></tr>
  </thead>
  <tbody>
    <tr><td>Number of statements</td><td>.038</td><td>17.00</td></tr>
    <tr><td>Percentage of comments</td><td>.768</td><td>39.50</td></tr>
    <tr><td>Maximum depth</td><td>.032</td><td>18.00</td></tr>
    <tr><td>Average depth</td><td>.025</td><td>15.00</td></tr>
    <tr><td>Maximum complexity</td><td>.198</td><td>27.00</td></tr>
    <tr><td>Number of methods per class</td><td>.526</td><td>38.00</td></tr>
    <tr><td>Number of calls per method</td><td>.127</td><td>35.00</td></tr>
    <tr><td>Number of statements per method</td><td>.476</td><td>37.50</td></tr>
  </tbody>
</table>

Source: Own preparation

Finally, the mean ranks for the metrics with statistically significant differences (p < 0.05) according to the Mann-Whitney tests are shown in Table 5. Pair programmers used more statements and wrote code with a higher level of depth than solo programmers.

**Table 5. Mean ranks obtained in the Mann-Whitney tests for statistically significant results.**

<table>
  <thead>
    <tr><th>Metric</th><th>Individual</th><th>Pairs</th><th>Conclusion</th></tr>
  </thead>
  <tbody>
    <tr><td>Number of statements</td><td>8.71</td><td>14.67</td><td>The participants who worked in pairs used more instructions in their programs than those who worked alone.</td></tr>
    <tr><td>Maximum depth</td><td>8.79</td><td>14.50</td><td>The participants who worked in pairs wrote programs with more deeply nested code blocks than those who worked alone.</td></tr>
    <tr><td>Average depth</td><td>8.57</td><td>15.00</td><td>The participants who worked in pairs wrote programs with more deeply nested code blocks than those who worked alone.</td></tr>
  </tbody>
</table>

Source: Own preparation

**Discussion**

The Mann-Whitney tests revealed that only the differences in the number of statements and in the level of depth can be considered significant. Participants who worked in pairs wrote a higher number of statements, and their code had higher levels of depth. This means that students who applied pair programming used the reserved words of the programming language more frequently and were also capable of writing source code with more nested block structures. It is true that higher levels of nesting produce more complex code, which can be more difficult to read and analyze.
Nevertheless, it must be considered that selection and iteration instructions, such as the ones needed to solve the exercise presented to the students in this research, naturally increase the depth metrics. We consider the values obtained by pair programmers to be positive results of using pair programming in a beginner university course. When working in pairs, the students produced more elaborate programs, which implies a better use of the programming language and higher performance by the participants. These findings suggest that working in pairs could be more efficient and give better results than solo programming, as expressed in the theory of Kent Beck (Beck & Andres, 2004). This is also consistent with the benefits of pair programming found by Werner, Hanks, & McDowell (2005) and with the opinions in the work of Smith, Giugliano, and DeOrio (2017). It must be taken into consideration that we did not conduct an additional analysis of the individual projects to investigate whether the code written by the students could be improved to increase the performance of the programs or the legibility of the source code. It should also be noted that the participants were students without previous experience in collaborative development; their only experience was in solo programming, since that is the way they usually work.

**Conclusions**

This paper presented a study based on the analysis of software metrics to compare the development results of solo and pair programming in a university programming course. Statistically significant differences were found between the two groups in the number of statements written and the level of depth of the source code. Pair programmers wrote code with a higher number of statements and a higher level of depth than solo programmers. These findings suggest that implementing pair programming in university courses could be appropriate to motivate students to write more thorough programs with greater structural richness. As future work, we suggest increasing the number of metrics studied in the projects developed by the participants and conducting a further analysis of each project to evaluate the quality of the code. We recommend that pair programming continue to be used and studied in educational settings. This will continuously generate more specific knowledge and will help us understand in depth how pair programming contributes to learning.

**Acknowledgements**

The authors would like to express their deep gratitude to the Autonomous University of Tamaulipas for providing valuable support and helpful resources while this research was being conducted.

**References**

Salleh, N., Mendes, E. & Grundy, J. (2014). Investigating the effects of personality traits on pair programming in a higher education setting through a family of experiments. Empirical Software Engineering, 19(3), 714-752. doi: 10.1007/s10664-012-9238-
{"Source-Url": "http://www.scielo.org.mx/pdf/ride/v10n19/2007-7467-ride-10-19-e030.pdf", "len_cl100k_base": 6617, "olmocr-version": "0.1.50", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 41633, "total-output-tokens": 8607, "length": "2e12", "weborganizer": {"__label__adult": 0.0010232925415039062, "__label__art_design": 0.000934600830078125, "__label__crime_law": 0.0007562637329101562, "__label__education_jobs": 0.09100341796875, "__label__entertainment": 0.000171661376953125, "__label__fashion_beauty": 0.0004580020904541016, "__label__finance_business": 0.0008339881896972656, "__label__food_dining": 0.0011119842529296875, "__label__games": 0.0014753341674804688, "__label__hardware": 0.0011682510375976562, "__label__health": 0.0011301040649414062, "__label__history": 0.0005769729614257812, "__label__home_hobbies": 0.00032210350036621094, "__label__industrial": 0.0008473396301269531, "__label__literature": 0.0010890960693359375, "__label__politics": 0.0006694793701171875, "__label__religion": 0.001148223876953125, "__label__science_tech": 0.00843048095703125, "__label__social_life": 0.0005974769592285156, "__label__software": 0.00634002685546875, "__label__software_dev": 0.876953125, "__label__sports_fitness": 0.0009379386901855468, "__label__transportation": 0.0014028549194335938, "__label__travel": 0.0005583763122558594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34380, 0.03944]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34380, 0.60225]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34380, 0.87335]], "google_gemma-3-12b-it_contains_pii": [[0, 1020, false], [1020, 3455, null], [3455, 5861, null], [5861, 7939, null], [7939, 10105, null], [10105, 12281, null], [12281, 14758, null], [14758, 16360, null], [16360, 18301, null], [18301, 20696, null], [20696, 23102, null], [23102, 25031, null], [25031, 26844, null], [26844, 28816, null], [28816, 30125, null], [30125, 32097, null], [32097, 34253, null], [34253, 34380, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1020, true], [1020, 3455, null], [3455, 5861, null], [5861, 7939, null], [7939, 10105, null], [10105, 12281, null], [12281, 14758, null], [14758, 16360, null], [16360, 18301, null], [18301, 20696, null], [20696, 23102, null], [23102, 25031, null], [25031, 26844, null], [26844, 28816, null], [28816, 30125, null], [30125, 32097, null], [32097, 34253, null], [34253, 34380, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34380, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34380, null]], "pdf_page_numbers": [[0, 1020, 1], [1020, 3455, 2], [3455, 5861, 3], [5861, 7939, 4], [7939, 10105, 5], 
[10105, 12281, 6], [12281, 14758, 7], [14758, 16360, 8], [16360, 18301, 9], [18301, 20696, 10], [20696, 23102, 11], [23102, 25031, 12], [25031, 26844, 13], [26844, 28816, 14], [28816, 30125, 15], [30125, 32097, 16], [32097, 34253, 17], [34253, 34380, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34380, 0.27168]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
739e4e3e7eeb20fd4f0f4f73154a0917171b4bd3
TOWARDS A FORMAL VERIFICATION OF A SECURE AND DISTRIBUTED SYSTEM AND ITS APPLICATIONS

Cui Zhang, Rob Shaw, Mark R. Heckman, Gregory D. Benson, Myla Archer, Karl Levitt, and Ronald A. Olsson
Department of Computer Science, University of California, Davis, CA 95616
Email: Last-Name@cs.ucdavis.edu

Abstract

This paper presents research towards the formal specification and verification of a secure distributed system and of secure application programs that run on it. We refer to the whole system, from hardware to application programs written in a concurrent programming language, as the Silo, and to a simplified view of the Silo as the miniSilo. Both miniSilo and Silo consist of a collection of microprocessors interconnected by a network, a distributed operating system, and a compiler for a distributed programming language. Our goal is to verify the full Silo by mechanized, layered formal proof using the higher order logic theorem proving system HOL. This paper describes our current results on verifying the miniSilo and our incremental approach for evolving the verification of the miniSilo into the verification of the full Silo. Scalability is addressed in part by extending the distributed operating system with additional servers, which in turn provide services that extend the programming language.

Keywords: verification, distributed operating systems, security servers, distributed programming languages.

1 Introduction

This paper describes our research on a long-term project called the Silo.¹ The project is aimed at verifying a complete distributed computer system by mechanized, layered formal proof. Our layered system includes a set of microprocessors, a network model, the operating system kernel and servers (some in support of security) running on each microprocessor (hence, a secure distributed operating system), the concurrent programming language microSR (a derivative of SR [1]), and a Hoare-like programming logic. Each layer will be formally modeled as an interpreter that interacts with the other layers. Our layered approach will allow us to verify that secure and distributed applications run correctly on the entire system. In its final form, the Silo will be somewhat limited when compared to "real" computer systems; however, we hope it will be the most comprehensive verified distributed computer system and that it will demonstrate a methodology for "full system verification" of distributed systems. The CLI stack [2] has shown the feasibility of full system verification for a sequential system using a layered proof technique, but their model does not allow for concurrency and distributed programming, nor have they fully integrated the operating system into their "stack".

¹ This work was sponsored by the National Security Agency University Research Program under contract DOD-MDA904-93-C-4088 and by ARPA under contract USN N00014-93-1-1322 with the Office of Naval Research.

When we began specifying the Silo system, we realized that an incremental approach is necessary for revealing unforeseen difficulties and for making the formal proof more manageable. Rather than attempting to specify and prove the entire Silo, we have identified a subset of the Silo to specify and prove correct by limiting the scope of each layer to reduce the complexity. As shown in Figure 1, we refer to this simplified view of the Silo as the miniSilo. As our preliminary results on miniSilo have
shown the usefulness of our layered proof methodology, we are now growing the miniSilo system into the full Silo by incrementally adding functionality, and proof, to all layers. Our specification, verification, and augmentation process is being carried out using the Cambridge HOL theorem prover [8], because it allows the definition of embedded theories, such as the ones we are using for a programming logic of concurrency and for a generic model of a layer. We also hope our work demonstrates the expressiveness, flexibility, and feasibility of higher order logic in the formal specification and verification of more complicated computer systems, including a concurrent programming language that supports security applications and a distributed operating system.

This paper concentrates on our miniSilo effort as a step towards the full Silo. Section 2 describes our work on the network layer. Section 3 presents our work on the mpmachine layer. Section 4 describes our effort on the language implementation for microSR. Section 5 presents the Hoare logic derived from the microSR semantic specification. Section 6 concludes our work.

2 The Network

2.1 The Network for MiniSilo

The lowest layer of miniSilo consists of a network that allows individual processors (vmachines, see Section 3) to communicate through message passing. The miniSilo network consists of a set of processors and an interconnect service.
<table>
  <thead>
    <tr><th>Application Layer</th><th>Interface: microSR &amp; a Hoare Programming Logic</th></tr>
  </thead>
  <tbody>
    <tr><td>Implementation</td><td>Soundness Proofs</td></tr>
    <tr><td>Static process naming, few variables, simple send/receive</td><td>Infinite set of variables &amp; true SR IPC mechanisms</td></tr>
  </tbody>
</table>

<table>
  <thead>
    <tr><th>Language Implementation Layer</th><th>Interface: Formal Semantics of microSR including IPC</th></tr>
  </thead>
  <tbody>
    <tr><td>Implementation</td><td>Compilation Functions</td></tr>
    <tr><td>Generation of simple machine instructions</td><td>Full symbol tables, richer target language with system calls</td></tr>
  </tbody>
</table>

<table>
  <thead>
    <tr><th>Mpmachine Layer</th><th>Interface: IPC System Calls</th></tr>
  </thead>
  <tbody>
    <tr><td>Implementation</td><td>OS Kernel &amp; Servers (including security servers)</td></tr>
    <tr><td>User &amp; system processes running with time</td><td>multiplexed across the processors and communication through system provided mailboxes</td></tr>
  </tbody>
</table>

<table>
  <thead>
    <tr><th>Processor Layer</th><th>Interface: Instruction Set</th></tr>
  </thead>
  <tbody>
    <tr><td>Implementation</td><td>CPU &amp; Memory</td></tr>
    <tr><td>Simplified processors with simple instruction set</td><td>Richer instruction set with true user/system modes</td></tr>
  </tbody>
</table>

<table>
  <thead>
    <tr><th>Network Layer</th><th>Interface: Host-host Communication Network</th></tr>
  </thead>
  <tbody>
    <tr><td>Implementation</td><td>Network Controllers &amp; Interconnect</td></tr>
    <tr><td>Hardwired data lines and infinite resources</td><td>Finite resources, memory mapped I/O via interrupt mechanism</td></tr>
  </tbody>
</table>

Figure 1: Overview of UCD Silo and MiniSilo

Each processor communicates with the network through a Network Interface Unit (NIU), as shown in Figure 2. In miniSilo we assume that a processor has dedicated, hardwired data lines that interface directly with an NIU. The network provides reliable transmission of messages and preserves message ordering between communicating processors.

Figure 2: The MiniSilo Network

The miniSilo network is specified abstractly. By specifying the network in general terms, we do not impose any restrictions on the network topology or on the communication protocol. We do ensure that the network provides the properties that the higher miniSilo layers assume of it. Later, if desired, one could develop an implementation of the abstract network specification. The next logical layer is the protocol layer. There has been considerable work on the verification of network protocols [5, 12], which could be used to implement the abstract specification presented here. For "complete" verification, the protocol layer must ultimately be specified in terms of the underlying hardware. Protocol and network hardware verification are beyond the scope of this project.

The network is also specified operationally, where each NIU is modeled as an interpreter that reads and modifies state. The entire network is modeled as the composition of all the NIUs. The network interpreter is driven by send requests from the processors. Send requests result in receive requests from an NIU to a processor, which allows for nonblocking I/O at the operating system level. Sends and receives are accomplished through memory-mapped I/O.
The set of NIUs is modeled as a fully connected network through send and receive queues, collectively called in-transit queues. For \( n \) processors, each NIU has \( n - 1 \) send queues and \( n - 1 \) receive queues. Each queue is shared by exactly two NIUs: one NIU views the queue as a send queue and the other views it as a receive queue. The send and receive queues form the In Transit State. The NIU State of each NIU, combined with the In Transit State, forms the Network State. The specification of the network interpreter is a relation of type \( \text{NetworkState} \rightarrow \text{NetworkState} \rightarrow \text{Bool} \). This interpreter is used to prove properties about the network itself, as well as to serve as an implementation for the higher layers of miniSilo. Because the miniSilo network specification is given in terms of abstract operational semantics, we need to prove certain safety properties to ensure that the network functions correctly. The most important safety property is the ordering of messages between communicating processors; this property follows from the representation of the In Transit State. Other safety properties, such as no duplication of messages, are also verified.

2.2 The Network for Silo

The proof obligation of the mpmachine requires us to verify that the network specification combined with the vmachine specification logically implies the mpmachine specification. In miniSilo, the distinction between the mpmachine communication abstractions and the network abstractions is small, but this will change once the full Silo is developed and each layer is expanded to a more realistic specification of a distributed system. In particular, the network specification will be modified in two respects. First, the network will be specified in terms of finite rather than infinite resources. Currently the specification allows infinitely many messages to be present in the in-transit queues; therefore, each NIU is always ready to send another message, and the processor is never required to wait or to resend messages. Moving from infinite queues to finite queues entails certain specified error conditions and can result in storage channels. In miniSilo, we also assume that the message being transferred is a single integer of unbounded size; we intend to alter the specification to handle finite packets. Second, the interface between an NIU and a vmachine will be enhanced to one based on memory-mapped I/O and interrupts, rather than memory-mapped I/O alone. This will allow the operating system to implement non-blocking I/O and, more importantly, allow for more than one process and operation per processor, as described in Section 3. The new processor-to-NIU interface will also handle simple error conditions, such as a network-busy error or a packet-lost error, both of which will result in the processor resending the packet. Again, there are security implications to these decisions, which we will consider.

3 The Mpmachine

3.1 The MiniSilo Mpmachine

The miniSilo abstraction mpmachine represents multiple processors, each running a single process. Processes communicate by passing messages through a network. From a user process's point of view, the operating system interface appears as an "extended machine" consisting of the basic machine instructions plus communication primitives (system calls). The communication primitives are used to send and receive messages through message queues. MiniSilo has one message queue per process, where only the owning process can read from the queue and all other processes can send messages to it.
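A minimal executable sketch of this queue-pool abstraction (ours, in Python; the actual model is an HOL relation): one FIFO queue per process, to which any other process may send.

```python
from collections import deque


class QueuePool:
    """One message queue per process: anyone may send to a queue,
    but only the owning process receives from it."""

    def __init__(self, n: int):
        self.queues = {pid: deque() for pid in range(n)}

    def send(self, dest: int, message: int) -> None:
        self.queues[dest].append(message)  # appending preserves order

    def receive(self, pid: int) -> int:
        # Assumes a pending message; in the real model a receive
        # on an empty queue simply cannot take a step.
        return self.queues[pid].popleft()


pool = QueuePool(3)
pool.send(dest=1, message=42)
pool.send(dest=1, message=7)
assert pool.receive(1) == 42  # per-sender FIFO ordering is preserved
```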
The vmachine² specification describes a single processor in miniSilo. Each vmachine consists of an infinite set of registers, an infinite set of memory locations, and a program counter. Since these are modeled in HOL using natural numbers, each location may hold a non-negative integer of any size. A single vmachine operates much as one would expect, interpreting a typical set of simple machine instructions consisting of load, store, arithmetic, comparison, and branching instructions. It cannot, however, issue any kind of communication action with other vmachines; the mpmachine provides this ability. This modularization is intended to isolate the processor from changes in the network hardware: the mpmachine is responsible for the compatibility of these two lower components and for defining the pool of message queues and system calls. Neither component depends upon the other's specification in any way.

² Initially, we chose this term as an abbreviation of "virtual machine". Presently, however, "vanilla machine" is perhaps more appropriate.

An mpmachine contains \( N \) vmachine processors and \( N \) network interface units (NIUs) connected to a bus. Within the mpmachine specification, however, this bus is abstracted as a pool of queues. The pool contains one queue for each NIU, representing the ordered list of pending messages destined for the vmachine corresponding to that NIU. The external appearance of an mpmachine, therefore, is an \( N \)-tuple of vmachines (whose appearance is "passed up" unaltered), plus a pool of "in-transit" message queues. Similarly, the language interpreted by an mpmachine is an \( N \)-tuple of lists of instructions. The set of instructions contains all the operations executable on a vmachine, plus communication primitives. Similar to earlier efforts [4, 10], the actual operation of the mpmachine is modeled with transition relations. Each kind of transition allows a single component of the \( N \)-tuple to advance a single step. Issuing a vmachine instruction affects only the state of the corresponding vmachine hardware. Issuing a communication primitive, however, may also alter the global pool of queues.

The HOL specification of this machine model consists of straightforward type definitions for the objects described, plus the transition relation associated with each kind of mpmachine instruction. These relations have the type \( \text{Args} \rightarrow \text{MPprocess} \rightarrow \text{MPprocess} \rightarrow \text{Vid} \rightarrow \text{Bool} \). The type \( \text{Args} \) characterizes the numerical operands of the instruction. An \( \text{MPprocess} \) represents a pair whose first component is the local state of the vmachine executing the instruction and whose second component is the pool of queues. Finally, \( \text{Vid} \) is the index of the executing vmachine; this information is not available within an MPprocess. (If we were to include, say, a read-only "processor id register" in each vmachine, then the information in \( \text{Vid} \) would become redundant.) From this type definition, we see that the following question can be answered for each mpmachine instruction: given the indicated operands and the indicated initial configuration of the mpmachine, is it possible to arrive in a given configuration after the indicated processor executes this instruction?
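Rendered outside HOL, such a relation is simply a Boolean predicate over an operand list, two machine configurations, and a processor index. A Python sketch (ours; the names and state layout are assumptions) for the jump instruction discussed next:

```python
from typing import NamedTuple


class VState(NamedTuple):
    pc: int      # program counter
    regs: tuple  # register contents
    mem: tuple   # memory contents

# An MPprocess pairs a vmachine state with the pool of in-transit queues;
# here the pool can be any immutable, comparable value.

def jump_rel(args, before, after, vid: int) -> bool:
    """True iff `after` may follow `before` when vmachine `vid` jumps.

    `vid` is unused for a jump but is part of the uniform relation type
    Args -> MPprocess -> MPprocess -> Vid -> Bool.
    """
    (v0, pool0), (v1, pool1) = before, after
    return (pool0 == pool1                        # a jump never touches the pool
            and v1.regs == v0.regs and v1.mem == v0.mem
            and v1.pc == args[0])                 # only the pc changes, to the target
```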
For example, the relation for a simple vmachine jump instruction would require that the pools in both MPprocess objects be identical, because a jump does not affect communications. Moreover, the underlying vmachine specification would ensure that the register and memory contents of the vmachine within the first MPprocess are identical to those of the corresponding vmachine within the second MPprocess. Only the processor's program counter will differ between the two configurations, and this difference must agree with the target location given in the operand of the jump (indicated in the \( \text{Args} \)). The mpmachine specification does not directly contain these facts, but rather defers to the vmachine specification itself. As an example of message passing, if the instruction in question were a receive operation, both the processor and the pool contents would differ accordingly. In particular, after the instruction is complete, the destination register in the processor will contain the received value, and the appropriate queue in the pool (indicated by the \( \text{Vid} \)) will have one fewer message than it did before the instruction began.

Armed with a semantic relation for each instruction, the mpmachine specification requires only two more definitions to encompass the complete system behavior. The first is an inductive definition of how a thread, an instance of a sequential program piece, may legally execute for \( k \geq 0 \) steps. To execute for zero steps, the initial and final MPprocess must be completely identical. To execute for \( k > 0 \) steps, there must exist an intervening MPprocess value, call it \( M \), such that the appropriate semantic relation allows a one-step transition from the initial state into \( M \), and the remaining \( (k - 1) \)-step transition from \( M \) into the final state is allowed inductively. The second definition describes the legal behaviors of complete programs on an mpmachine, and it is not inductive. Here, a final state of the entire system is reachable from an initial one precisely when the corresponding initial and final MPprocesses for each component of the program are allowed by the above inductive definition for some \( k \geq 0 \).

3.2 Growth to Complete Silo

The complete Silo system consists of multiple processors, connected by a network, each running a copy of the Silo operating system. The operating system design is based on the kernel-and-server model used, for example, in Mach [14] and in Synergy [15]. The kernel provides a multi-programmed, message-passing environment for the server processes and user processes on a particular processor. The abstraction of a distributed system is maintained by the servers. As shown in Figure 3, from a user process's point of view, the operating system interface in Silo will extend that of miniSilo with richer basic machine instructions and system calls. In this way, the language work can proceed concurrently with the operating system work. Silo includes additional system calls for processes to create message queues, called mailboxes, and to request access to specific mailboxes. The mailbox management calls are subject to a system security policy implemented by a security server, as shown in Figure 4. These calls, while an essential part of the Silo system specification, are only relevant to user processes when an application is initially loaded and therefore do not require significant changes to our language work.
A mailbox is a queue of messages with at most one process receiving messages through the mailbox and possibly many senders. The complete operating system specification guarantees that messages sent by a particular process to the same mailbox are queued in the mailbox in the order in which they were sent but, due to the concurrent nature of the system, does not guarantee the relative ordering of messages sent by different processes. For this reason, Silo specifies a mailbox as a set of queues, one per sender, rather than one per receiver as in miniSilo. A major challenge in the Silo project is to specify the entire distributed operating system at its interface to user processes, to specify each of the servers and the kernel, and to prove that a composition of the server, kernel and network specifications satisfies the secure distributed operating system specification. We are accomplishing this in stages: first composing the servers that manage mailboxes, then adding the servers that implement system security and support the security features of the programming language.

Figure 4: Operating System: Security management. (1) A user process requests a port to a particular mailbox. (4) The Port Server tells the kernel to create the port, or else denies the request based on the result from the Security Server.

4 Implementation of MicroSR

4.1 MicroSR Semantics

The interpreted language at this layer is microSR, whose constructs include those basic to common sequential programming languages, in addition to an asynchronous send statement, a synchronous receive statement, a guarded communication input statement, and a co statement for specifying concurrent execution. The language has the appearance of a high-level system programming language that supports distributed applications. For each statement, we have a semantic transition relation of type \( Gstate \rightarrow Gstate \rightarrow Pid \rightarrow \text{Bool} \). These semantic relations are analogous to, though more complex than, the mpmachine relations. Here, the type \( Gstate \) (for "global state") represents a complete system configuration, and the relation is true if and only if the system may evolve from the first \( Gstate \) into the second \( Gstate \) by the execution of the given microSR statement within the logical process indicated by \( Pid \). The semantics are also formalized operationally, using multiple copies of a local state abstraction conjoined with a shared pool of messages. These local states are mappings from variable names into values, rather than register and memory contents. However, the internal structure of the microSR message pool is almost identical to that of the mpmachine: for each program thread, the pool contains a queue of all messages that have been sent to the thread, plus an indication of which ones have been received thus far. To handle security, processes and objects are assigned security levels, and transitions are allowed only if they satisfy the standard multilevel security policy.

4.2 Compiler Correctness

As in previous successful efforts to prove compiler correctness for sequential languages [6, 9], to claim that a compiler is correct is to claim that the behavior of the target code achieves the semantics of the source code. Yet, as we have seen, the mpmachine behaviors and the microSR semantics are distinct enough that no canonical equivalence exists between them. We, as the verifiers, must provide the mapping from the abstract microSR global states down into the more concrete mpmachine states.
As shown in Figure 5, once this mapping is available, the compiler correctness proof becomes an equivalence proof of two relations, given by the dashed line and the dotted line. For any given starting state, $S$, of the microSR program, these two relations must agree on which final mpmachine configurations are reachable. In particular, the compiler correctness condition is the following logical equivalence: the microSR semantics for the source program indicate that a certain final state, $F$, is reachable from $S$ if and only if the mpmachine semantics for the compiler's output code indicate that $F' = \text{Mapdown}(F)$ is reachable from $S' = \text{Mapdown}(S)$.

The compiler itself is simply another mapping function over the domain of legal microSR programs that provides a list of mpmachine instructions for each construct.
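Written out, with hypothetical names $\mathit{srSem}$ and $\mathit{mpSem}$ for the two reachability relations (the paper does not fix this notation), the intended condition is:

\[
\forall S\, F.\;\; \mathit{srSem}(S, F) \;\Longleftrightarrow\; \mathit{mpSem}(\mathrm{Mapdown}(S),\, \mathrm{Mapdown}(F))
\]

As discussed below, information lost across Mapdown means the proof can establish this only up to hidden system state, so the actual obligation is slightly weaker than a full equivalence.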
A few implementation details are not evident in Figure 5. First of all, since both the Mapdown function and the compiler assign variables to registers, these two assignments must agree in order for the above equivalence to hold. Consequently, the Mapdown function takes a symbol table argument that indicates the compiler's choices. In miniSilo, there is a fixed symbol table because the microSR language has a small, fixed set of legal variable identifiers. To allow arbitrary strings as identifiers, the compiler needs simply to make an initial pass over the source and gather the symbol table information needed by Mapdown and by the second pass. This process involves no concurrency or composability issues other than requiring the extra argument to Mapdown, an aspect that has already been accommodated.

As described above, the "dashed" relation, whether true or false, must be equal to the "dotted" relation. This is not entirely possible, because a small amount of information is lost across the Mapdown function. For instance, a microSR global state contains a component that indicates the current time of the state. Suppose that we have two states, $S_1$ and $S_2$, which are legal starting and ending states for some program. Both the dashed and dotted relations indicate truth. However, suppose that we now alter $S_2$ ever so slightly, by making its time indicator earlier than that of $S_1$. Since the global time does not appear in the mpmachine specification, the result of mapping down $S_2$ is just as it was before, and the dotted relation continues to indicate truth. The dashed relation, however, does not allow time to decrease, and indicates falsity.

The time counter is merely one example; the microSR semantics contain other auxiliary data, such as the number of receives on a particular channel, that are not mapped down to the hardware level. Indeed, once the full Silo contains a kernel with many internal tables, it would not even be clear how the language-level receive counts should be mapped. We do not want the language layer imposing bookkeeping requirements on the kernel, so the correct choice is to not map down information that is needed only by the language semantics. As a result, the compiler proof is not a complete equivalence, but must distinguish the different means by which the language semantics may indicate falsity.

Similarly, the mpmachine abstraction contains some items that are not within the image of Mapdown. The first few memory locations are considered "reserved" for system use, and the Mapdown function does not dictate the values at these addresses. The fact that the language-layer relation holds does not impart any knowledge about this hidden system state within the mpmachine. Consequently, the actual proof requires a third machine configuration (not shown) which is both reachable from $S'$ and equivalent to $F'$ in all respects except the hidden system state.

Finally, within this proof, the complete program is really viewed as a collection of processes, and the picture indicates what must be shown for each individual process. Rather than use fully defined states and configurations, we show that, for each process, the relationships of Figure 5 hold amongst that process's view of the system state.

## 5 MicroSR Applications

### 5.1 The Hoare Logic for MicroSR

The top application layer is a mechanized Hoare logic for verifying microSR concurrent applications. Our effort to formally derive, using HOL, a sound Hoare logic from the microSR semantics is a generalization of similar work by Gordon for a small sequential language [7, 13]. We use semantic relations, rather than functions, in our formal specification of microSR constructs; doing so obviates the possible need for powerdomains in the state abstraction for microSR programs due to the inherent non-determinism. To handle the interference problem arising from concurrent execution, we introduced atomicity and global invariants [2] into our logic system. This logic has been formally proven to be sound within HOL; i.e., the axioms and inference rules are all mechanically derived in HOL as logical implications of the same microSR semantic specification against which the microSR implementation is verified. The logic allows one to reason and state formal assertions about concurrently executing processes that do not share any data objects, but communicate through shared channels, called operations in SR terminology.

The partial correctness specification in our logic has two levels. The definition of the predicate SPEC below gives our interpretation of \(\{P \wedge GI\}\; S\; \{Q \wedge GI\}\), the intra-process partial correctness specification, where \(S\) is a microSR statement, \(P\) and \(Q\) are assertions mainly on program variables, and \(GI\) is the global invariant, an assertion mainly on operations, associated with executing \(S\) and taken with respect to a particular process. The definition of the predicate G_SPEC gives our interpretation of the global partial correctness specification \(\{(P_{\mathit{list}}) \wedge GI\}\; S\; \{(Q_{\mathit{list}}) \wedge GI\}\), where \(S\) is the top-level statement specifying concurrent execution, the global invariant \(GI\) is an assertion mainly on operations, and \(P_{\mathit{list}}\) and \(Q_{\mathit{list}}\) are lists of assertions mainly on program variables. The \(i\)th elements of the two lists are taken with respect to the process executing the \(i\)th sequential program within the top-level statement \(S\). Notice that all arguments of SPEC and G_SPEC in the following definitions are abbreviated forms of their meaning functions.

\[
\begin{aligned}
\mathrm{SPEC}(P \wedge GI,\; S,\; Q \wedge GI) \;\stackrel{\mathrm{def}}{=}\;& \forall\, \mathit{Gstate}_1\, \mathit{Gstate}_2\, \mathit{Pid}.\\
& (P \wedge GI)(\mathit{Gstate}_1, \mathit{Pid}) \;\wedge\; S(\mathit{Gstate}_1, \mathit{Gstate}_2, \mathit{Pid})\\
&\Rightarrow\; (Q \wedge GI)(\mathit{Gstate}_2, \mathit{Pid})
\end{aligned}
\]

\[
\begin{aligned}
\mathrm{G\_SPEC}((P_{\mathit{list}}) \wedge GI,\; S,\; (Q_{\mathit{list}}) \wedge GI) \;\stackrel{\mathrm{def}}{=}\;& \forall\, \mathit{Gstate}_1\, \mathit{Gstate}_2\, \mathit{Pid\_list}.\\
& \bigl(\forall i.\; (E_i\, P_{\mathit{list}})(\mathit{Gstate}_1, (E_i\, \mathit{Pid\_list})) \wedge GI(\mathit{Gstate}_1, (E_i\, \mathit{Pid\_list}))\bigr)\\
&\wedge\; S(\mathit{Gstate}_1, \mathit{Gstate}_2, \mathit{Pid\_list})\\
&\Rightarrow\; \bigl(\forall i.\; (E_i\, Q_{\mathit{list}})(\mathit{Gstate}_2, (E_i\, \mathit{Pid\_list})) \wedge GI(\mathit{Gstate}_2, (E_i\, \mathit{Pid\_list}))\bigr)
\end{aligned}
\]
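Read operationally, SPEC is just a universally quantified implication over the statement's transition relation. The following is a small, self-contained Python illustration of that reading over a finite toy state space; the relation and assertions here are invented for the example and have no connection to the real microSR semantics.

```python
# Illustrative finite-state reading of SPEC: for every pair of states
# related by the statement's transition relation, precondition truth
# in the first state forces postcondition truth in the second.

def spec(pre, stmt_rel, post, states, pids):
    return all(post(g2, pid)
               for g1 in states for g2 in states for pid in pids
               if pre(g1, pid) and stmt_rel(g1, g2, pid))

# Toy example: states are integers, "x := x + 1" as a relation.
states = range(10)
pids = [0]
incr = lambda g1, g2, pid: g2 == g1 + 1
# {x >= 1} x := x + 1 {x >= 2} holds:
assert spec(lambda g, p: g >= 1, incr, lambda g, p: g >= 2, states, pids)
# {x >= 0} x := x + 1 {x >= 2} fails (counterexample x = 0):
assert not spec(lambda g, p: g >= 0, incr, lambda g, p: g >= 2, states, pids)
```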
The following gives representative axioms and inference rules in the derived logic for microSR. The axioms and rules for microSR sequential constructs, such as the Skip Axiom, Assignment Axiom, If Rule, Do Rule, Sequencing Rule, Precondition Strengthening Rule, and Postcondition Weakening Rule, are not listed below, because their appearance is similar to that in [2, 7], though the way they are formally specified and derived for microSR is actually more complex. All axioms and inference rules are theorems of our language semantics. The "sent-set" \(\sigma_{op}\) and "received-set" \(\rho_{op}\) denote all messages ever sent and received on channel \(op\), respectively. \(\mathrm{Frontier}(\sigma_{op})\) denotes the earliest message in channel \(op\) that has not yet been received. \(\mu\) is simply a message constructor function converting an entity of type integer into one of type message. In the rules below, \(GI^{e}_{x}\) denotes \(GI\) with \(e\) substituted for \(x\) (and similarly \(Q^{E}_{v}\)).

- **Co Rule**
\[
\frac{\{GI \wedge P_i\}\; SL_i\; \{GI \wedge Q_i\}, \quad i = 1, \ldots, n}
     {\{GI \wedge P_{\mathit{list}}\}\;\; \textbf{co}\; SL_1 \mathbin{/\!/} \cdots \mathbin{/\!/} SL_n\; \textbf{oc}\;\; \{GI \wedge Q_{\mathit{list}}\}}
\]

- **Send Axiom**
\[
\{\, P \wedge GI^{\,\sigma_{op} \cup \{\mu(E)\}}_{\,\sigma_{op}} \,\}\;\; \textbf{send}\; op(E)\;\; \{\, P \wedge GI \,\}
\]

- **Receive Rule**
\[
\frac{P \wedge GI \wedge \mu(E) \in \mathrm{Frontier}(\sigma_{op}) \;\Rightarrow\; Q^{E}_{v} \wedge GI^{\,\rho_{op} \cup \{\mu(E)\}}_{\,\rho_{op}}}
     {\{P \wedge GI\}\;\; \textbf{receive}\; op(v)\;\; \{Q \wedge GI\}}
\]

- **In Rule**
\[
\frac{\{P \wedge GI\}\; \textbf{receive}\; op1(v)\; \{R_1 \wedge GI\}, \;\; \{R_1 \wedge GI\}\; S1\; \{Q \wedge GI\}, \;\; \{P \wedge GI\}\; \textbf{receive}\; op2(v)\; \{R_2 \wedge GI\}, \;\; \{R_2 \wedge GI\}\; S2\; \{Q \wedge GI\}}
     {\{P \wedge GI\}\;\; \textbf{in}\; op1(v) \rightarrow S1 \;\square\; op2(v) \rightarrow S2\; \textbf{ni}\;\; \{Q \wedge GI\}}
\]

### 5.2 Extensions for Silo

Following our incremental approach, we expect that our final language for Silo will be close to its parent language in its expressive power for distributed computing, and our logic will be extended as well. For instance, in the current version of microSR, input statements support only message passing, because operations serviced by an input statement can only be invoked by send statements. In a later version, we will allow operations to be invoked by call statements, which will provide rendezvous. We will also extend the input statement with synchronization expressions to allow selective receipt, and add features that let users specify the security level of the programs, resources, and processes they create.

The current results at this layer serve as a basis for our research on the complete Silo, since our work so far indicates that SR concurrency features, such as dynamic process creation and synchronization via message passing, remote procedure calls, and rendezvous, are all amenable to a Hoare-like programming logic; the components of our semantic model for microSR already formalize most of the entities and behaviors that SR programmers must consider during their design process. We are now also evaluating the expressive power of our logic by carrying out proofs of programs. Preliminary attempts at manual proofs of microSR programs have motivated us to establish a systematic method for creating annotated microSR programs. Another challenging task is to develop, also using HOL, an interactive prover of LCF style [11] for microSR.
## 6 Conclusion

Our research on miniSilo has shown how to structure proofs according to vertical layers, how to formally model the different layers, how to model the interactions between layers, how to express the proof obligations between layers, and how to compose all of the proved layers together. We are extending our research on system design and proof to show how to evolve miniSilo into Silo in an incremental manner. With our layered proof, we hope to demonstrate that secure, distributed applications can be verified with respect to the entire system, namely by showing that microSR applications proved correct in our Hoare logic will run correctly on our Silo system.

## References
{"Source-Url": "http://www.dtic.mil/dtic/tr/fulltext/u2/a445719.pdf", "len_cl100k_base": 7246, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 33737, "total-output-tokens": 8519, "length": "2e12", "weborganizer": {"__label__adult": 0.0004506111145019531, "__label__art_design": 0.0003325939178466797, "__label__crime_law": 0.0006284713745117188, "__label__education_jobs": 0.0005645751953125, "__label__entertainment": 8.082389831542969e-05, "__label__fashion_beauty": 0.0001933574676513672, "__label__finance_business": 0.0003592967987060547, "__label__food_dining": 0.0004398822784423828, "__label__games": 0.0007257461547851562, "__label__hardware": 0.0020427703857421875, "__label__health": 0.0008683204650878906, "__label__history": 0.0003063678741455078, "__label__home_hobbies": 0.0001266002655029297, "__label__industrial": 0.0007367134094238281, "__label__literature": 0.00033402442932128906, "__label__politics": 0.00041365623474121094, "__label__religion": 0.0006575584411621094, "__label__science_tech": 0.09271240234375, "__label__social_life": 9.506940841674803e-05, "__label__software": 0.00652313232421875, "__label__software_dev": 0.89013671875, "__label__sports_fitness": 0.00036215782165527344, "__label__transportation": 0.0008988380432128906, "__label__travel": 0.00022733211517333984}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36717, 0.0268]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36717, 0.34678]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36717, 0.88658]], "google_gemma-3-12b-it_contains_pii": [[0, 3429, false], [3429, 4058, null], [4058, 7076, null], [7076, 10144, null], [10144, 14118, null], [14118, 18582, null], [18582, 21386, null], [21386, 23873, null], [23873, 26982, null], [26982, 31055, null], [31055, 34390, null], [34390, 36717, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3429, true], [3429, 4058, null], [4058, 7076, null], [7076, 10144, null], [10144, 14118, null], [14118, 18582, null], [18582, 21386, null], [21386, 23873, null], [23873, 26982, null], [26982, 31055, null], [31055, 34390, null], [34390, 36717, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36717, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36717, null]], "pdf_page_numbers": [[0, 3429, 1], [3429, 4058, 2], [4058, 7076, 3], [7076, 10144, 4], [10144, 14118, 5], [14118, 18582, 6], [18582, 21386, 7], [21386, 23873, 8], [23873, 26982, 9], [26982, 31055, 10], [31055, 34390, 11], [34390, 36717, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36717, 0.12579]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
927b2f5f10e5bb024c27cdaffff748bae46c0d5b
PySE: Automatic Worst-Case Test Generation by Reinforcement Learning

Jinkyu Koo, Charitha Saumya, Milind Kulkarni, and Saurabh Bagchi
Electrical and Computer Engineering, Purdue University
{kooj, cgusthin, milind, sbagchi}@purdue.edu

Stress testing

- Stress testing: testing software beyond its normal operational capacity, and investigating the behavior of a program when subjected to heavy loads.
- The goals of such tests:
  - To identify performance bottlenecks
  - To identify algorithmic complexity attacks
  - To identify scale-dependent bugs
- The key challenge: how to find the input that leads to the worst-case complexity.

Symbolic execution

- Runs a program using symbolic variables as inputs, instead of concrete values.
- Can explore all possible execution paths, including those of worst-case complexity.
- On each path that is executed, symbolic execution collects a set of symbolic conditions, called a path condition.
- It then invokes a constraint solver, such as OpenSMT [7] or Z3, to generate concrete test input values.
- Path explosion: the number of paths to search increases exponentially with the size of the input.

WISE-like algorithms

- WISE [1] and SPF-WCA [2]:
  - Learn a branching policy that results in a path of worst-case complexity for small input sizes by using exhaustive search, and
  - Then apply the learned branching policy to perform a guided search for a large input size.
- The worst-case branching policy for insertion sort: always True.

Limitations of WISE-like algorithms

- Assume continuous program behavior across scales:
  - Some conditional blocks are activated only when the input size is larger than a certain threshold.
- Irregular branching policies:
  - For Dijkstra's algorithm implemented with a min-priority queue, there is no simple way to describe the worst-case branching policy.
- Can we avoid or minimize these issues of white-box approaches for large-scale test generation?

PySE: solution approach

- PySE learns the worst-case branching policy using Q-learning, a model-free reinforcement learning method:
  - Uses symbolic execution to collect behavioral information about a given branching policy.
  - Updates the policy based on Q-learning.
- Symbolic execution observes the behavior of a branching policy (i.e., the length of a path); the policy update then yields a new branching policy.

The main objective of PySE

- To find a branching policy $\pi(s_t)$ for a given state $s_t$ at the $t$-th branch condition encountered while a program is being symbolically executed.
- The branching policy $\pi(s_t)$ determines a branching decision $a_t = \pi(s_t) \in \{True, False\}$, which we also call an action.
- The state $s_t$ consists mainly of the current branch condition, the previous $L$ branch conditions, and the actions taken there ($L$ is the history length).
- The branching policy $\pi(s_t)$ keeps evolving in such a way that the length of an execution path increases.

Workflow of PySE

- Step 1 (SYMBOLIC EXECUTION):
  - Execute the program under the branching policy $\pi(s_t)$.
  - Collect the resulting behavioral information: which branch points the program visits, the actions taken at each branch, and the feasibility of those actions.
- Step 2 (POLICY UPDATE):
  - Update the branching policy $\pi(s_t)$ via Q-learning, so that an undesirable action that caused the program to terminate quickly can be avoided in the future. (A toy version of this loop is sketched below.)
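The following is a deliberately simplified, runnable stand-in for the two-step loop. The "symbolic executor" (`toy_execute`), the feasibility rule, and the naive flip-the-bad-decision update are all invented for illustration; PySE's real policy update is the Q-learning rule described next.

```python
# Toy stand-in for PySE's two-step loop. `toy_execute` walks a fixed
# number of branch points, asks the policy for a decision at each, and
# stops at the first infeasible decision (scored with a penalty).

def toy_execute(policy, num_branches=8):
    experiences, length = [], 0
    for t in range(num_branches):
        action = policy.get(t, True)            # default decision: True
        feasible = (action == (t % 2 == 0))     # arbitrary toy rule
        experiences.append((t, action, 1 if feasible else -20))
        if not feasible:
            break                               # infeasible: path ends here
        length += 1
    return length, experiences

def update_policy(policy, experiences):
    # Naive update: flip the last infeasible decision so it is avoided
    # next time (PySE instead updates a Q-network from experiences).
    state, action, reward = experiences[-1]
    if reward < 0:
        policy = dict(policy, **{state: not action})
    return policy

policy = {}
for _ in range(20):                             # iterate the two steps
    length, exps = toy_execute(policy)
    policy = update_policy(policy, exps)
print("longest path found:", length)            # converges to 8
```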
Branching policy $\pi(s_t)$

- Design the branching policy $\pi(s_t)$ as: $\pi(s_t) = \arg \max_{a_t} Q(s_t, a_t)$
- $Q(s_t, a_t)$ is computed by an artificial neural network (ANN) whose input is $s_t$ and whose output layer produces two values, $Q(s_t, True)$ and $Q(s_t, False)$:
  - $\pi(s_t) = True$ if $Q(s_t, True) \geq Q(s_t, False)$.
  - $\pi(s_t) = False$ if $Q(s_t, True) < Q(s_t, False)$.

State representation

- $s_t = (s_{t0}, s_{t1}, \ldots, s_{tL})$
- $s_{tl}$: an integer vector encoding the $(t - l)$-th branch condition and the action taken there.
- Encoding of a state when $L = 2$:
  - F2: a unique identifier for each branch point (e.g., its line number).
  - F3: the action taken at the branch point (1 = True, 0 = False).

How to update the branching policy (1/3)

- Symbolic execution takes action $a_t$ at a given state $s_t$ and observes its consequence:
  - whether the execution path is still feasible.
  - Feasibility can be checked with a constraint solver such as Z3.
- Depending on feasibility, the consequence of action $a_t$ at state $s_t$ is scored by a reward $r_t$:
  - $r_t = 1$ if feasible, and $r_t = P$ if not.
  - $P = -20$, so that an infeasible decision is clearly distinguishable from a feasible one.

How to update the branching policy (2/3)

- We want $\pi(s_t)$ to converge to the optimal branching policy $\pi^*(s_t)$ that maximizes the expected sum of future rewards, $E\left(\sum_{k=t}^{T} r_k \mid s_t\right)$.
  - $T$ denotes the last branch condition before the program terminates normally or falls into an infeasible path condition.
  - Equivalently, $\pi^*$ maximizes the length of a feasible execution path.
- Define the optimal action-value function $Q^*(s_t, a_t)$ as the maximum expected sum of future rewards after taking action $a_t$ at state $s_t$:
$$Q^*(s_t, a_t) = \max_{\pi} E\left(\sum_{k=t}^{T} r_k \mid s_t, a_t\right)$$
- $Q^*(s_t, a_t)$ can be rewritten recursively as:
$$Q^*(s_t, a_t) = E\left(r_t + \max_{a_{t+1}} Q^*(s_{t+1}, a_{t+1})\right)$$

How to update the branching policy (3/3)

- We learn $Q^*(s_t, a_t)$ through a sample-mean estimate $Q(s_t, a_t)$:
$$Q(s_t, a_t) \leftarrow (1 - \alpha)\,Q(s_t, a_t) + \alpha\left(r_t + \max_{a_{t+1}} Q(s_{t+1}, a_{t+1})\right)$$
  - $\alpha$ is the learning rate.
  - By the law of large numbers, $Q(s_t, a_t)$ converges to $Q^*(s_t, a_t)$ over iterations for a sufficiently small $\alpha$.
- Such an update, which learns $Q(s_t, a_t)$ without knowing the underlying probability distribution, is known as Q-learning in the reinforcement learning literature.

Q-network architecture

- In practice, updating $Q(s_t, a_t)$ separately for each pair $(s_t, a_t)$ is unattainable, because the state is a multi-dimensional integer vector and the number of possible states can be far too large.
- Thus a function approximator is commonly used to estimate $Q(s_t, a_t)$ from a limited number of observed state-action pairs.
- PySE likewise represents $Q(s_t, a_t)$ with an ANN-based function approximator, which we refer to as a Q-network. (A tabular sketch of the underlying update rule appears below.)
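For illustration, here is the update rule in its simplest tabular form on a toy problem. PySE itself replaces the table with a neural network; the tiny three-branch example below is invented for the sketch.

```python
from collections import defaultdict
import random

ALPHA, P = 0.5, -20                   # learning rate and infeasibility penalty
Q = defaultdict(lambda: [0.0, 0.0])   # Q[state] = [Q(s, False), Q(s, True)]

def q_update(s, a, r, s_next):
    """Tabular Q-learning step:
    Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + max_a' Q(s',a'))."""
    target = r + max(Q[s_next]) if s_next is not None else r
    Q[s][a] = (1 - ALPHA) * Q[s][a] + ALPHA * target

# Toy episodes: at each of three branch points, True (1) is the only
# feasible action; an infeasible choice ends the path with reward P.
for _ in range(200):
    for s in range(3):
        a = random.choice([0, 1])                       # explore
        r = 1 if a == 1 else P
        q_update(s, a, r, s + 1 if (a == 1 and s < 2) else None)

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(3)]
print(policy)    # converges to [1, 1, 1], i.e. "always True"
```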
Algorithm of PySE

Algorithm 1: Basic mode of PySE

```
procedure SYMBOLIC EXECUTION
    for t from 1 to T do
        Choose a number u randomly over [0, 1].
        if u < ε then
            Choose a_t randomly.                  ▷ ε-greedy
        else
            a_t = π(s_t).
        Execute a_t, and observe r_t and s_{t+1}.
        if the experience e_t = (s_t, a_t, r_t, s_{t+1}) is new then
            Add e_t to E.
        Delete old experiences in E to keep |E| ≤ N_e.

procedure POLICY UPDATE
    Sort the experiences in E in a random order.
    for i from 1 to |E| do
        Read the i-th experience from E.
        Update weights.
```

- Exploration of new paths uses an ε-greedy strategy: with probability ε, take a random action instead of $\pi(s_t)$.
- The symbolic execution step collects what are called experiences: $e_t = (s_t, a_t, r_t, s_{t+1})$.
- The policy update step uses these experiences to update the Q-network.

Unique Path Finder (UPF)

- UPF attempts to gather at least one new experience in each symbolic execution step.
- Virtual execution:
  - Defined as a sequence of state transitions using $\pi(s_t)$ with an ε-greedy strategy over the observed computation tree, i.e., the computation tree built up from all observed experiences.
  - The virtual execution is not an execution of the real program, but a simulation of state transitions among states that have already been observed.
  - Such a simulation takes negligible time to run.
- UPF discovers a prefix (P1) of a brand-new execution path by virtual execution, a run over the computation tree built from observed experiences. The symbolic execution that follows is guided by the prefix P1 and finds the remainder (P2) of the new execution path. (A toy rendering of this idea follows.)
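As a toy rendering of virtual execution, the sketch below walks a computation tree of previously observed (state, action) transitions until it steps off the tree, yielding a fresh path prefix. The tree encoding, node names, and policy are invented for the illustration.

```python
import random

# Observed computation tree: maps (node, action) -> child node.
# In PySE the nodes would be branch-point states and the actions
# branching decisions; this tiny tree is made up for the example.
tree = {("root", True): "n1", ("n1", True): "n2", ("n2", False): "n3"}

def virtual_execution(tree, policy, epsilon=0.3, root="root"):
    """Walk the observed tree (no real program execution). Stop at the
    first transition that leaves the tree: the decisions so far form a
    prefix (P1) of a brand-new path for real symbolic execution to
    extend into the remainder (P2)."""
    node, prefix = root, []
    while True:
        action = (random.choice([True, False]) if random.random() < epsilon
                  else policy(node))
        prefix.append(action)
        if (node, action) not in tree:      # left the observed tree:
            return prefix                   # a new path prefix found
        node = tree[(node, action)]

random.seed(1)
print(virtual_execution(tree, policy=lambda n: True))
```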
Experiments

- Class 1 programs:
  - The worst-case branch behavior is continuous and follows a simple pattern like "always True" or "always False".
  - These are the programs where WISE is effective, and SPF-WCA works exactly the same as WISE.
- Class 2 programs:
  - Some or all branch points have an irregular branch behavior in the worst case.
  - The worst-case-leading decision at a branch point can change depending on the scale ($N$) or the time ($t$) at which the branch point is visited.
  - WISE cannot handle Class 2 programs efficiently.
  - SPF-WCA can be effective for some of them, i.e., when the pattern can be expressed in terms of the history length.

Class 1 example

Benchmark 1: Biopython pairwise2, Smith-Waterman [39]

| (N, longest path length) | (3,9) | (4,12) | (5,15) | (10,30) | (20,60) | (30,90) | (100,300) |
|---|---|---|---|---|---|---|---|
| Exhaustive search: Paths | 127 | 511 | 2047 | - | - | - | - |
| Exhaustive search: Time | 0:04 | 0:18 | 1:14 | - | - | - | - |
| WISE: Paths | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| WISE: Time | 0:00 | 0:00 | 0:00 | 0:00 | 0:00 | 0:00 | 0:01 |
| PySE: Paths | 1 | 1 | 1 | 1 | 1 | 1 | 2 |
| PySE: Time | 0:02 | 0:02 | 0:02 | 0:02 | 0:02 | 0:02 | 0:13 |

- Exhaustive search: search time grows exponentially.
- WISE: small-scale tests predict the worst case at larger scales.
- PySE: finds the worst case within a few trials.

Class 2 example (1/2)

GNU grep: Boyer-Moore

| (N, longest path length) | (3,3) | (4,3) | (5,3) | (10,9) | (20,18) | (30,30) | (100,99) |
|---|---|---|---|---|---|---|---|
| Exhaustive search: Paths | 4 | 4 | 4 | 40 | 1093 | 88573 | - |
| Exhaustive search: Time | 0:00 | 0:00 | 0:00 | 0:01 | 0:31 | 43:39 | - |
| WISE: Paths | 4 | 4 | 4 | 40 | 1093 | 88573 | - |
| WISE: Time | 0:00 | 0:00 | 0:00 | 0:01 | 0:32 | 44:24 | - |

- WISE cannot handle this case: GNU grep's worst-case branching behavior shows an irregular pattern.

| (N, longest path length) | (3,3) | (4,3) | (5,3) | (10,9) | (20,18) | (30,30) | (100,99) |
|---|---|---|---|---|---|---|---|
| SPF-WCA trained at N=3,4: Paths | 1 | 1 | 1 | 9 | 243 | 19683 | - |
| SPF-WCA trained at N=3,4: Time | 0:00 | 0:00 | 0:00 | 0:00 | 0:07 | 10:20 | - |
| SPF-WCA trained at N=6,7: Paths | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| SPF-WCA trained at N=6,7: Time | 0:00 | 0:00 | 0:00 | 0:00 | 0:00 | 0:00 | 0:00 |
| PySE pre-trained at N=5: Paths | 2 | 2 | 2 | 2 | 2 | 3 | 276 |
| PySE pre-trained at N=5: Time | 0:11 | 0:11 | 0:11 | 0:11 | 0:12 | 0:20 | 48:21 |
| PySE pre-trained at N=10: Paths | 2 | 2 | 2 | 1 | 2 | 3 | 82 |
| PySE pre-trained at N=10: Time | 0:11 | 0:11 | 0:11 | 0:02 | 0:12 | 0:20 | 13:03 |

- SPF-WCA may handle this case, but its performance is sensitive to the history length.
- PySE can handle it, and the history length is not critical.

Concluding remarks

- PySE uses symbolic execution to run a program and collect behavioral information.
- PySE then updates a branching policy from the collected behavior within a reinforcement learning framework.
- By iterating symbolic execution and policy updates, PySE gradually increases the length of an execution path toward a path of worst-case complexity.
- Across various Python programs and scales, PySE effectively finds a path of worst-case complexity and shows benefits over exhaustive search and WISE-like algorithms.

Thank you!
{"Source-Url": "https://charitha22.github.io/files/ICST19_pyse_slides.pdf", "len_cl100k_base": 4186, "olmocr-version": "0.1.49", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 32867, "total-output-tokens": 4821, "length": "2e12", "weborganizer": {"__label__adult": 0.0003838539123535156, "__label__art_design": 0.0003066062927246094, "__label__crime_law": 0.00046324729919433594, "__label__education_jobs": 0.0008497238159179688, "__label__entertainment": 6.121397018432617e-05, "__label__fashion_beauty": 0.0001857280731201172, "__label__finance_business": 0.00018465518951416016, "__label__food_dining": 0.00037169456481933594, "__label__games": 0.0005917549133300781, "__label__hardware": 0.0012388229370117188, "__label__health": 0.0006661415100097656, "__label__history": 0.00020372867584228516, "__label__home_hobbies": 0.00012552738189697266, "__label__industrial": 0.0005002021789550781, "__label__literature": 0.0002083778381347656, "__label__politics": 0.0002956390380859375, "__label__religion": 0.0004892349243164062, "__label__science_tech": 0.033782958984375, "__label__social_life": 0.00011909008026123048, "__label__software": 0.00498199462890625, "__label__software_dev": 0.95263671875, "__label__sports_fitness": 0.00040221214294433594, "__label__transportation": 0.0005865097045898438, "__label__travel": 0.00020062923431396484}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13564, 0.02703]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13564, 0.58092]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13564, 0.81288]], "google_gemma-3-12b-it_contains_pii": [[0, 234, false], [234, 647, null], [647, 1165, null], [1165, 1752, null], [1752, 2081, null], [2081, 2404, null], [2404, 2801, null], [2801, 3401, null], [3401, 3869, null], [3869, 4269, null], [4269, 4618, null], [4618, 5140, null], [5140, 5946, null], [5946, 6518, null], [6518, 7028, null], [7028, 7987, null], [7987, 8834, null], [8834, 9516, null], [9516, 10922, null], [10922, 11739, null], [11739, 13005, null], [13005, 13554, null], [13554, 13564, null]], "google_gemma-3-12b-it_is_public_document": [[0, 234, true], [234, 647, null], [647, 1165, null], [1165, 1752, null], [1752, 2081, null], [2081, 2404, null], [2404, 2801, null], [2801, 3401, null], [3401, 3869, null], [3869, 4269, null], [4269, 4618, null], [4618, 5140, null], [5140, 5946, null], [5946, 6518, null], [6518, 7028, null], [7028, 7987, null], [7987, 8834, null], [8834, 9516, null], [9516, 10922, null], [10922, 11739, null], [11739, 13005, null], [13005, 13554, null], [13554, 13564, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13564, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13564, null]], 
"google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13564, null]], "pdf_page_numbers": [[0, 234, 1], [234, 647, 2], [647, 1165, 3], [1165, 1752, 4], [1752, 2081, 5], [2081, 2404, 6], [2404, 2801, 7], [2801, 3401, 8], [3401, 3869, 9], [3869, 4269, 10], [4269, 4618, 11], [4618, 5140, 12], [5140, 5946, 13], [5946, 6518, 14], [6518, 7028, 15], [7028, 7987, 16], [7987, 8834, 17], [8834, 9516, 18], [9516, 10922, 19], [10922, 11739, 20], [11739, 13005, 21], [13005, 13554, 22], [13554, 13564, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13564, 0.13559]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
7fda83684d677be3b7143a587e32c2e78af4ab71
Parallelizing Sequential Algorithms for the Generalized Assignment Problem (Extended Abstract)

Ivan Yanasak*, Gautam Shah*, Zonghao Gu†, Chris DeCastro†, Kalyan Perumalla*, Yinhu Wang†, Anand Sivasubramaniam*, Aman Singla*, Martin Savelsbergh†, Umakishore Ramachandran*, H. Venkateswaran*, and Sudhakar Yalamanchili‡

Technical Report GIT-CC-94/44, September 1994
College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280

*College of Computing
†School of Industrial and Systems Engineering
‡School of Electrical and Computer Engineering

1 Introduction

The Generalized Assignment Problem (GAP) asks for a maximum-profit assignment of \( n \) tasks to \( m \) agents such that each task is assigned to precisely one agent, subject to resource restrictions on the agents. Although interesting and useful in its own right, its main importance stems from the fact that it appears as a substructure in many models developed to solve real-world problems in areas such as vehicle routing, plant location, resource scheduling, and flexible manufacturing.

The GAP is easily shown to be NP-hard. Therefore, optimization algorithms have to rely on some form of enumeration, usually branch-and-bound. The various algorithms that have been developed for the GAP differ mainly in the way they obtain bounds. Although substantial progress has been made since the first algorithm for the GAP was developed by Ross and Soland [RS75], the problem sizes that can be handled today are still relatively small.

We consider two state-of-the-art sequential algorithms for the GAP. The first was developed by Karabakal, Bean, and Lohmann [KBL92] and applies Lagrangian relaxation to obtain upper bounds; a steepest-descent multiplier adjustment method is used to solve the Lagrangian dual. The algorithm extends earlier work of Fisher, Jaikumar, and Van Wassenhove [FJVW86] and of Guignard and Rosenwein [GR89]. The second was developed by Savelsbergh [Sav93] and applies decomposition principles to obtain a set partitioning formulation, solving the LP relaxation of this formulation to obtain upper bounds. Although in theory the bounds used by the two algorithms are identical, on a sequential computer the latter algorithm is clearly superior.

In this paper, we investigate whether the inherent parallelism of the branch-and-bound paradigm can be exploited to achieve significant performance increases in both algorithms. Performance increases may come from evaluating tasks in parallel as well as from earlier pruning of the search tree. In Section 2 we outline our project goals, and in Section 3 we elaborate on the sources of parallelism in the two algorithms that we consider. In Section 4 we present the implementation details and computational results. The computational results demonstrate that parallelization both provides substantial speedup over sequential run times and allows larger problem instances to be solved. Finally, in Section 5 we present some observations relating to the parallelization of branch-and-bound algorithms, from an operations research perspective as well as from a parallel computing perspective.

2 Project Overview

As mentioned earlier, the algorithms we consider apply the branch-and-bound paradigm to implicitly enumerate all feasible solutions. Branch-and-bound algorithms generate search trees in which each node corresponds to a subset of feasible solutions.
A subproblem associated with a node is either solved directly, or its solution set is partitioned and a new node is added to the tree for each subset. The process is enhanced by computing a bound on the solution a node can produce. If this bound is worse than the best solution found so far, the node cannot produce a better solution and hence can be excluded from further examination. Parallel activity is achieved by evaluating several nodes simultaneously. We will often use the term task for the evaluation of a single node. Observe that in a branch-and-bound algorithm there is a minimum set of tasks that has to be completed in order to prove optimality; these tasks constitute the nodes of the critical tree.

As far as this basic structure of computation is concerned, the two methods we consider differ in two primary ways for any instance of nontrivial size: (1) the linear programming based algorithm explores fewer tasks than the Lagrangian relaxation based algorithm, but (2) the linear programming based algorithm requires more time to evaluate a task. These characteristics suggest that our linear programming based implementation is better suited to a coarse-grained hardware platform, whereas our Lagrangian relaxation based implementation benefits from a medium-grained hardware platform.

Our primary goals in parallelizing the GAP were two-fold:

1. To solve instances of sizes that are not computationally viable in a sequential environment.
2. To explore various approaches to parallelizing branch-and-bound and to implement a scalable scheme that shows good speedups.

In addition, we wanted to investigate how easy (or hard) it is to parallelize the two sequential branch-and-bound algorithms and to study whether their inherent differences make one of them more amenable to parallelization than the other. To these ends, we implemented our own version of the linear programming based branch-and-bound algorithm for a cluster of six Sun SPARC-1 and SPARC-2 workstations, using the PVM [Sun90] message-passing library and the CPLEX linear programming library [Inc93], and we parallelized the Lagrangian relaxation based branch-and-bound algorithm of Karabakal [Kar92] for the 64-node KSR-2 [Res92] (a cache-coherent, cache-only memory architecture shared memory machine). We tested both algorithms on a small set of random instances generated according to distributions suggested by Martello and Toth [MT90]. We experimented only with instances of class D, in which there is a correlation between objective values and knapsack weights, since this class contains the most difficult instances. The remainder of this abstract presents the design issues and computational results for our PVM and KSR implementations.

3 Solution Approach

The algorithms we consider have two sources of parallelism. First, the computation of the upper bounds involves the solution of multiple independent knapsack problems (intra-node parallelism): multiple processors can solve the knapsack problems simultaneously to speed up the upper bound computation. Second, the branch-and-bound paradigm involves the solution of many independent subproblems (inter-node parallelism): multiple processors can evaluate distinct subproblems simultaneously to speed up the search process.
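To fix ideas, the following is a minimal sequential sketch of best-bound branch-and-bound with a central priority queue, the structure that the inter-node schemes discussed below parallelize by handing queue entries to many workers. The subproblem interface (`bound`, `solve_or_branch`) and the tiny demo problem are invented for the illustration.

```python
import heapq

def branch_and_bound(root, bound, solve_or_branch):
    """Best-bound search. `bound(node)` gives an upper bound on the
    profit achievable below `node`; `solve_or_branch(node)` returns
    either ('solution', value) or ('children', [nodes])."""
    best = float('-inf')
    queue = [(-bound(root), 0, root)]       # max-heap via negated bounds
    counter = 1                             # tie-breaker for heap entries
    while queue:
        neg_bound, _, node = heapq.heappop(queue)
        if -neg_bound <= best:
            continue                        # pruned: cannot beat incumbent
        kind, payload = solve_or_branch(node)
        if kind == 'solution':
            best = max(best, payload)       # new incumbent lower bound
        else:
            for child in payload:
                if bound(child) > best:     # enqueue only promising nodes
                    heapq.heappush(queue, (-bound(child), counter, child))
                    counter += 1
    return best

# Tiny demo: nodes are (index, profit) prefixes over items [3, 1, 2];
# each item is either taken or skipped.
items = [3, 1, 2]
bound = lambda n: n[1] + sum(items[n[0]:])
def solve_or_branch(n):
    i, p = n
    if i == len(items):
        return 'solution', p
    return 'children', [(i + 1, p + items[i]), (i + 1, p)]
print(branch_and_bound((0, 0), bound, solve_or_branch))   # -> 6
```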
Note that the use of one source of parallelism does not preclude the use of the other; both can be used in conjunction, given the necessary computational resources. We briefly discuss the trade-offs between exploiting only intra-node parallelism, only inter-node parallelism, or both, which motivated our design choices.

If we choose just intra-node parallelism, we have to consider the costs that result from state sharing; i.e., the computation-to-communication ratio should be high so that the gain in computation time is not lost. There should also be enough parallelism within each node for this method to be effective. The factors affecting inter-node parallelism are the cost of distributing the nodes to the various processors and the presence of a sufficient number of nodes at any given instant (the breadth of the search tree) so that processors are not idling. Inter-node parallelism can also benefit from faster pruning of the tree, because multiple paths are explored in parallel.

Since we have a low to medium number of processors available for both implementations, we wanted to use the source of parallelism that would be most beneficial given our resources. Profiling the sequential code indicated that a sufficient number of nodes was available at any given instant to successfully exploit inter-node parallelism. Furthermore, the time to evaluate a node is relatively small, so the costs of managing multiple processors to compute upper bounds could be a factor even if all other considerations favored intra-node parallelism. Thus, as a first cut, we chose to implement inter-node parallelism only. However, larger problem instances may benefit from exploiting both sources of parallelism, because for larger instances the per-node time increases substantially; parallelizing the multiple independent knapsack problems within each node is therefore one of the future directions of our work.

A key detail in the implementation of inter-node parallelism is the distribution of tasks to the various processors. It is commonly believed that for sequential branch-and-bound algorithms a best-bound search of the tree gives the best performance. Consequently, an efficient implementation of a branch-and-bound algorithm needs to maintain a priority queue of tasks. There are many choices for maintaining such a queue in a distributed setting, but we observed that a simple centralized queue performed well, with no bottlenecks caused by queue access. This situation could change if many more processors were accessing the queue, greatly increasing contention. Since most of the total execution time (over 85%) is spent in node evaluation, a sophisticated distributed queue implementation will not significantly change the overall costs. However, since a distributed queue could lead to less contention and hence better overall performance, such an implementation may be worth considering. We now discuss the specific details and computational results of each of our implementations.

4 Implementation

4.1 PVM Implementation

The LP-based sequential algorithm of Savelsbergh [Sav93] uses a column generation scheme to solve the linear program. In his implementation, all generated columns are kept in memory for the duration of the execution of the algorithm, and at each node only the relevant columns are used.
In our implementation, none of the columns are permanently kept in memory, and at each node all relevant columns have to be regenerated. We decided on this scheme for two reasons. First, maintaining a "global column pool" accessible to all PVM processes in the message-passing PVM environment would involve substantial memory and communication overheads, and would greatly increase the complexity of the parallel program. Second, we felt that maintaining local column pools would not be of obvious benefit, since the algorithm does not assume any relation between successive nodes evaluated by a given PVM process. (After our computational experiments, we now feel that local pools may be of some benefit, and implementing such a scheme is one of the future directions of our work.) We rejected outright the possibility of sending the entire column pool with each node message, because it would require an extreme amount of memory and would greatly increase the communication costs. We discuss the consequences of these decisions in the next section.

For reasons of better potential scalability, our first PVM version uses a distributed node-queue scheme. After initialization by a process designated the "host", each task (one per workstation) takes a node message (if any) from its local "best-bound" queue (which maintains unevaluated nodes in order of nonincreasing bound values) and evaluates the node (solving the LP using a column generation scheme). Then, in the case of an integral solution, it broadcasts the new lower bound to all other PVM processes (to allow them to prune their node queues if possible); in the case of a fractional solution, it places one of the two child nodes in its local queue and sends the other child node to another PVM process for placement in its queue. The recipient of the second child node is chosen at random, weighted so that processes on faster workstations are given preference over processes on slower workstations. This method is suggested by Karp and Zhang [KZ88] and provides relatively good load balancing.
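A hedged sketch of that weighted random recipient choice, with made-up worker names and speed ratings; Python's standard library handles the weighted draw:

```python
import random

# Hypothetical worker table: relative speed ratings act as weights, so
# faster workstations receive child nodes proportionally more often.
workers = {"sparc2-a": 2.0, "sparc2-b": 2.0, "sparc1-a": 1.0}

def pick_recipient(exclude):
    """Choose a destination for the second child node, Karp-Zhang
    style: random, but biased toward faster machines."""
    names = [w for w in workers if w != exclude]
    weights = [workers[w] for w in names]
    return random.choices(names, weights=weights, k=1)[0]

print(pick_recipient(exclude="sparc1-a"))
```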
After running several 10x50 class D test instances, we noticed that, in general, the slower machines were terminating later than the faster machines, and that the idle times of the faster machines were much higher than those of the slower ones. To ensure that keeping one child node locally was not biasing our load balancing method, we also created a second distributed-queue version that sends both child nodes to other PVM processes. This new "send both" scheme did not improve the execution times in general. We therefore created a third PVM version built around a centralized queue process. This effectively prevents any load imbalance (at the cost of a small increase in processor idle time), as well as making termination detection more straightforward.

Table 1: Execution times (in seconds) for 10x50 class D problems using 6 processors.

| Problem | Sequential: Time | Speedup* | Send 1 Child: Time | Speedup* | Send Both: Time | Speedup* | Centralized: Time | Speedup* |
|---|---|---|---|---|---|---|---|---|
| 1 | (a) | 1 | 5737 | -NA- | | | | |
| 2 | 3574 | 1 | 1484 | 2.40 | | | | |
| 3 | 3278 | 1 | 956 | 3.42 | | | | |
| 4 | 9034 | 1 | 2198 | 4.11 | | | | |
| 5 | 11985 | 1 | 3237 | 3.70 | | | | |

*Speedup is with respect to the sequential time.
(a) Could not complete due to insufficient memory.

Table 2: Execution times (in seconds) for the problems on the RS/6000 (sequential).

| Problem | Keep Columns | Don't Keep Columns | Slowdown Factor |
|---|---|---|---|
| 1 | 1220 | 3334 | 2.73 |
| 2 | 269 | 1067 | 3.97 |
| 3 | 155 | 975 | 6.29 |
| 4 | 1876 | 2894 | 1.54 |
| 5 | 1385 | 3455 | 2.50 |

4.1.1 PVM Results

The resulting times for our PVM implementations, using 6 processors, are shown in Table 1. As the data in the table shows, all of the parallel implementations provide significant speedup over the sequential times. Note, however, that no consistent difference in total time can be seen between the parallel implementations. For the "distributed versus centralized queue" comparison, this seems to indicate that the time gained by ensuring that "fast machines finish last" is not all that substantial. The lack of a clear winner between the distributed-queue methods indicates that keeping one child local is currently neither a benefit nor a disadvantage. Given that an average of 85-95% of the total execution time was spent in the LP and knapsack code, the addition of one extra send per node evaluation is expected to have minimal impact.

As discussed above, our implementation does not keep column data between node evaluations, but rather recomputes the necessary columns for each node. The original sequential algorithm kept all columns (marking those currently unusable) for the lifetime of the run. The two methods were compared, on the same problem set, on an IBM RS/6000; the results are given in Table 2. They clearly demonstrate that not keeping the columns between node evaluations incurs a substantial overhead due to the recomputation of old columns.
Although, for reasons mentioned earlier, keeping a global column pool is not feasible on our PVM platform, this data does suggest that an implementation on a shared memory machine (like the KSR-2, or a network of workstations providing a distributed shared memory abstraction), which could use a shared column pool with much greater efficiency, would likely result in even greater speedups.

4.2 KSR Implementation

In the KSR implementation, aided by the support for shared memory, we considered only a centralized queue implementation in which each processor takes nodes from and updates a shared priority queue. To ensure that processors do not interfere with one another, we serialized access to the priority queue with mutual exclusion locks. The other major modification was to ensure termination of the algorithm.

4.2.1 KSR Results

Example runs of four 10x30 class D problems are shown in Table 3.

Table 3: Execution times (in seconds) of 10x30 class D problems with differing numbers of processors.

| Number of Processors | Problem 1: Time | Queue* | Problem 2: Time | Queue* | Problem 3: Time | Queue* | Problem 4: Time | Queue* |
|---|---|---|---|---|---|---|---|---|
| 1 | 550.91 | 50 | 519.61 | 110 | 554.74 | | | |
| 2 | 278.60 | 45 | 142.00 | 41 | 285.58 | | | |
| 4 | 102.37 | 20 | 73.34 | 40 | 146.78 | | | |
| 8 | 65.12 | 21 | 61.40 | 72 | 125.93 | | | |
| 16 | 45.92 | 7 | 59.64 | 104 | 60.92 | | | |
| 24 | 53.27 | 28 | 58.34 | 128 | 61.52 | | | |
| 32 | 54.87 | 25 | 54.66 | 123 | 60.61 | | | |
| 40 | 52.14 | 29 | 52.87 | 111 | 64.54 | | | |
| Best Speedup | 12.00 | | 9.83 | | 9.15 | | | |

*Average queue length of available tasks.

As shown in the table, spreading the work among more processors does lower the overall execution time, up to a point. As seen in problems 1 and 3, increasing the number of processors past a certain level (16 processors for both of these examples) does not result in further speedups, but slightly increases the execution time. This drop-off in performance gains occurs when additional processors cease to improve pruning and start to work on nodes not in the "critical tree".¹ Any slight upturn in execution time is due to the delay incurred while waiting for these useless evaluations to complete (this increase is bounded by the time to evaluate a single non-essential node). Even with this limit on the number of processors that could be gainfully used, all of the multiprocessor times shown here were far less than the time required for the sequential case (1 processor). Furthermore, very good speedups were observed (roughly 38 using 40 processors for problem 4) when there was sufficient work. We are able to comment on the amount of work based on a couple of observations:

¹ It should be noted that the critical tree is determined dynamically, so that at the time the processors take up tasks, it is not known whether those tasks are in the critical tree or not.
1. The queue length in these cases is very small; in fact, for smaller problem sizes we saw that it remains 1, so adding additional processors will not help.

2. If the critical tree has \( n \) nodes, then \( n \) is a ceiling on the potential speedup achievable with inter-node parallelism, as well as a limit on the number of processors that can be gainfully used in such a parallel implementation. This limit assumes a synchronous execution model with equal node evaluation times, but even empirically (on the KSR-2) we have observed it to hold. Clearly, the limit does not apply to implementations that exploit intra-node parallelism.

Because of the large number of node evaluations required, the algorithm of Karabakal, Bean, and Lohmann [KBL92] could not viably solve any of the 10x50 instances of class D on the KSR using only a single processor. However, our parallel implementation on the KSR is able to solve many such instances, including the five listed previously for our PVM implementation. The results for those five problems are given in Table 4. Note that these times should not be directly compared with the PVM times: the number of processors, the hardware characteristics, and the algorithm characteristics are drastically different.

Table 4: Execution times (in seconds) of 10x50 class D problems using 30 processors.

| Problem | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Time | 2552 | 1197 | 909 | 1992 | 2716 |

5 Discussion

When designing and testing both the PVM and the KSR implementations, the biggest factor affecting our decisions was that 85% or more of the total execution time is spent in "node evaluation" (LP and knapsack for the PVM implementation, Lagrangian dual and knapsack for the KSR implementation), with the communication and support code (e.g., initialization, termination, results gathering and display) constituting less than 15% of the total time. This breakdown implies that improvements in non-node-evaluation code will have less impact on overall performance than improvements in the node evaluation code. Viewed another way, achieving a better node distribution and load balancing scheme, even at the cost of decreased communication and support code efficiency, would likely improve overall performance.

Our two main performance enhancement guidelines follow from these observations. First, the support and communication code should be optimized for load balancing before speed and efficiency. Second, one should make as many improvements as possible that reduce the node evaluation time, whether through more efficient sequential techniques (e.g., keeping columns in the LP case) or by exploiting further parallelism (e.g., solving the multiple independent knapsack problems for each node in parallel).

Our results for the KSR approach also indicate that simply throwing more processors at a problem will not always improve performance. Depending on the size and shape of the critical tree for a given problem instance, adding processors beyond a limit results in no further increase in speed, and actually decreases performance to some extent. In summary, parallel execution can provide improved performance if the critical tree can be evaluated in parallel.
By studying two distinct solution techniques we have found that this is indeed possible, and we have been able to make the following observations:

- Execution time is dominated by node evaluations (80%-85% of the time). Thus, optimizing message-passing performance, control overhead, and the like is not a large source of performance improvement; it is also the reason performance is not very sensitive to these factors.
- In a heterogeneous network (mixed-speed workstations), slower processors can substantially increase execution time: their queue lengths shrink at a much slower pace, and these times can dominate the execution time. A simple load balancing scheme can alleviate most of these bottlenecks.
- In the PVM version, a simple centralized queue is sufficient for relatively large problem sizes and incurs relatively low queue-management overhead. In the KSR version, the overhead of locking (synchronization for queue access) is minimal even up to 40 processors (under 2% in most cases).
- Algorithms need only find enough parallelism to solve the critical tree in parallel. Beyond that, most of the work is spent creating and solving nodes that are non-essential.
- Some support for state sharing (a la distributed shared memory) could substantially improve performance in the PVM version by facilitating the optimizations used in serial implementations.
- With low-overhead queue access and a small computation-to-communication ratio, intra-node parallelism on the KSR was not a viable option.

6 Research Contributions

6.1 Operations Research Perspective

As shown in the PVM and KSR results sections, not only are we able to obtain substantial performance gains over the sequential algorithms, we are also able to solve instances that are not computationally viable with them. It should be noted that these gains are obtained by parallelizing well-known sequential algorithms. This indicates that one can exploit the benefits of parallelism without spending an excessive amount of time and effort designing specialized parallel algorithms. Our approach has the additional advantage that any future sequential algorithmic improvements can likely be reflected in the parallel version as well.

6.2 Parallel Computation Perspective

Based on the number of nodes required and the time needed to evaluate a node, we were able to characterize a branch-and-bound algorithm in terms of its granularity. We used this characterization to indicate which of our two platforms, PVM (coarse-grained) and KSR (medium-grained), would be best suited to each of our base algorithms. In the design of future parallel branch-and-bound programs, one could therefore use this guideline to make a good match between the available sequential code and the available hardware platforms.

In terms of scalability, because of the number of nodes available to be executed and the relatively low frequency of queue access, a simple central queue gave good speedups and proved fairly scalable for the problem sizes we considered. Our results also suggest that a shared memory platform is better suited to branch and bound than message-passing hardware. Aside from the ease-of-programming issue, a shared memory machine is a better match for the dynamic nature of the processors' interaction with the node queue (e.g., retrieving a node, pruning based upon a new bound).
7 Concluding Remarks

We hope that the experiences reported here will encourage further study of parallelism in branch and bound algorithms. In addition to our comments above, we believe that future work should explore the following issues:

- Can we define a precise upper bound on the number of processors that achieves the potential speedup due to inter-node parallelism, based on the size and shape of the critical tree? (A starting point is sketched below.)
- Could a different, possibly more complex, method of choosing the next node to evaluate result in better performance? Our current best-bound search is considered to be the overall best choice for sequential algorithms; this may not be true for parallel algorithms.
- How does the presence of multiple optimal solutions affect speedup? Is this the only reason for super-linear speedup?
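On the first question, an empirical starting point is to measure the critical tree after a run. Assuming the usual definition, that nodes whose bound is strictly better than the optimal value must be evaluated by any correct implementation, a post-run count gives the speedup ceiling of observation 2 above. The sketch is hypothetical; `node_bounds` stands in for recorded instance data.

```python
# Hypothetical post-mortem measurement of the critical tree (minimization):
# count the nodes whose lower bound beats the optimal value. This count caps
# both inter-node speedup and the number of gainfully usable processors.

def critical_tree_size(node_bounds, optimum):
    return sum(1 for bound in node_bounds if bound < optimum)

# Example: with bounds [3, 5, 8, 10] and optimum 8, two nodes are critical.
assert critical_tree_size([3, 5, 8, 10], 8) == 2
```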
TWO ALGORITHMS FOR LOCATING ANCESTORS OF A LARGE SET OF VERTICES IN A TREE

Oleksandr Panchenko, Arian Treffer, Hasso Plattner, Alexander Zeier
Hasso Plattner Institute for Software Systems Engineering, P.O. Box 900460, 14440 Potsdam, Germany
panchenko@hpi.uni-potsdam.de

Keywords: Query processing, tree processing, XML database, data storage, algorithms.

Abstract: A lot of tree-shaped data exists: XML documents, abstract syntax trees, hierarchies, etc. To accelerate query processing on trees stored in a relational database, a pre-post-ordering can be used. It works well for locating the ancestors of a single vertex or a few vertices because pre-post-ordering avoids recursive table access. However, it is slow when it comes to locating the ancestors of hundreds or thousands of vertices, because the ancestors of each input vertex are located sequentially. In this paper, two novel algorithms (sort-tilt-scan and single-pass-scan) for solving this problem are proposed and compared with a naïve approach. While the sort-tilt-scan improves the performance by a constant factor, the single-pass-scan achieves a better complexity class. The performance gain is achieved because a single table scan can locate all result vertices in a single run. Using generated data, this paper demonstrates that the single-pass-scan is orders of magnitude faster than the naïve approach.

1 INTRODUCTION

A lot of tree-shaped data exists: XML documents, abstract syntax trees, hierarchies, etc. It is often the case that the leaves of a tree contain the actual information and all other vertices (their ancestors) describe the meaning and interplay of the leaves. Usually, users request information from the leaves, but need all corresponding ancestors to reconstruct the context or to visualize the results properly. Therefore, the result set often should contain the matching vertices and their ancestors.

To store a tree in a relational database, each vertex is stored as a tuple with a unique ID. To represent the tree structure, an attribute for the parent ID is required. With this data schema, it is possible to fetch the direct children or the parent of a vertex. To locate all ancestors or descendants of a vertex efficiently, pre-post-order numbering can be used (Grust et al., 2004). For creating pre-post-order labels, the entire tree has to be traversed once. For each vertex, the pre-order is assigned on entering and the post-order on leaving, as consecutive numbers. Figure 1 illustrates this with an example. For a given vertex, its ancestors are all vertices that were entered before and left after visiting this vertex. Thus, they all have a smaller pre-order value and a larger post-order value. Now it is possible to fetch all ancestors or descendants of a vertex with one table scan.

Figure 1: A set of matches in a tree. The result paths are highlighted.

For the example given in Figure 1, the following SQL query could be used to find the ancestors of the vertex H:

SELECT * FROM vertices
WHERE preOrder < 11 AND postOrder > 14

If the comparison operators are changed to their opposites, the same query can be used to compute the descendants of the vertex. However, this approach is slow when it comes to locating the ancestors of hundreds or thousands of vertices simultaneously, because the ancestors of each input vertex are located sequentially (see Section 2.1). In this paper, two algorithms for solving this problem are proposed and compared with a naïve approach. While the sort-tilt-scan improves the performance by a constant factor, the single-pass-scan achieves a better complexity class.
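As an aside, for readers who want to experiment with the numbering scheme, here is a minimal Python sketch; the adjacency dictionary is a hypothetical stand-in for the vertices table.

```python
# Minimal sketch of pre-post-order labelling: a single counter is incremented
# both on entering and on leaving a vertex, as in Figure 1.

def label(children, root):
    """children: dict mapping each vertex to the list of its child vertices."""
    pre, post = {}, {}
    counter = [0]
    def visit(v):
        counter[0] += 1
        pre[v] = counter[0]          # assigned on entering
        for c in children.get(v, []):
            visit(c)
        counter[0] += 1
        post[v] = counter[0]         # assigned on leaving
    visit(root)
    return pre, post

def is_ancestor(pre, post, a, v):
    # a was entered before v and left after it
    return pre[a] < pre[v] and post[a] > post[v]
```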
The performance gain of the single-pass-scan is achieved because a single table scan can qualify all result vertices in a single run. Using generated data, this paper shows that the single-pass-scan is orders of magnitude faster than the naïve approach.

Pre-post-order numbering is not the only labeling scheme that has been proposed to improve XPath queries. For instance, a scheme for improving sequences of child steps has been proposed (Chen et al., 2004). Since the problem addressed in this paper occurs after the evaluation of the query, the evaluation of XPath queries itself is out of scope and is not discussed further. Implementations of XPath engines are available as open source projects (e.g., Apache Xalan) and are the subject of current research (Gou and Chirkova, 2007; Peng and Chawathe, 2005).

Conceptually, the paper proposes applying an algorithm of a streaming nature to data that is stored in a database. We show that such a combination is efficient for our scenario, as we have a rather simple query but a large input data set. Although a number of algorithms exist for locating patterns in a tree, as far as we know no algorithm exists that is specially designed for as large a number of input vertices as we have. Streaming processing of XML data (Li and Agrawal, 2005) is capable of handling large amounts of data; in our paper we apply the idea of streaming processing to data that is stored in a database.

Our algorithms assume that the data is rarely changed. This assumption allows us to use pre-post-ordering and to physically sort the data as described below. This condition significantly restricts the number of scenarios in which the algorithms can be applied. Nevertheless, there are still a number of cases in which the data does not change often. For example, if an abstract syntax tree is changed, the entire tree can be replaced, because it has to be parsed again anyway (Panchenko et al., 2010). In this case the vertices of the new (updated) tree are placed at the end of the table, new pre-orders and post-orders are assigned to preserve the physical order, and the old data is then removed from the table.

The paper is organized as follows. Section 2 discusses a naïve approach to the problem and introduces the two novel algorithms. In Section 3 the algorithms are evaluated. The last section summarizes the results and discusses future work.

2 ALGORITHMS

Once the XPath query has been successfully executed, the paths to the result vertices have to be computed. Figure 1 shows an example that will be used throughout this paper to explain the proposed algorithms.

2.1 Naïve Approach

A frugal way of solving the problem is appending /ancestor-or-self::* to each query. However, beyond a certain database size this approach is not feasible, since the computation would take too long. For the example of Figure 1, the ancestors of the result could be computed with this SQL query:

SELECT * FROM vertices
WHERE (preOrder < 2 AND postOrder > 3)    /* Vertex C */
   OR (preOrder < 6 AND postOrder > 7)    /* Vertex E */
   OR (preOrder < 11 AND postOrder > 14)  /* Vertex H */
   OR (preOrder < 17 AND postOrder > 20)  /* Vertex J */

It is clear that for each additional result vertex another condition has to be added to the WHERE clause. Since queries with low selectivity in large systems can produce thousands of hits, this approach quickly exceeds the maximum query size of most database systems.
But even a manual implementation would not solve the fundamental problem: it is necessary to iterate over the entire vertex set, and for each vertex it has to be checked whether it is an ancestor of one of the result vertices. This leads to a runtime complexity of \(O(|\text{vertices}| \cdot |\text{input}|)\). Thus, assuming that for a given query the size of the result is proportional to the size of the database, the naïve approach has quadratic complexity. The algorithm is depicted in the following listing.

Algorithm 2.1: NAIVE-APPROACH(input, vertices)
  comment: evaluate each input vertex
  for each c ∈ input
    for each v ∈ vertices
      comment: check whether the vertex qualifies as an ancestor
      if v.pre < c.pre and v.post > c.post
        then output(v)  comment: add to the result set

2.2 Sort-tilt-scan Algorithm

Since in general the entire tree is significantly bigger than the result set, it is better to iterate over the tree in the outer loop and use the inner loop to iterate over the result set. Even if the entire database is kept in memory, switching the loops slows down the execution; however, this loss of performance may be justified if the overall efficiency of the algorithm can be improved. With a sorted index over the pre-order attribute, it is possible to stop searching for ancestors of a result vertex once the vertex itself has been reached. Furthermore, for vertices in the rear part of the tree, the same thing can be done even faster with an index that is sorted by post-order. Figure 2 shows how this approach would process the example tree.

For each vertex, it first has to be decided in which table it has a higher position (using the function usePreOrderSorting). This can be done in constant time if the pre-post-order numbering scheme is adjusted: when using separate counters for pre- and post-order, the positions in the sorted indices can be derived directly from the order numbers. For consistency with the other figures, this modification is not shown in the example. After the index has been selected, the algorithm searches for ancestors until the result vertex is reached. Because of the sorting, no ancestors will be found after that point. This algorithm has the same complexity as the naïve approach, but improves the execution time by a constant factor. The algorithm is depicted in the following listing.

Algorithm 2.2: SORT-TILT-SCAN(input, preOrdered, postOrdered)
  comment: evaluate each input vertex
  for each c ∈ input
    comment: decide which sorting to use
    if usePreOrderSorting(c)
      then for each v ∈ preOrdered while v.pre < c.pre
             if v.post > c.post
               then output(v)  comment: v is an ancestor of c
      else for each v ∈ postOrdered while v.post > c.post
             if v.pre < c.pre
               then output(v)  comment: v is an ancestor of c

We also tried to further improve the performance by calculating the minimum pre-order and maximum post-order of the input; starting the table scans there should reduce the number of tuples to be scanned. However, in practice this optimization improves performance only negligibly, because of the distribution of the pre- and post-orders.

2.3 Single-pass-scan Algorithm

As can be seen in Figure 2, some vertices are checked multiple times. This happens when these vertices are ancestors of more than one result vertex. However, for the final result it is not important to find these vertices more than once. This trait can be exploited if the table and the result vertices are sorted by pre-order.
Once all ancestors of a result vertex \( V_1 \) have been found, some ancestors of the next result vertex \( V_2 \) have been found as well. As can be seen in Figure 3, all ancestors of \( V_2 \) that are not ancestors of \( V_1 \) must have a higher pre-order than \( V_1 \). Thus, for computing the ancestors of a result vertex \( V_r \), in a table sorted by pre-order only the vertices between \( V_{r-1} \) and \( V_r \) have to be checked. This algorithm assumes that the input vertices are sorted by pre-order. Figure 4 shows how the algorithm scans the table; the removed overlapping of the result paths is shown in Figure 3.

As can be seen in Figure 4, the algorithm iterates over the vertex set only once. In the worst-case scenario, when at least one result vertex is at the very end of the table, the execution time of the algorithm depends only on the size of the vertex set and is independent of the number of result vertices. Thus, especially for queries with low selectivity, a good execution time is expected.

After the single-pass-scan is done, each of the output vertices has to be mapped to the corresponding ancestors. This can be done using the naïve approach or the sort-tilt-scan. This additional step affects the performance only negligibly, because the number of output vertices is significantly smaller than the size of the entire table. The algorithm is depicted in the following listing.

Algorithm 2.3: SINGLE-PASS-SCAN(input, preOrdered)
  comment: evaluate each input vertex
  for each c ∈ input
    comment: continue with the last v
    do
      if v.preOrder > c.preOrder
        then break  comment: no ancestors of c after this point
      else if v.postOrder > c.postOrder
        then output(v)
      v ← next vertex in preOrdered
    while v exists

3 EVALUATION

When writing algorithms for large datasets, it is important to have them operate as directly as possible on the original data. Furthermore, for the purpose of comparing the algorithms, the execution should not be affected by database-specific optimizations. Therefore, a simple prototype was developed in Java for the evaluation. The prototype uses a column store that is implemented with int arrays. On startup, the generated data is loaded from a file, sorted by pre-order. Additionally, a sorted index is created on the post-order attribute. Each algorithm accesses the data with an iterator that hides the column architecture. This provides data access in constant time and preserves the benefits of a column store, such as sequential reading. As a result of the computation, the indices of the selected vertices are stored in an array list, which provides amortized constant time for inserts.

The generated data consisted of 2,000 equally shaped trees that were combined into one large super tree. Each tree had a depth of seven with four children per vertex. In total, the data set contained 10,922,000 vertices. Since the location of the ancestors is independent of the evaluation of the query (some algorithms for streaming evaluation of XPath are capable of locating ancestors during the evaluation of the query; in this paper we focus on those that are not), the set of vertices whose ancestors are to be computed was generated. Although a number of standard XML benchmarks exist, we decided to generate data because this shows the properties of the algorithms best and because the standard benchmarks do not include such a query. We performed our tests on a Linux machine equipped with an Intel Xeon CPU E5450 clocked at 3 GHz.
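To make Algorithm 2.3 concrete, the following Python sketch (an illustration only, not the Java prototype described above) represents each vertex as a (preOrder, postOrder) pair; both the table and the input must be sorted by pre-order.

```python
# Python rendering of Algorithm 2.3 (single-pass-scan). The scan position
# persists across input vertices, so the table is traversed only once.

def single_pass_scan(table, input_vertices):
    """table and input_vertices: lists of (pre, post) pairs, sorted by pre."""
    output = []
    it = iter(table)
    v = next(it, None)
    for c in input_vertices:
        # continue with the last v
        while v is not None:
            if v[0] > c[0]:
                break                    # no ancestors of c after this point
            if v[1] > c[1]:
                output.append(v)         # v lies on the path from the root to c
            v = next(it, None)
    return output

# Example: the root (1, 20) and the inner vertex (4, 19) are the located
# ancestors of the two input vertices.
assert single_pass_scan(
    [(1, 20), (2, 3), (4, 19), (5, 10)],
    [(2, 3), (5, 10)],
) == [(1, 20), (4, 19)]
```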
The execution time of the algorithms mainly depends on the size of the database and the selectivity of the original query. Figure 5 presents the dependency of the query execution time on the query selectivity. The selected vertices were distributed randomly in the super tree. Each data point shows the average of five runs with a different random seed. As expected, the execution times of the naïve approach and the sort-tilt-scan grow linearly with the selectivity, while the execution time of the single-pass-scan remains almost constant. A slight increase can be observed because of the additional overhead of adding more vertices to the ancestor list. We also compared our implementation with MySQL and MonetDB. Although this was not a primary goal of our research, we added these measurements to illustrate that conventional database systems behave similarly to our naïve implementation.

In this test, the vertices were distributed randomly throughout the entire tree. However, when searching for certain vertices, it is likely that most matches concentrate in a certain area of the table. To represent this trait, the ordered vertex set was split into 200 equal subsets. Then, for each of 15 tests, all vertices were selected from only one of these subsets. Figure 6 shows the result of these tests. The selectivity of the input set of vertices was chosen to be 0.0007%. The naïve approach exhibits almost constant execution time, because the entire table has to be scanned anyway. Since the other two algorithms start by iterating over the vertex set, beginning with the smallest pre-order, they are faster when all hits can be found early in the table. Furthermore, the sort-tilt-scan is also fast if all hits concentrate at the end of the table. With increasing pre-order of the result vertices, the algorithms become linearly slower; as expected, the sort-tilt-scan becomes faster again in the second half, when the vertices rise towards the top of the table that is sorted by post-order.

In the last test run, the execution time was measured for different database sizes. To simulate that all vertex sets were queried with the same query, the selectivity for creating the result set was 0.0007% for each run, which corresponds to 80 hits in the full database. The results of this test are shown in Figure 7. On the one hand, one can see that the execution times of the naïve approach and of the sort-tilt-scan grow quadratically, caused by the simultaneous linear increase of the database and result set sizes. On the other hand, the single-pass-scan shows linear behavior, as predicted by the analysis in Section 2.3. Comparing the overall performance for larger result sets, the single-pass-scan is orders of magnitude faster, while the sort-tilt-scan is about twice as fast as the naïve approach.

Since the single-pass-scan scans the tuples of the table sequentially, it can exploit the advantages of modern processors if the table is stored using a column-oriented layout. Furthermore, to reduce the amount of scanned data, a simple dictionary compression can be used (Cockshott et al., 1998). Figure 8 illustrates the impact of the physical data layout on performance. In our prototype we changed the layout of the arrays as proposed by Li et al. (Li et al., 2004). The performance advantages come at the cost of additional space requirements, because a sorted index is needed.
Furthermore, the performance of data manipulation operations (insert, update, delete) is affected because of the need to maintain the index and to keep the records sorted by pre-order.

4 CONCLUSION AND FUTURE WORK

This paper showed how the computation of the paths from the root to the result vertices of an XPath query can be improved. To this end, two algorithms were proposed. The algorithms were designed for a relational in-memory database and rely on pre-post-order numbering. The sort-tilt-scan improves on the naïve approach by using two sorted tables and minimizing the number of rows that have to be scanned per vertex to find its ancestors. The single-pass-scan exploits the fact that the result is a set and does not need to record how often a certain vertex was found as an ancestor; in this way it is possible to solve the problem with a single table scan.

In the evaluation it was found that the single-pass-scan greatly improves the performance of the task, especially for queries with high selectivity. It was significantly faster than the other algorithms, regardless of the database size, the query selectivity or the distribution of the results.

Since the single-pass-scan is based on a table scan, it offers possibilities for parallelization. Further tests should show how the algorithms behave in a multithreaded environment. Another approach to fetching the paths to the result vertices would be to use a streaming-based XPath engine such as XSQ (Peng and Chawathe, 2005). It could be modified to keep a stack of ancestors and write them into the result once the query produces a hit. However, this might slow down the actual execution of the query, as the advantage of targeted, index-based access to the data could not be used.

Finally, it should be mentioned that, with a slight modification, the algorithms can also be used to fetch the descendants of a set of vertices. This way, a new potential for optimizing the implementation of the XPath axes arises. Thus, the algorithms proposed in this paper can improve not only the post-processing of the result, but the evaluation of the query as well.
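The "slight modification" mentioned above amounts to flipping the two comparisons; a minimal, hypothetical illustration in the style of the earlier sketch:

```python
# Descendants of c are the vertices entered after c and left before it, so
# both comparisons flip relative to the ancestor predicate. The naive form is
# shown; the same flip applies to the scan-based algorithms.

def descendants_of(table, c):
    """table: list of (pre, post) pairs; c: a (pre, post) pair."""
    return [v for v in table if v[0] > c[0] and v[1] < c[1]]
```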
Big SaaS: The Next Step Beyond Big Data

Hong Zhu, Ian Bayley, M. Younas, David Lightfoot, Basel Yousef and Dongmei Liu
Applied Formal Methods Research Group
Department of Computing and Communication Technologies
Oxford Brookes University, Oxford OX3 3JX, UK
E-mail: hzhu@brookes.ac.uk

Abstract

Software-as-a-Service (SaaS) is a model of cloud computing in which software functions are delivered to the users as services. The past few years have witnessed its global flourishing. In the foreseeable future, SaaS applications will integrate with the Internet of Things, Mobile Computing, Big Data, Wireless Sensor Networks, and many other computing and communication technologies to deliver customizable intelligent services to a vast population. This will give rise to an era of what we call Big SaaS: systems of unprecedented complexity and scale. They will have huge numbers of tenants/users interrelated in complex ways. The code will be complex too and will require Big Data, but it will provide great value to the customer. With these benefits come great societal risks, however, and there are other drawbacks and challenges. For example, it is difficult to ensure the quality of data and metadata obtained from crowdsourcing and to maintain the integrity of the conceptual model. Big SaaS applications will also need to evolve continuously. This paper discusses how to address these challenges at all stages of the software lifecycle.

1 Introduction

Software-as-a-Service (SaaS) is a cloud computing model in which computer applications are delivered to the users as services [1, 2]. It contrasts with the hitherto more conventional practice of selling applications as products to be owned by the customer, and has led to a revolution in what functions can be offered. Table 1 lists just some of the many successful SaaS applications that have arisen over the past few years.

<table> <thead> <tr> <th>SaaS</th> <th>Application Area</th> </tr> </thead> <tbody> <tr> <td>Booking.com</td> <td>Hotel booking</td> </tr> <tr> <td>EasyChair</td> <td>Conference management</td> </tr> <tr> <td>Ebay</td> <td>Online shopping</td> </tr> <tr> <td>Facebook</td> <td>Web portal and Social networking media</td> </tr> <tr> <td>Gmail</td> <td>Message communication</td> </tr> <tr> <td>Just Eat</td> <td>Online order for Take Away restaurants</td> </tr> <tr> <td>Lastminute.com</td> <td>Travel agency</td> </tr> <tr> <td>LinkedIn</td> <td>Social networking media for professionals</td> </tr> <tr> <td>Moodle</td> <td>Online Learning Platform</td> </tr> <tr> <td>ResearchGate</td> <td>Social networking media for researchers</td> </tr> <tr> <td>Rightmove</td> <td>Estate Agency</td> </tr> <tr> <td>Salesforce.com</td> <td>Customer Relationship Management</td> </tr> <tr> <td>Whatsapp</td> <td>Instant message communication</td> </tr> </tbody> </table>

2 The Growth of SaaS

Those SaaS applications well known to the public today are mostly small, but our vision of the near future is that an era of Big SaaS is emerging. Here, we define Big SaaS applications as those SaaS applications with the following characteristics.

(1) **Big Tenancy.** A Big SaaS application usually serves a large number of tenants and users that may well be interrelated in complex ways. Examples of this include:

- **Just Eat**: 40,800 takeaway restaurants (in 13 countries) and 6 million users with active accounts.
- **Booking.com**: 638,960 properties (in 211 countries) with over 800,000 room-nights reserved per day.
- **Rightmove** (the UK's largest online estate property advertisement portal): 19,304 agent and new-homes advertisers, for more than 1 million properties.

Examples of complex interrelationships include hierarchies (e.g., a tenant may have sub-tenants) and users being associated with many tenants or with no particular tenant.

(2) **Big Data.** Large volumes of data will be processed when the number of tenants and users is large. For example, in January 2014, the Rightmove.com website had a record 100 million visits viewing 1.5 billion pages.

(3) **Big Code.** For a Big SaaS application, the software will typically be large in size and high in complexity. Already, SaaS applications are connected to social media or even offer their own domain-specific social networking; Salesforce and Moodle are examples of this. Many already have mobile phone or tablet apps. Importantly, in the near future, this will extend to the Internet of Things, Wireless Sensor Networks, robots, etc., making the size and complexity of the code even greater.

(4) **Big Value.** SaaS applications already provide extra services that were hitherto not possible. For example, Booking.com provides two types of cross-tenant services that individual hotel websites cannot: (a) for the hotel customers, access to a network of over 8,000 affiliate partners; (b) for property owners, personalized account management to help optimize revenue. Similarly, Rightmove.com claims that property sellers are five times more likely to find a buyer through it than through any other website. Because of this Big Value, SaaS applications generate more revenue and profits with greater productivity than ever before, and it seems likely that this trend will continue. For example, Rightmove generated £167m revenue in 2014, up 19% from £140m in 2013, with a similar increase in profits. So, it seems likely that SaaS applications will advance towards Big SaaS, and Big Value in particular.

3 The Challenges

The development of Big SaaS applications poses three types of challenges common to all socio-technical systems: (1) social challenges, for society as a whole, to accept the changes to various business, finance, legal and moral aspects; (2) technical challenges, for industry and researchers, to develop new techniques and novel applications of existing techniques; and (3) engineering challenges, for engineers and methodologists, to develop new processes, methods and tools to produce applications systematically, efficiently and even automatically. Recent effort has focused on enabling techniques for SaaS applications; the engineering, on the other hand, is still ad hoc, so we will focus only on it. The following are what we recognize as the grand challenges to the advance of Big SaaS.

3.1 Societal Risks

For a SaaS application, the risk $Risk_{SaaS}$ of failure is:

$$Risk_{SaaS} = R \times T \times C,$$

where $T$ is the number of tenants residing in the system, $R$ is the failure rate of the system, and $C$ is the average consequence of a failure per tenant. For a software application system that is owned by the customers, the total risk $Risk_{WS}$ of failure globally is:

$$Risk_{WS} = R' \times C' \times S,$$

where $S$ is the number of copies of the system running at the same time globally, $R'$ is the failure rate of the system, and $C'$ is the average consequence of a failure to the customer who runs a copy of the software. Assume that each tenant runs one copy of the system (i.e., $T = S$), and that the SaaS is of the same level of reliability as the customer-owned software (i.e., $R = R'$).
Then we have that $Risk_{SaaS} = Risk_{WS}$ if $C = C'$. From this one might conclude that the two modes of software delivery have equal risks of failure. However, the calculation makes sense only for so-called individual risks. There is also a concept of societal risk, borrowed from safety engineering, under which the risks from SaaS are considered greater.

In general, individual risk is the risk for one person of loss of property or life due to system failures. In safety engineering, whether the risk is tolerable can be judged relatively easily for individuals, as people knowingly take and accept risks all the time. Travelling in a car brings the risk of an accident, but a train crash that kills many people causes an immense public reaction even though many more die per year on roads than on trains. These situations are addressed by estimating societal risk, expressed as the relationship between the probability of a catastrophic incident and the number of users affected. It can be represented as an $F$-$N$ curve that plots the expected frequency ($F$) of failures affecting a given number ($N$) or more of users. Figure 1 illustrates the difference between the societal risks for SaaS and those for customer-owned software of similar reliability. These risks are exacerbated if failure recovery is slow, as with the two recent outages of Salesforce's CRM system: each took more than 10 hours to recover from, during which the users of more than 100,000 tenants were deprived of the service.

3.2 Trustable Crowdsourcing

When there are a large number of tenants, it is highly desirable that a SaaS application supports customization, so that the specific needs of the customers and their users can be accommodated. However, for Big SaaS, such customization cannot be done manually by the service provider. The solution adopted by almost all existing successful SaaS applications is **crowdsourcing**: the customers perform the customization themselves. For example, **Rightmove** provides a facility for estate agents themselves to upload information on the properties for sale or to let. Likewise, **Booking.com** enables property owners to set room prices and room availabilities. Similarly, **eBay** enables sellers to enter the information about the goods for sale and the method of payment. Such facilities are fairly simple, however, when compared to **Salesforce's** facility that lets customers build their own applications. An unsolved problem is how to ensure the quality of data and of system configurations obtained by crowdsourcing. This is the second grand challenge for Big SaaS.

3.3 Continuous Evolution

Continuous evolution has been applied in software development practice for web-based systems as a part of agile methodologies. In this approach, a software system is revised, tested and updated so frequently that the notion of versions and releases no longer makes sense. Moreover, continuous evolution also requires that such updates and releases go live without any interruption to service. This is of paramount importance for Big SaaS, but the unprecedented scale and complexity of Big SaaS present a challenge. Imagine the situation where hundreds of thousands of tenants each have their own customized version of the system running simultaneously on a number of big clusters distributed around the globe, while at the same time numerous new tenants are performing customization and configuration to join the system.
As both of these are happening, developers are committing multiple changes to the system in parallel to fix bugs, to introduce new functions, and to refactor the system structure. These changes will inevitably interact with each other, while each change may have a devastating impact on a large number of users. After a few days of such frequent modifications, the relations between the components could become a spaghetti-like mess. No current software change impact analysis tool could be used here, and yet updates will have to go live without interrupting the service. The pressure to complete the testing, verification and validation of each change within a short time and to a high level of adequacy will be several orders of magnitude higher than ever before. To enable Big SaaS to evolve continuously, we must overcome barriers in software engineering, especially in the methods and tools for change impact analysis, for testing, verification and validation, and for on-line refactoring of software structure.

3.4 Conceptual Integrity

Conceptual integrity is one of the key features of a good software design. It means that there is a simple conceptual model of the system through which its structure, functionality and dynamic behavior can be understood. The design of a good conceptual model for a Big SaaS application and the maintenance of its integrity both play a crucial role in development and maintenance; they also play a role in the customization and continuous evolution of the system. Currently, such a conceptual model is rarely formally defined, and often not even documented explicitly, but conveyed instead informally through demonstrations, case studies, online training materials, marketing articles, etc. The advantage of such an approach is that it is user-oriented, but it leaves much scope for ambiguity, incompleteness and misunderstanding. On the other hand, most online documentation is too developer-oriented, with technical details in place of information about the conceptual model.

Ontology and semantic web services can provide user-understandable descriptions of services at the conceptual model level. However, a weakness of ontology-based service descriptions is that they are fragmented. Moreover, such documentation and descriptions of services are not verifiable and testable. A link seems to be missing from the conceptual model to the low-level system specification.

4 Research Directions

In this section, we seek potential solutions to the engineering problems raised in the previous section. We focus on four phases of the software development lifecycle: functional specification, architectural design, implementation and testing. For each of these, we briefly review the existing work, outline our approach, report the preliminary progress we have made so far, and point out directions for future research.

4.1 Design: Fault Tolerance Architectures

The societal risk must be addressed by appropriate architectural design of SaaS applications. Chong and Carraro asserted that "A well-designed SaaS application is scalable, multi-tenant-efficient, and configurable" [1]. These are the three key differentiators that separate it from a poorly designed SaaS application. Based on architectural features, they proposed the 4-level maturity model of SaaS applications shown in Figure 2. Level 1 is ad hoc, the least mature, and essentially the same as the traditional application service provider (ASP) model of software delivery.
Each subsequent level adds one of the three key features (configurability, multi-tenant efficiency and scalability, in that order). It is no surprise that almost all successful SaaS applications nowadays employ an architecture model of levels 3 and 4, and it seems inevitable that level 4 will be needed for Big SaaS, because, as Chong and Carraro argued, "[such] a SaaS system is scalable to an arbitrarily large number of customers ... without requiring additional re-architecting of the application, and changes or fixes can be rolled out to thousands of tenants as easily as a single tenant" [1]. However, this architecture does not address the societal risks caused by system-level failures. Addressing this problem, in [3] we suggested integrating the architecture with a fault tolerance facility to reduce the consequences of system-scale failures, through a reduced probability of failure and quicker recovery from failure.

Fault tolerance is one of the most challenging issues in distributed and high performance computing [4]. The extensive research of the past few years, for cloud computing in particular, can be classified according to the fault to be tolerated. Resource-level fault tolerance aims to achieve high reliability in individual computing resources, such as processors, memory, I/O and network bandwidth, which are lent to users as services [5, 6]. Infrastructure-level fault tolerance techniques include those for virtual machines (VMs) or virtual clusters [7], which achieve the required availability and reliability by tolerating underlying hardware failures [8, 9]. At the platform level, fault tolerance facilities have been provided in various parallel programming models, such as MapReduce, in which a failed map or reduce task is restarted and/or relocated to a new compute node. The performance of the two most commonly used checkpoint/restart techniques for distributed systems, i.e., Distributed Multi-Threaded Checkpointing and the Berkeley Lab Checkpoint/Restart library, has been evaluated in the Amazon Elastic Compute Cloud (EC2) environment [10]. However, there is no work at the application level for SaaS. Moreover, almost all research on fault tolerance in cloud computing assumes that a set of virtual machines is deployed on a number of physical servers and that a virtual machine is created for one tenant/user. Thus, these techniques are only applicable to SaaS applications in the multi-instance architecture of Chong and Carraro's level 2, and not suitable for those in the multi-tenancy architectures of levels 3 and 4.

In summary, while some of the above techniques are useful for reducing the failure rate of lower-level entities, they do not satisfactorily address the problem of the high societal risks of Big SaaS. Current practice still relies on traditional periodic backup operations; for example, Salesforce backs up all data to tape storage on a nightly basis. This traditional checkpoint-and-rollback fault tolerance technique is unsatisfactory for Big SaaS applications; in fact, Salesforce's tenants also use third-party facilities for backing up their own data. Addressing this problem, in [3] we proposed a new approach called tenant-level checkpointing and implemented a prototype called Trench. In this approach, instead of saving the whole system's state, each checkpoint saves only the part of the system state related to a specific tenant.
This is important because saving the state of the whole system with one checkpointing operation would cause I/O contention and long delays, as all users of all tenants lose access to the system.

Figure 2: Four-level SaaS maturity model [1]

Figure 3: Integration of a fault tolerance facility with the SaaS application architecture

Figure 3 shows the architecture of such a fault tolerance facility and how it is integrated with the service-oriented SaaS application architecture [1]. In comparison with existing bulk checkpointing techniques, our preliminary theoretical and empirical studies demonstrated that tenant-level checkpointing increases performance by a factor of O(N), where N is the number of tenants [11]. It has the following advantages. First, while a SaaS application runs continuously, tenant-level checkpointing can target a specific tenant when the users of that tenant are less active. Thus, a checkpoint can be created without causing too much disruption to the normal operations of the system, as requests for services from other tenants are not blocked. Second, tenants with different quality-of-service requirements (e.g., different reliability levels) can be treated differently by having different checkpoint frequencies. Third, tenant-level checkpointing can be implemented to block only the users of the tenant being checkpointed, without affecting any other users. The experiments reported in [3] have shown that the latency of creating a checkpoint for a tenant depends only on the size of the tenant's state; it is independent of the number of tenants. Moreover, partial checkpointing enables different types of data to be treated differently, with the more important data being checkpointed more frequently. An example of higher-priority data would be metadata, as it plays an important role in SaaS applications. Finally, and most importantly, recovery from a system-scale failure can proceed tenant by tenant, so that the most important tenants are rolled back first. This significantly reduces the total outage time and hence the societal risk of system-scale failures.

It is worth noting that VM checkpointing, replication and live migration facilities [12] not only provide fault-tolerant solutions to reliability problems, but also balance service workload [13], reduce the energy consumption of data centers [14], and can even reduce the cost of subscription per user [15]. Similar benefits can be obtained from a tenant-level checkpointing facility like Trench for SaaS applications that do not run on virtual machines. Therefore, tenant-level checkpointing could be a viable fault-tolerance solution to the societal risk problem of Big SaaS; a schematic sketch in code is given below.

4.2 Specification: Algebraic Method

Formal methods have proved their value through successful applications in safety-critical systems. They can significantly improve software reliability and ensure system safety, and their application in the development of Big SaaS can reduce its societal risk, too. Formal methods are widely regarded as too expensive to be used, although this is considered to be a myth [16, 17]; in any case, the great value of Big SaaS applications makes formal methods viable, as their cost would then be justifiable. They can also be easy for ordinary software engineers to learn [18]. Moreover, we believe that formal methods can provide better solutions to the problems of maintaining conceptual integrity, trustworthy crowdsourcing, and continuous evolution. The following reports our preliminary work on how formal methods address these issues.
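As promised above, the following Python sketch illustrates tenant-level checkpointing schematically. It is a hypothetical illustration under an assumed state layout and storage interface, not the Trench implementation.

```python
import pickle
import threading
import time

# Schematic sketch of tenant-level checkpointing: each tenant's slice of the
# system state is serialized independently under its own lock, so creating a
# checkpoint blocks only that tenant's requests, and its latency depends only
# on that tenant's state size. State layout and storage are assumptions.

class TenantCheckpointer:
    def __init__(self, tenant_states):
        self.states = tenant_states                          # tenant_id -> state object
        self.locks = {t: threading.Lock() for t in tenant_states}

    def checkpoint(self, tenant_id, store):
        with self.locks[tenant_id]:                          # blocks only this tenant
            snapshot = pickle.dumps(self.states[tenant_id])
        key = (tenant_id, time.time())
        store[key] = snapshot                                # other tenants keep running
        return key

    def restore(self, tenant_id, store, key):
        # Recovery can proceed tenant by tenant, most important tenants first.
        with self.locks[tenant_id]:
            self.states[tenant_id] = pickle.loads(store[key])
```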
4.2.1 Support for Crowdsourcing-Based Customization

As discussed in Section 3.2, it is highly desirable to include a crowdsourcing-based customization facility in Big SaaS applications. In this approach, services are discovered and composed by the customers with little support from the service provider. One approach to realizing such customization is to employ semantic descriptions of the services, as illustrated in Figure 4. The results of these customizations and compositions must be of high reliability, given our requirement to minimize societal risks. To achieve this, services need accurate semantic descriptions, which should also be the following:

- **Comprehensible**: easy for users to understand, even if they have no professional IT knowledge or skills.
- **Abstract**: with the design and implementation details hidden from the users, both for comprehensibility and to protect intellectual property.
- **Machine-searchable**: for the discovery, composition and configuration of services.
- **Testable**: so that service providers and users can both verify the service's correctness with respect to its semantic description.

However, no existing technique satisfies all of these requirements. Existing techniques tend to fall into two categories: the majority are based on ontologies and use a vocabulary to annotate services; the others are based on the mathematical notations of formal methods.

Semantic Web Services are an example of the former approach [19], and OWL-S was the first major ontology definition language for this purpose [20]. It provides a set of constructs for describing the properties and capabilities of Web Services in a machine-readable format. Formal methods were applied to provide a precise mathematical meaning in a formal ontology. An alternative approach is the Web Service Modelling Ontology (WSMO) [21], which is a conceptual model that uses the Web Services Modelling Language (WSML) [22]. As well as Big Web Services, work has also been carried out on how to specify the semantics of RESTful web services, for example MicroWSMO/hRESTS [23], WADL [24] and SA-REST [25]. However, the ontological approach cannot give a testable definition of a service's function, because any ontology is limited to stereotypes formed from the relationships between the concepts and their instances.

Formal methods, as an alternative to the ontological approach, have been developed over the past 40 years to define the semantics of software systems in mathematical notations. One such formal method, algebraic specification, was first proposed in the 1970s as an implementation-independent specification technique for defining the semantics of abstract data types. Over the years, it has been advanced to specify concurrent systems, state-based systems and software components, all based on the solid foundations of the mathematical theories of behavioural algebras [26] and co-algebras [27]. We argue that it is particularly suitable for the development of Big SaaS. Algebraic specifications are at a very high level of abstraction and are independent of any implementation details. One attractive feature is that they can be used directly in automated software testing; see Section 4.4. This feature is particularly important for SaaS engineering because, when services are customized and composed by the customer, testing must be performed automatically, without the developer's support. In [28], we investigated the application of the algebraic specification method to service-oriented software by extending and combining the behavioural algebra and co-algebra techniques.
The algebraic specification language CASOCC, which was originally designed for traditional software entities such as abstract data types, classes and components, was extended to CASOCC-WS for the formal specification of Big Web Services. A tool was developed to automatically generate the signatures of algebraic specifications from WSDL descriptions of Big Web Services. CASOCC-WS was also applied to RESTful web services [29]. A tool was developed to check the syntax-level consistency of formal specifications, and a case study was conducted applying CASOCC-WS to a real industrial system, GoGrid. Building on this work, a new algebraic formal specification language called SOFIA [43] was proposed to improve the usability of algebraic specification languages when applied to services.

However, algebraic specifications and other formal methods do not directly support efficient searching of services. To bridge the gap between algebraic specifications and ontological descriptions, we proposed in [30] to derive the latter from the former, thereby augmenting algebraic specification with the machine-readable and human-understandable attributes of an ontology. A software tool called TrS2O (Translator from Specification to Ontology) has been designed and implemented [30]. It translates formal specifications in SOFIA into ontological descriptions of services in OWL. Figure 6 shows the overall structure of the TrS2O tool.

![Figure 6. The Overall Structure of the TrS2O Tool](image)

![Figure 5. Ontology generated from the SOFIA specification](image)

A case study on the RESTful web service interface of an actual industrial system, GoGrid, shows that the approach is practically useful.

4.2.2 Formal Specification of Conceptual Models

One advantage of the algebraic method is that the infrastructure, the platform, the application domain knowledge, and the services of a SaaS application can all be formally specified in the same language and decomposed into a number of reusable specification packages. For example, in the case study of GoGrid's RESTful API, we first specified RESTful web services in one package, then used that package to specify the basic constructs of computing infrastructure, and then used both packages to specify the services that GoGrid provides. Figure 5 gives the ontology generated from the SOFIA specification of RESTful web services. The specification of domain concepts can therefore serve as a formal specification of the conceptual model of the system. This specification supports automated testing, and its internal consistency can be verified. This enables it to support the maintenance of conceptual integrity, too.

4.3 Implementation: New Paradigm of Programming

Currently, most web-based applications, including those for SaaS, are implemented in many different programming and scripting languages and even in several different paradigms. This complicates development and makes it difficult to build supporting tools. A desirable alternative is a single new paradigm that is particularly suitable for SaaS applications. The agent-oriented paradigm has long been considered suitable for dynamic environments such as the Internet [31], and many research efforts have been reported in the literature [32]. However, the IT industry has been slow to adopt the approach. There are a number of possible reasons for this.
First, the notion of agents seems too strongly linked to distributed artificial intelligence for software engineers to accept it. Second, there are no efficient implementations of agent-oriented programming languages. We now report our work in progress that addresses these problems.

4.3.1 Agent-Oriented Programming Language

To address the first problem, we proposed a simplified model of agents [33, 34]. Agents are service providers that consist of:

- **actions** that the agent can perform, representing the services it provides or the requests it can submit,
- **variables**, which represent the internal state of the agent,
- **behaviour rules**, forming the body of the service, that determine how requests are processed,
- **collaborating agents**, from which service requests are received. This set can be updated at runtime.

For example, the following is the Hello World example of the language CAOPLE, which we are developing.

```plaintext
caste Peer;
  action say(word: string);
  init say("Hello world!")
end Peer
```

A caste is the classifier of agents, so agents are instances of castes. In the above example, the caste Peer is defined. It can take the action say("Hello world!"), and it does so when the agent is created. An agent is therefore an active, autonomous computational entity. Castes can be extended to sub-castes, just as classes in object-orientation have subclasses. For example, the following is a sub-caste of Peer.

```plaintext
caste GreetingPeer inherits Peer;
  observes all in Peer;
body
  when exists A in Peer: say("Hello world!") do
    say("Welcome to the world!")
  end
end GreetingPeer
```

An agent of GreetingPeer observes the actions taken by all agents of Peer, as described in the observes clause, which defines its collaborating agents. When there is an agent in the caste Peer that takes the action say("Hello world!"), it will react with the action say("Welcome to the world!"). In general, an agent communicates with other agents by taking observable actions to send messages, and it receives messages by observing the observable actions of its collaborating agents. An action can be targeted at one agent or at a set of specific agents. For example, the say statement can be changed to one of the following:

```plaintext
say("Welcome to the world!") to All in Peer;
say("Welcome to the world!") to A;
```

If the target receiver is omitted, the default is public. In contrast to the notion of class in object-oriented programming, an agent can be a member of multiple castes at once, and its membership can be changed dynamically at runtime by executing one of the caste membership statements:

- **Join** casteID: become a member of casteID;
- **Quit** casteID: quit the membership of casteID;
- **Suspend** casteID: suspend the execution of the body of casteID;
- **Resume** casteID: resume the execution of the body of casteID;
- **MoveTo** casteID: quit the current caste and become a member of the named caste.

Using castes and the inheritance relationships between them, one can encapsulate different behaviours in different contexts together with a set of related state variables, actions, and collaborating agents. Flexible casteship makes agents adaptable and easy to compose and configure. For example, the following shows how an agent can adapt its behaviour to the context by changing its caste membership.
```plaintext
caste CheerfulPeer inherits Peer;
body
  when exists A in Peer: say("Hello world!") do
    say("Hi, good morning.");
  end;
end CheerfulPeer

caste SmartPeer inherits Peer;
body
  when DateTime: Tick() do
    if DateTime.Day = Monday
      then Join FriendlyPeer
      else Join CheerfulPeer
    end;
  end;
end SmartPeer
```

The above illustrates just a few key features of the agent-oriented programming language CAOPLE; readers are referred to [34] for more details. In general, we believe that a new programming paradigm such as agent-orientation will enable the implementation of SaaS applications at a high level of abstraction. Thus, it is worth pursuing.

4.3.2 Implementation of the CAOPLE Language

Our approach to implementing the CAOPLE programming language is to translate CAOPLE source code into machine code for a virtual machine [35]. Our virtual machine, called CAVM, differs from other language-specific virtual machines, such as the JVM, in that it consists of two parts: a local execution engine (LEE) and a communication engine (CE). The LEE executes the program's computational code, while the CE realises communication between agents distributed over a computer network.

![Figure 7. Compiling, deploying and executing CAOPLE code](image)

As illustrated in Figure 7, the castes in a CAOPLE program are compiled so that one object code module is generated for each caste's source code. The module is deployed to a computer node that runs a communication engine. An agent of a caste can be created on any computer node that runs an execution engine; that node loads the object code module of the caste and executes the code. For cross-machine communication between agents, messages are sent to the communication engine where the caste resides and are then distributed to the execution engines where the target agents execute. They may pass through one or more other communication engines. The reader is referred to [35] for more details of the design, implementation and experimental results of CAVM.

4.4 Testing: Specification-Based Test Automation

Automated testing can play at least two roles in the development of Big SaaS: it supports continuous evolution and it ensures the quality of crowdsourcing in service customization. There are a number of approaches to automated testing for software in general and for service-oriented systems in particular. In [36], we proposed a collaborative approach that realizes automated testing of composite web services through the composition of test services, as illustrated in Figure 8. In this approach, each web service is accompanied by a testing service, and the framework of automated testing contains a number of general test services for test case generation, test adequacy measurement, test result correctness checking, etc. A test request for a composition of services is submitted to a test broker, which decomposes the testing task into subtasks if needed and searches for and invokes appropriate test services for each subtask. The searching and invocation of test services (and the initial registration) employ ontologies both of software testing and of the application domain.

![Figure 8. Collaborative Automated Testing of Web Services](image)

This approach was devised for web services and should be applicable to Big SaaS, but we believe a formal specification language like SOFIA would make test automation efficient without the need to develop various test services.
![Figure 9. Architecture of the ASSAT Testing Tool](image)

Techniques of software test automation based on algebraic specifications have been investigated since the 1980s, for procedural languages [37, 38], OO software [39, 40], component-based systems [41], etc. More recently, we have been developing an automated testing tool called ASSAT [42] for testing web services based on formal specifications written in SOFIA [43]. Figure 9 shows the architecture of the tool and Figure 10 shows its GUI. Such testing tools can achieve complete automation of the whole testing process, including test case generation, test invocation and test result correctness checking. Although SOFIA and ASSAT were originally developed for web services, the principles underlying the language and the implementation of the tool are applicable to Big SaaS. It is worth further research to adapt them to Big SaaS and to evaluate their effectiveness.

It is worth noting that there are two approaches to the quality assurance of customization. The first is the brute-force approach: all possible compositions of services and all possible configurations of the SaaS application are tested up to a certain level of combinatorial adequacy, say the coverage of all 2-way or 3-way combinations, before the system is released to the users. This approach is viable only when the number of possible service compositions and configurations is small. Unfortunately, even for a SaaS application of modest scale, there can be a huge number of test cases just to cover the 2-way or 3-way combinations of services and configurations. The second is the automated online testing approach: during the development process, testing focuses on the individual services to ensure that each service is correct with respect to its specification, and the most popular and important combinations and configurations of the services are also tested. When a user builds his or her own customized version of the system, the customization, which is a composition and configuration of the services, is then tested automatically against the specification. In this approach, automated testing plays a crucial role in supporting the customization of services. It requires testing to be performed with little human involvement, because crowdsourcing-based customization is conducted by the users.
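To give a sense of the combinatorial blow-up behind the brute-force approach, the following sketch in C counts the 2-way interactions that must be covered for k configurable options with v values each. The numbers are hypothetical, not measurements from ASSAT.

```c
#include <stdio.h>

/* Every unordered pair of options, C(k,2), can take v * v value
 * combinations, and each combination is a coverage obligation. */
long pairwise_obligations(int k, int v) {
    return (long)k * (k - 1) / 2 * v * v;
}

int main(void) {
    /* e.g., a modest SaaS with 50 configurable services, 4 settings each */
    printf("%ld interactions\n", pairwise_obligations(50, 4)); /* 19600 */
    return 0;
}
```

A covering array packs many such interactions into each concrete test case, but the number of required tests still grows quickly with k and v, which is why the online testing approach is preferable for customization.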
5 Conclusion

In this paper we argue that an era of Big SaaS is emerging. Big SaaS differs from existing SaaS applications in the number of tenants and users and in the complexity of their relationships, as well as in the size and complexity of the program code. Big SaaS applications will possess and utilize Big Data to provide great added value through their services. Developing them imposes grave challenges on software and service engineering: to reduce the societal risks to an acceptable level, to enable trustable crowdsourcing-based customization, to maintain the conceptual integrity of the system, and to support continuous evolution.

We argued that these challenges must be met at all stages of the software development lifecycle. In particular, in the specification phase, an algebraic specification language can support the formal development of service-oriented systems to improve reliability. It also helps to maintain conceptual integrity by providing a formal definition of the conceptual model, and it supports crowdsourcing-based customization by linking formal specifications to the ontological descriptions of services. Moreover, testing can be automated based on algebraic specifications, which also helps with continuous evolution. In the architectural design phase, a tenant-level checkpointing facility could play a significant role in reducing societal risks. In the implementation phase, a new paradigm of programming is desirable, and we are exploring the potential of an agent-oriented programming language. In the testing phase, automation is essential, and formal specification makes it possible.

References
{"Source-Url": "https://radar.brookes.ac.uk/radar/file/38f52df9-d120-430f-83f0-1da705b308ea/1/Zhu%20et%20al%20-%202015%20-%20Big%20SaaS.pdf", "len_cl100k_base": 7760, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 41677, "total-output-tokens": 10455, "length": "2e12", "weborganizer": {"__label__adult": 0.0002472400665283203, "__label__art_design": 0.00027942657470703125, "__label__crime_law": 0.0002493858337402344, "__label__education_jobs": 0.0006117820739746094, "__label__entertainment": 5.453824996948242e-05, "__label__fashion_beauty": 0.0001080036163330078, "__label__finance_business": 0.0003941059112548828, "__label__food_dining": 0.00025844573974609375, "__label__games": 0.0003705024719238281, "__label__hardware": 0.0005145072937011719, "__label__health": 0.00031495094299316406, "__label__history": 0.0001817941665649414, "__label__home_hobbies": 5.257129669189453e-05, "__label__industrial": 0.000225067138671875, "__label__literature": 0.00022614002227783203, "__label__politics": 0.0002092123031616211, "__label__religion": 0.0002734661102294922, "__label__science_tech": 0.014373779296875, "__label__social_life": 7.033348083496094e-05, "__label__software": 0.00957489013671875, "__label__software_dev": 0.970703125, "__label__sports_fitness": 0.00016582012176513672, "__label__transportation": 0.0003223419189453125, "__label__travel": 0.00014638900756835938}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 44780, 0.02426]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 44780, 0.36908]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 44780, 0.91072]], "google_gemma-3-12b-it_contains_pii": [[0, 3943, false], [3943, 8607, null], [8607, 14429, null], [14429, 18423, null], [18423, 22936, null], [22936, 26050, null], [26050, 31112, null], [31112, 34806, null], [34806, 38361, null], [38361, 44780, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3943, true], [3943, 8607, null], [8607, 14429, null], [14429, 18423, null], [18423, 22936, null], [22936, 26050, null], [26050, 31112, null], [31112, 34806, null], [34806, 38361, null], [38361, 44780, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 44780, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 44780, null]], "pdf_page_numbers": [[0, 3943, 1], [3943, 8607, 2], [8607, 14429, 3], [14429, 18423, 4], [18423, 22936, 5], [22936, 26050, 6], [26050, 31112, 7], [31112, 34806, 8], [34806, 38361, 9], [38361, 44780, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 44780, 0.06024]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
eb0e9ef1335af88e7ceb1a94e289f742670725c0
[REMOVED]
{"len_cl100k_base": 7217, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 29028, "total-output-tokens": 8743, "length": "2e12", "weborganizer": {"__label__adult": 0.0003662109375, "__label__art_design": 0.0005278587341308594, "__label__crime_law": 0.0006189346313476562, "__label__education_jobs": 0.001049041748046875, "__label__entertainment": 0.00013899803161621094, "__label__fashion_beauty": 0.00022733211517333984, "__label__finance_business": 0.0008831024169921875, "__label__food_dining": 0.0004627704620361328, "__label__games": 0.00136566162109375, "__label__hardware": 0.0014171600341796875, "__label__health": 0.0007061958312988281, "__label__history": 0.0005717277526855469, "__label__home_hobbies": 0.00021564960479736328, "__label__industrial": 0.001739501953125, "__label__literature": 0.00028443336486816406, "__label__politics": 0.0004940032958984375, "__label__religion": 0.0004837512969970703, "__label__science_tech": 0.34716796875, "__label__social_life": 0.00012767314910888672, "__label__software": 0.0178680419921875, "__label__software_dev": 0.62158203125, "__label__sports_fitness": 0.0004677772521972656, "__label__transportation": 0.0010356903076171875, "__label__travel": 0.00032448768615722656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33249, 0.03534]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33249, 0.38408]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33249, 0.90487]], "google_gemma-3-12b-it_contains_pii": [[0, 1685, false], [1685, 4849, null], [4849, 7656, null], [7656, 10416, null], [10416, 12752, null], [12752, 17075, null], [17075, 19697, null], [19697, 22984, null], [22984, 27109, null], [27109, 30353, null], [30353, 33249, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1685, true], [1685, 4849, null], [4849, 7656, null], [7656, 10416, null], [10416, 12752, null], [12752, 17075, null], [17075, 19697, null], [19697, 22984, null], [22984, 27109, null], [27109, 30353, null], [30353, 33249, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33249, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33249, null]], "pdf_page_numbers": [[0, 1685, 1], [1685, 4849, 2], [4849, 7656, 3], [7656, 10416, 4], [10416, 12752, 5], [12752, 17075, 6], [17075, 19697, 7], [19697, 22984, 8], [22984, 27109, 9], [27109, 30353, 10], [30353, 33249, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33249, 0.21]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
055d0afb104d6626f8fd02bbac71c17b19c9e3dc
SIMD in JavaScript via C++ and Emscripten

Peter Jensen, Intel Corporation, peter.jensen@intel.com
Ivan Jibaja, Intel Corporation, ivan.jibaja@intel.com
Ningxin Hu, Intel Corporation, ningxin.hu@intel.com
Dan Gohman, Mozilla, sunfish@mozilla.com
John McCutchan, Google Inc., johnmccutchan@google.com

1. Abstract

Emscripten, Mozilla's C/C++ to JavaScript compiler, can be used to port existing native C/C++ applications to the Web platform. When paired with a fast JavaScript engine, the applications run at near-native speeds. However, compute-intensive C/C++ applications, such as games and media processing, that make use of SIMD intrinsics or gcc-style vector code cannot achieve near-native speed, because JavaScript has no language support for expressing SIMD operations. This paper presents SIMD.JS, an evolving new JavaScript standard enabling SIMD operations at the JavaScript language level, and extensions to Emscripten enabling compilation of C++ programs that make use of SIMD intrinsics or gcc-style vector code. An extensive subset of the available C++ SIMD x86 intrinsics is supported. Using Emscripten, we show how C/C++ SIMD programs are compiled into JavaScript with equivalent speedups compared to scalar code. The powerful new combination of SIMD.JS-enabled JavaScript engines and Emscripten's newly added ability to use the SIMD.JS primitives for its code generation will allow a new set of compute-intensive C/C++ applications to be ported to the Web platform, enabling the next leap in performance on the web.

2. Introduction

Games, multimedia, image and video processing, and other compute-intensive applications are being ported to JavaScript and made available on the Web. A popular approach for porting existing applications, originally written in C/C++, to the web is to use Mozilla's Emscripten C++ to JavaScript compiler. Emscripten compiles C/C++ source code into a subset of JavaScript known as asm.js. Modern JavaScript engines are able to execute the asm.js JavaScript subset at near-native speeds without the use of any plugins. Historically, plugins have been the source of many security and instability issues in web browsers, so avoiding them is highly desirable. Typically, compute-intensive C/C++ applications make use of SIMD/vector code to achieve the desired performance. However, because JavaScript has no equivalent way of expressing SIMD operations, the JavaScript code resulting from compiling such C/C++ applications via Emscripten misses out on potentially significant performance gains. With the recent introduction of SIMD.JS, a new JavaScript language proposal, it is now possible to use SIMD JavaScript primitives to improve the performance of critical code, thus making better use of the SIMD hardware's full potential.

This paper explores the implementation and evaluation of Emscripten's ability to generate SIMD.JS code from its native C/C++ intrinsics counterparts. We present performance data for a set of SIMD/vector benchmarks written in C++ and an equivalent set of benchmarks handwritten in JavaScript. The SIMD vs. scalar speedups for the natively compiled C++ benchmarks are compared with the speedups for the handwritten JavaScript benchmarks, as well as for the benchmarks compiled into JavaScript from the C++ benchmarks. Google's V8 and Mozilla's SpiderMonkey JavaScript engines are used for JavaScript execution.

3. SIMD.JS

SIMD is short for Single Instruction, Multiple Data. It refers to CPU instruction-level data parallelism.
Most modern CPUs have a significant portion of their available instructions dedicated to operating on data in parallel. Typically, those instructions perform the same operation on the elements of short vectors, e.g. vectors of length 4, 8, or 16. Use of these instructions leads to increased performance, because more data processing is achieved with fewer instructions executed; fewer instructions also mean power savings, which is of utmost importance on mobile, battery-powered devices. Figure 1 shows how four scalar additions are combined into a single operation.

JavaScript is quickly emerging as one of the most popular languages among software developers. It was originally used for simple web page scripting to create interactive web pages. Around 2008, very efficient and high-performance JavaScript engines emerged (e.g., Firefox's TraceMonkey [?] and Chrome's V8). Since then, JavaScript has become a viable language for things beyond basic web page interactivity, as witnessed by its use in large web-based applications such as office suites: e-mail, document processing, etc. Also, large games, which were previously standalone, natively compiled programs, have been ported to JavaScript to run within the browser environment. More recently, JavaScript has been adopted as a server-side scripting language (node.js), and lately JavaScript has found its way to the mobile platform as a language that offers better portability between the different mobile platforms without sacrificing performance and features. For example, platform sensors (location, accelerometers, etc.) are accessible from JavaScript via W3C APIs.

Even with the past 7 years of JavaScript performance advances, the desire for better-performing JavaScript engines has not lessened, quite the contrary. It is a spiral that keeps on going: better performance leads to more uses, and more uses require better performance. Specifically, software that uses data parallelism to achieve adequate performance has, so far, been restricted to natively compiled languages such as C++, because such languages offer ways of utilizing the SIMD instructions available in modern CPUs. JavaScript has only one number type, Number, which is an IEEE-754 floating point number, and JavaScript offers no abstraction primitives for writing algorithms that utilize data parallelism. It is therefore imperative that this shortcoming be dealt with, so that the next leap in JavaScript performance is made possible. This is what the SIMD.JS proposal addresses.

SIMD.JS is an emerging standard developed collaboratively by Intel, ARM, Mozilla, Google, and Microsoft. It provides low-level data types and operations that map well onto the SIMD instructions available in the underlying hardware. The proposal is structured as an object hierarchy, with SIMD as the top-level global object. Currently, the defined data types and operations are a representative and useful overlap between the SIMD types and operations available in most modern CPUs.
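To give a flavor of the proposed API before the figures below, here is a minimal sketch of an average kernel written directly against SIMD.JS. It assumes the array length is a multiple of four, and the method names follow the 2015 draft of the proposal (they changed across drafts, so treat them as illustrative):

```javascript
// Average a Float32Array four lanes at a time (length % 4 == 0 assumed).
function averageSIMD(data) {
  var sum4 = SIMD.Float32x4.splat(0.0);
  for (var i = 0; i < data.length; i += 4) {
    sum4 = SIMD.Float32x4.add(sum4, SIMD.Float32x4.load(data, i));
  }
  // Horizontal sum of the four lanes.
  var sum = SIMD.Float32x4.extractLane(sum4, 0) +
            SIMD.Float32x4.extractLane(sum4, 1) +
            SIMD.Float32x4.extractLane(sum4, 2) +
            SIMD.Float32x4.extractLane(sum4, 3);
  return sum / data.length;
}
```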
Figure 1. Replacing four scalar additions with one SIMD addition

Figure 2. SIMD.JS object hierarchy

Figure 3. SIMD JavaScript code for finding the average of an array of numbers

Figure 4. JIT compiler generated code for the for loop in the average function

The polyfill also serves as documentation for the semantics and the approval process. As an example use case, Figure 3 shows the SIMD JavaScript code for finding the average of an array of floating point numbers. The numbers are held in a Float32Array typed array, data. The benefit of using SIMD operations for computing the average is that four numbers can be added in one operation, reducing the number of iterations by a factor of 4 and achieving an equivalent speedup.

The optimizing Just-In-Time (JIT) compiler in our Chrome/V8 SIMD-enabled prototype is able to produce the code in Figure 4 for the body of the loop. The code shows how the compiler is able to use 128-bit SIMD registers (xmm) to hold the value of sum and the addps instruction to add 4 single-precision numbers in one instruction. The details for this code snippet are as follows:

- lines 1-2: Check the loop index, i, against the upper bound and exit the loop if the upper bound is reached. eax holds the loop index and edx holds the upper bound.
- lines 3-4: A check that enables the JavaScript engine to abort execution if the loop has been running for too long. It prevents a user program from hanging the browser.
- lines 5-11: Bounds check for the SIMD float32x4 load operation. eax holds the index and ecx holds the upper bound.
- line 12: Load the 4 32-bit float values. edi holds the base address of the data. The reason the index is multiplied by 2 and not 4 is that V8 represents integers in bits 1-31, so the value in eax already holds the value of the index variable times 2.
- lines 13-14: Add the four 32-bit float values.

```c
float averageScalar(float *a, uint32_t length) {
  float sum = 0.0f;
  for (uint32_t j = 0; j < length; ++j) {
    sum = sum + (*(a++));
  }
  return sum / length;
}
```

Figure 5. Scalar C code for the `average` function

```javascript
function _averageScalar($a, $length) {
  $a = $a | 0;
  $length = $length | 0;
  var $addr$06 = 0, $add = 0.0, $j$05 = 0, $sum$0$lcssa = 0.0, $sum$04 = 0.0;
  if (($length | 0) == 0) {
    $sum$0$lcssa = 0.0;
  } else {
    $addr$06 = $a;
    while (1) {
      $add = $sum$04 + +HEAPF32[$addr$06 >> 2];
      $j$05 = $j$05 + 1 | 0;
      if (($j$05 | 0) == ($length | 0)) {
        $sum$0$lcssa = $add;
        break;
      } else {
        $addr$06 = $addr$06 + 4 | 0;
        $sum$04 = $add;
      }
    }
  }
  return +($sum$0$lcssa / +($length >>> 0));
}
```

Figure 6. JavaScript code generated by Emscripten for the `averageScalar` function

```javascript
var buffer = new ArrayBuffer(TOTAL_MEMORY);
HEAP8   = new Int8Array(buffer);
HEAP16  = new Int16Array(buffer);
HEAP32  = new Int32Array(buffer);
HEAPU8  = new Uint8Array(buffer);
HEAPU16 = new Uint16Array(buffer);
HEAPU32 = new Uint32Array(buffer);
HEAPF32 = new Float32Array(buffer);
HEAPF64 = new Float64Array(buffer);
```

All of these typed arrays are views on the same array buffer, so they all access the same physical memory. Notice that the index expression `$addr$06 >> 2` is shifted right by 2. This is because `$addr$06` is a byte address, and elements of HEAPF32 are 4 bytes each.

To enable the JavaScript JIT compilers to generate efficient code, two type coercion tricks are used. For integers and pointers, `expr | 0` is used to guarantee that the type of the resulting expression is a 32-bit integer; JavaScript semantics of the bitwise `|` operator dictate that the result is a 32-bit integer. A side effect of pointers being 32-bit integers is that compiled C/C++ programs are restricted to a 32-bit address space. For floating point numbers, the unary `+` operator is applied, because JavaScript semantics dictate that the resulting expression is a floating point number.
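The two coercions are easiest to see in a stripped-down example, hand-written here in the asm.js style rather than taken from compiler output:

```javascript
function addInt(a, b) {
  a = a | 0;            // parameter coerced to a 32-bit integer
  b = b | 0;
  return (a + b) | 0;   // result truncated back to 32 bits
}

function addDouble(a, b) {
  a = +a;               // parameter coerced to a double
  b = +b;
  return +(a + b);      // result marked as a double
}
```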
Emscripten has been used successfully to compile very large C/C++ code bases (100K+ lines of code). For example, both Epic's and Unity's game engines have been ported using Emscripten [?]. Game engines are one example of software that has optional implementations of performance-critical portions of the code using SIMD features. Since JavaScript has not had a way of utilizing these powerful low-level SIMD features of the CPU, Emscripten has not been able to compile these highly tuned implementations of the performance-critical sections of the code. However, with the introduction of SIMD.JS, Emscripten can now compile them as well.

5. Compiling C++ with SIMD intrinsics

Figure 7 shows a typical SIMD implementation of the average function in C, using x86 SIMD intrinsics. The `__m128` type holds 4 32-bit float numbers, and the `_mm_*` function calls are the SIMD intrinsics that operate on single-precision `__m128` values. For example, the `_mm_add_ps` intrinsic maps to the x86 `addps` instruction, which adds 4 32-bit float numbers in one operation. This allows the iteration count to be reduced by a factor of 4, resulting in an equivalent speedup. The JavaScript code produced by Emscripten from the code in Figure 8 is shown in Figure 9; it is virtually identical to the code resulting from the C code using intrinsics. If possible, developers should be encouraged to write their SIMD code using this more universal syntax.
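Figure 7 itself is not reproduced in this text. A sketch of what such an intrinsics-based implementation typically looks like is given below, assuming the length is a multiple of four; it is illustrative, not the paper's exact figure:

```c
#include <stdint.h>
#include <xmmintrin.h>  /* SSE intrinsics: __m128, _mm_add_ps, ... */

float averageSIMD(float *a, uint32_t length) {
    __m128 sum4 = _mm_setzero_ps();
    /* Add four floats per iteration instead of one. */
    for (uint32_t j = 0; j < length; j += 4) {
        sum4 = _mm_add_ps(sum4, _mm_loadu_ps(a + j));
    }
    /* Horizontal sum of the four lanes. */
    float lanes[4];
    _mm_storeu_ps(lanes, sum4);
    return (lanes[0] + lanes[1] + lanes[2] + lanes[3]) / length;
}
```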
6. Benchmarks

We have created a set of benchmark kernels, written in both C++ and JavaScript. Each kernel has both a scalar implementation and a SIMD implementation; the C++ SIMD implementation uses x86 intrinsics. The C++ kernels are compiled both with the clang C++ compiler and with the Emscripten compiler. This gives us a basis for comparing SIMD/scalar speedups for both C++ and JavaScript, as well as C++/JavaScript performance differences. For JavaScript execution we used our two SIMD-enabled JavaScript engines, V8 and SpiderMonkey. The benchmarks are written such that each kernel operation is executed as many times as it takes to run for about a second. This guarantees that the optimizing JavaScript JIT compilers kick in, which is essential for optimal performance.

When creating SIMD-optimized code from scalar code, there are basically two approaches: 1) combine a sequence of scalar operations into SIMD operations, and 2) combine multiple iterations of simple loops by replacing the scalar operations with equivalent SIMD operations. Typical examples of 1) are matrix/vector operations, where no loops are involved in the computations; a typical example of 2) is the average example shown previously. We cover both types of SIMD optimization in the benchmarks we have written. The benchmark kernels we collected performance data for are:

- **AverageFloat32x4**: Average 10,000 32-bit float numbers. The SIMD version of this kernel uses SIMD instructions in the loop to perform vector versions of the equivalent scalar operations, thereby achieving an iteration count reduced by the vector length (4). This kernel is the example used throughout the previous sections.
- **Mandelbrot**: Compute how many iterations it takes for $z(i+1) = z(i)^2 + c$ to diverge for the seed point c = (0.01, 0.01). Divergence is determined by $|z|^2 > 4$. Different seed points will typically diverge after different numbers of iterations. This results in a non-trivial SIMD/vector implementation, since the control flow in the loop differs for different seed points. Our implementation relies on computing an increment vector, with 0 for the vector elements that have already diverged and 1 for the elements that haven't (see the sketch after this list). However, for the chosen seed point the series never diverges, so the loop always runs up to the maximum number of iterations allowed, which is set to 100 for this kernel. The scalar version of the kernel computes the iteration count for one seed point, and the SIMD version computes the iteration counts for four seed points. As an example of how all the benchmark kernels are structured, the handwritten JavaScript and C++ versions of this kernel are shown in Figure 11 and Figure 10.
- **MatrixMultiplication**: Multiply two 4x4 matrices. For each element in the resulting 4x4 matrix, 4 scalar multiplications and 3 scalar additions are required. The SIMD version rearranges the source data, using a shuffle operation, and computes an entire row of the result matrix with 4 SIMD multiplies and 3 SIMD additions.
- **VertexTransform**: Multiply a 4x4 matrix with a 4-element vector. This is a common CPU-side operation for creating projection matrices for WebGL shaders. Typically, it is used to compute a transformation of a point in 3D space, e.g. rotation around an axis. For each element in the resulting 4-element vector, 4 scalar multiplies and 3 scalar additions must be computed. The SIMD version computes all 4 elements using SIMD multiplies and adds, thereby reducing the number of multiply and add instructions. Some shuffling is required to get the input data into the right lanes.
- **MatrixTranspose**: Transpose a 4x4 matrix. This is also a common operation when doing vector/matrix algebra: rows are made into columns. The scalar kernel simply moves the 16 elements around one by one. The SIMD version uses 8 shuffle operations.
- **MatrixInverse**: Compute the inverse of a 4x4 matrix. This is a complex operation, involving hundreds of multiply and add operations. There are several different ways of computing the inverse of a matrix; here we have chosen a method based on Cramer's rule [7]. This kernel is the most compute-intensive.

The sources for the C++ benchmarks can be found here: [7], and the sources for the handwritten JavaScript benchmarks can be found here: [2].
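The increment-vector trick used by the Mandelbrot kernel can be sketched in C with SSE intrinsics as follows. This is an illustration of the technique, not the benchmark's actual source: lanes that have already diverged contribute 0 to their iteration counts, the others contribute 1, so all four lanes run the same instructions.

```c
#include <xmmintrin.h>

/* Iterate z = z^2 + c for four seed points (cr[i], ci[i]) at once and
 * store the per-lane iteration counts (as floats) in counts[0..3]. */
void mandelbrot4(const float *cr, const float *ci, float *counts, int maxIter) {
    __m128 zr = _mm_setzero_ps(), zi = _mm_setzero_ps();
    __m128 count = _mm_setzero_ps();
    __m128 one  = _mm_set1_ps(1.0f);
    __m128 four = _mm_set1_ps(4.0f);
    __m128 re = _mm_loadu_ps(cr), im = _mm_loadu_ps(ci);
    for (int i = 0; i < maxIter; i++) {
        __m128 zr2 = _mm_mul_ps(zr, zr);
        __m128 zi2 = _mm_mul_ps(zi, zi);
        /* increment = 1.0f where |z|^2 <= 4 (still iterating), else 0.0f */
        __m128 mask = _mm_cmple_ps(_mm_add_ps(zr2, zi2), four);
        count = _mm_add_ps(count, _mm_and_ps(mask, one));
        /* z = z^2 + c; diverged lanes keep computing, but stop counting */
        __m128 t = _mm_add_ps(_mm_sub_ps(zr2, zi2), re);
        zi = _mm_add_ps(_mm_mul_ps(_mm_mul_ps(zr, zi), _mm_set1_ps(2.0f)), im);
        zr = t;
    }
    _mm_storeu_ps(counts, count);
}
```

Because the comparison produces an all-ones or all-zeros bit mask per lane, ANDing it with 1.0f yields exactly the per-lane increment of 0 or 1.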
### 7. Results

We have collected performance results for these combinations:

- Natively compiled C++
- Handwritten JavaScript executed with V8
- Emscripten-produced JavaScript from the C++ implementation, executed with V8
- Emscripten-produced JavaScript from the C++ implementation, executed with SpiderMonkey

All performance numbers were collected while running on an Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz. Note that we are not yet able to run 'generic'/handwritten SIMD JavaScript code efficiently with SpiderMonkey. SpiderMonkey uses two different JIT compilers: one for 'generic' JavaScript code (IonMonkey) and another for the asm.js JavaScript subset (OdinMonkey). Only the OdinMonkey JIT compiler has been adapted to work with the SIMD operations.

### 7.1 SIMD vs. Scalar

Figure 12 shows the relative SIMD vs. scalar performance of the four combinations. Greater than 1 means that the SIMD kernel ran that much faster than the corresponding scalar kernel. Both the Average and Mandelbrot benchmarks show a 4x speedup. This is in line with expectations, since the SIMD versions of these kernels have four times fewer iterations.

### 7.2 Scalar C++ vs. JavaScript

Figure 13 shows the relative scalar C++ vs. JavaScript performance for each of the four combinations. Less than 1 means that the JavaScript kernel ran that much slower than the corresponding C++ kernel.

### 7.3 SIMD C++ vs. JavaScript

Figure 14 shows the relative SIMD C++ vs. JavaScript performance for each of the four combinations. Less than 1 means that the JavaScript kernel ran that much slower than the corresponding C++ kernel.

### 8. Future Work

For future work, we are planning to add a more complete set of SIMD.JS primitives, to better cover all the existing SIMD intrinsics in use. A study of which missing intrinsics have wider use is warranted. When adding more primitives with a direct mapping to hardware instructions on one architecture but not on another, it might not be possible to achieve the desired performance across all architectures.

### 9. Summary

This paper describes SIMD.JS and Emscripten; together, these technologies allow the porting of compute-intensive native C/C++ applications to the Web platform with near-native performance. The contributions of our work include the changes to Emscripten enabling compilation of C/C++ programs that use SIMD intrinsics or gcc-style vector code. Our contributions are publicly available, and our evaluation demonstrates that the SIMD vs. scalar speedups we see with the C++ kernels are matched by the equivalent JavaScript kernels.

**Figure 12.** SIMD vs. Scalar relative speedups

**Figure 13.** Scalar C++ vs. JavaScript relative performance

**Figure 14.** SIMD C++ vs. JavaScript relative performance

Acknowledgments

Acknowledgments goes here.
{"Source-Url": "http://www.cs.utexas.edu/~ivan/pubs/simdjs.pdf", "len_cl100k_base": 4318, "olmocr-version": "0.1.49", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 22391, "total-output-tokens": 4746, "length": "2e12", "weborganizer": {"__label__adult": 0.0005340576171875, "__label__art_design": 0.0003495216369628906, "__label__crime_law": 0.000396728515625, "__label__education_jobs": 0.0002827644348144531, "__label__entertainment": 0.00010520219802856444, "__label__fashion_beauty": 0.00019037723541259768, "__label__finance_business": 0.0003116130828857422, "__label__food_dining": 0.00044465065002441406, "__label__games": 0.0007386207580566406, "__label__hardware": 0.004161834716796875, "__label__health": 0.0005383491516113281, "__label__history": 0.00030159950256347656, "__label__home_hobbies": 0.0001081228256225586, "__label__industrial": 0.0007061958312988281, "__label__literature": 0.00021386146545410156, "__label__politics": 0.0002560615539550781, "__label__religion": 0.0005850791931152344, "__label__science_tech": 0.05377197265625, "__label__social_life": 6.943941116333008e-05, "__label__software": 0.0069427490234375, "__label__software_dev": 0.92724609375, "__label__sports_fitness": 0.0004520416259765625, "__label__transportation": 0.0008759498596191406, "__label__travel": 0.0002682209014892578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19822, 0.03031]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19822, 0.59335]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19822, 0.87554]], "google_gemma-3-12b-it_contains_pii": [[0, 4945, false], [4945, 8990, null], [8990, 11757, null], [11757, 14476, null], [14476, 17694, null], [17694, 19509, null], [19509, 19822, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4945, true], [4945, 8990, null], [8990, 11757, null], [11757, 14476, null], [14476, 17694, null], [17694, 19509, null], [19509, 19822, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19822, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19822, null]], "pdf_page_numbers": [[0, 4945, 1], [4945, 8990, 2], [8990, 11757, 3], [11757, 14476, 4], [14476, 17694, 5], [17694, 19509, 6], [19509, 19822, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19822, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
26bb4a98e395d72c4a9ac66a83a1458b0b1858f7
Source Level Debug using OpenOCD/GDB/Eclipse on Intel® Quark™ SoC X1000

Application Note

May 2014

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS.

A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm

Any software source code reprinted in this document is furnished for informational purposes only, and no license, express or implied, by estoppel or otherwise, to any of the reprinted source code is granted by this document.

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to: http://www.intel.com/products/processor_number/

Code names are only for use by Intel to identify products, platforms, programs, services, etc. ("products") in development by Intel that have not been made commercially available to the public, i.e., announced, launched or shipped. They are never to be used as "commercial" names for products. Also, they are not intended to function as trademarks.

Intel, the Intel logo, and Quark are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

*Other names and brands may be claimed as the property of others.

Copyright © 2014, Intel Corporation. All rights reserved.
Contents

1 Introduction
  1.1 Terminology
  1.2 References
2 Prerequisites
  2.1 Supported Operating Systems
3 Setting up Hardware
4 OpenOCD Setup – Linux Host
  4.1 Patching and building OpenOCD
  4.2 JTAG USB pod access
  4.3 Kernel debug build - Galileo board example
  4.4 Modifying bootloader
  4.5 OpenOCD
5 OpenOCD Setup – Windows Host
  5.1 Patching and Building OpenOCD
6 OpenOCD Setup – OS X Host
  6.1 Patching and Building OpenOCD
7 Debugging
  7.1 GDB
  7.2 Eclipse
  7.3 GDB and kernel modules

Figures

Figure 1. Debugging Setup

Tables

Table 1. Terminology
Table 2. References

Revision History

| Date | Revision | Description |
|------|----------|-------------|
| May 2014 | 003 | Updates due to release of OpenOCD 0.8.0 (supporting Intel® Quark™ SoC) and BSP Software Release Version 1.0.0. Added Section 5, OpenOCD Setup – Windows Host. Added Section 6, OpenOCD Setup – OS X Host. |
| March 2014 | 002 | Added Section 1.2 and Section 7.3. |
| December 2013 | 001 | Initial release of document. |
1 Introduction

This document explains how to use OpenOCD with Eclipse* or GDB* for source level debugging of the Linux* kernel running on the Intel® Quark™ SoC X1000. You may see references in the code to product codenames:

- Intel® Quark™ SoC X1000 (formerly codenamed Clanton)
- Intel® Quark™ Core (codenamed Lakemont Core)

Note: This document is not a complete guide to source level debugging. It is focused on debugging the Linux kernel on the Intel® Quark™ SoC X1000 at source level using OpenOCD with GDB or Eclipse.

1.1 Terminology

Table 1. Terminology

| Term | Description |
|------|-------------|
| Eclipse | An integrated development environment (IDE) comprising a base workspace and an extensible plug-in system for customizing the environment. |
| GDB | GNU* Debugger, the standard debugger for the GNU operating system. |
| GNU*/Linux* | Linux is the kernel, one of the essential major components of the system. The system as a whole is basically the GNU system, with Linux added. See the article here: https://www.gnu.org/gnu/linux-and-gnu.html |
| JTAG | Joint Test Action Group (JTAG) is the common name for the IEEE 1149.1 Standard Test Access Port and Boundary-Scan Architecture. Debuggers communicate with chips over JTAG to perform operations like single stepping and breakpointing. |
| OpenOCD | Free and Open On-Chip Debugger. |

1.2 References

Table 2. References

| Title / Location | Doc ID |
|------------------|--------|
| Source Level Debug using OpenOCD/GDB/Eclipse on Intel® Quark™ SoC X1000 | 330015 (this document) |
| Intel® Quark™ SoC X1000 Debug Operations User Guide | 329866 |
| Intel® Quark™ SoC X1000 Datasheet | 329676 |
| OpenOCD User Guide | N/A |
| GDB* documentation | N/A |

Other useful documents about the Intel® Quark™ SoC X1000 and the Intel® Galileo board may be found at: https://communities.intel.com/community/makers/documentation

2 Prerequisites

Refer to the Intel® Quark™ SoC X1000 Board Support Package (BSP) Build Guide before attempting the steps outlined in this document.

**Required software:**

- GNU*/Linux* host system
- OpenOCD
- GDB
- Eclipse (Indigo tested) with CDT plugin installed (Main + Optional Features)
- Quark kernel compiled with debug symbols
- Git

**Required hardware:**

- An OpenOCD-supported JTAG debugger, such as:
  - TinCanTools* FLYSWATTER2
  - Olimex* ARM-USB-OCD-H: https://www.olimex.com/Products/ARM/JTAG/ARM-USB-OCD-H/

The following pin adapter was used to connect the JTAG debugger to the Quark board: https://www.olimex.com/Products/ARM/JTAG/ARM-JTAG-20-10/

2.1 Supported Operating Systems

The steps in this document have been validated against an Ubuntu 12.04 LTS 64-bit setup, but should work on any recent GNU/Linux distribution with minor adaptations. Pre-built binaries of OpenOCD for Windows* are available for download; see Section 5 for details. Intel has successfully used OpenOCD commands with Windows but has not tested GDB/Eclipse on top of the binaries. Intel has not fully validated OpenOCD on OS X*; however, simple tests have been successful.
See Section 6 for details.

3 Setting up Hardware

The figure below shows a recommended setup for debugging.

1. Host system running OpenOCD, GDB, and Eclipse
2. USB 2.0 male-male A-B cable
3. Flyswatter2
4. ARM-JTAG-20-10 adapter
5. JTAG port
6. Intel® Galileo board
7. Serial cable to view the boot process
8. Power supply

Figure 1. Debugging Setup

Note: The Flyswatter2 and many JTAG adapters support JTAG and serial concurrently. If you source a serial cable that connects (7) to (3) as shown above, then you will have JTAG and serial console data arriving at your host system (1) via USB (2). For example, this cable has been used: http://www.sfcable.com/D935-06.html

4 OpenOCD Setup – Linux Host

4.1 Patching and building OpenOCD

To enable Quark support, you must obtain the OpenOCD source code and then build it. Follow the steps in this section.

Dependencies:

- git
- libtool
- automake

In addition, to use a JTAG pod with an FTDI/FT2232 chip (like the Flyswatter2), you must install the related USB development library, using a command like:

```bash
$ sudo apt-get install libusb-1.0-0-dev
```

Check out the OpenOCD source code and create a branch based on the validated 0.8.0 version, using the following commands:

```bash
$ git clone git://git.code.sf.net/p/openocd/code openocd-code
$ cd openocd-code
$ git branch quark v0.8.0
$ git checkout quark
```

Configure and build OpenOCD:

```bash
$ ./bootstrap
$ ./configure --enable-ftdi
$ make
```

It is not strictly necessary to install OpenOCD every time it is rebuilt. The binary and configuration files can be used from the build/source tree directly if desired. However, it is recommended to perform this additional step the first time or when modifying configuration files:

```bash
$ sudo make install
```

4.2 JTAG USB pod access

By default, non-root users won't have access to the JTAG pods connected via USB. You must grant write access to the proper /dev/bus/usb entry every time a device is connected to be able to run OpenOCD using a non-root account. The process can be automated by adding a udev rule. Simply create a text file in the rules directory:

```bash
$ sudo vim /etc/udev/rules.d/99-openocd.rules
```

The IDs depend on the JTAG pod. For example, for the Flyswatter2 and the Olimex ARM-USB-OCD-H, the rules file must have the following content:

```bash
SUBSYSTEM=="usb", ATTR{idVendor}=="0403", ATTR{idProduct}=="6010", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="15ba", ATTR{idProduct}=="002b", MODE="0666"
```
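Newly added rules only apply to devices (re)connected afterwards. Rather than rebooting or replugging the pod, you can usually reload and re-trigger udev directly; these are standard udev commands, not specific to this guide:

```bash
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger
```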
4.3 Kernel debug build - Galileo board example

To debug the kernel at source level (for example, using C language sources), you must rebuild the kernel and enable the option to generate debugging information. Next, the newly built kernel and modules have to be installed on the system. This section describes how to build a debug-enabled kernel for the Galileo board. An SD card is required for this process.

Dependencies:

- git
- texinfo
- gawk
- diffstat
- chrpath

Building the kernel and the system for the SPI flash is not covered in this example. The following steps require fetching packages from the Internet. If the build fails because of missing files, check your proxy settings and git configuration, and try to rerun the build. Also note that in this document, `<version>` is used as a placeholder string for the BSP software version.

Steps:

1. **Get the Quark Board Support Package (BSP) software** as described in the BSP Build Guide. Download the latest package (Board_SupportPackage_Sources_for_Intel_Quark_<version>.7z) and unpack it. It contains several archives. You will use two of them in the steps below.

2. **Build image-full and create a bootable SD card**

You can skip this step if you have already built the `image-full` image and have set up an SD card using the steps described in the Quark™ BSP Build Guide. In the steps that follow, you must reference the toolchain that you built previously (that is, you must set the correct environment variables).

Create a working directory of your choice to build the BSP and go there:

```
$ cd /PATH/TO/MY_BSP_WORK_DIR
$ tar zxf meta-clanton_<version>.tar.gz
$ cd meta-clanton_<version>
$ ./setup.sh -e meta-clanton-bsp
$ . poky/oe-init-build-env yocto_build
$ bitbake image-full
```

The step above can take as long as several hours, because all packages need to be fetched from the Internet and then built. If you have already downloaded all the files previously (they are stored in `yocto_build/downloads`), you can save time by executing the build without doing sanity checks on the network. Disable the sanity checks by adding this line in the `yocto_build/conf/local.conf` file:

```text
CONNECTIVITY_CHECK_URIS = ""
```

At the end of the build, a message similar to this will be displayed:

```
NOTE: Tasks Summary: Attempted 1209 tasks of which 269 didn't need to be rerun and all succeeded.
```

After the image build is completed successfully, you must copy the files below to the root of the SD card to be able to boot the system on the Galileo board:

- `image-full-clanton.ext3`
- `core-image-minimal-initramfs-clanton.cpio.gz`
- `grub.efi`
- `boot` (directory)

The files can be found in:

```
/PATH/TO/MY_BSP_WORK_DIR/meta-clanton_<version>/yocto_build/tmp/deploy/images/
```

To make a fully bootable SD card, the kernel file itself (`bzImage`) must be copied as well. The kernel file produced by the BSP build does not contain debug information and **cannot** be used for source level debugging. The following steps will create a proper kernel file.

3. **Get the kernel**

Open a new shell. (The shell used for the BSP build of the previous steps contains changes to the environment which are no longer needed.) Create a new directory of your choice to rebuild the kernel and go there:

```
$ cd /PATH/TO/MY_KERNEL_BUILD_DIR
$ tar zxf quark_linux_<version>.tar.gz
$ cd quark_linux_<version>
```

Make sure you have `git` configured with a username and email (they can be dummy values), otherwise the next command will fail. For details, use the command `man git-config`.

```
$ git config --global user.name "Your Name"
$ git config --global user.email "you@example.com"
```

Enter the following command to fetch the proper kernel version from the Internet and patch it with the appropriate Quark changes:

```
$ ./gitsetup.py
```

4. **Specify the correct toolchain**

Export the binaries of the toolchain built in step 2 to your `$PATH` as follows:

```
export PATH=/PATH/TO/MY_BSP_WORK_DIR/meta-clanton_<version>/yocto_build/tmp/sysroots/x86_64-linux/usr/bin/i586-poky-linux-uclibc:$PATH
```

Also, all `make` commands that deal with the kernel must be invoked with the proper architecture (`ARCH`) and cross-compiler (`CROSS_COMPILE`) switches, as described below.

5. **Configure and build the kernel**

Starting from the directory created in step 3 above, select the proper kernel configuration and enable debug information generation:

```
$ cd /PATH/TO/MY_KERNEL_BUILD_DIR
$ cd quark_linux_<version>
$ cd work
$ cp meta/cfg/kernel-cache/bsp/quark/quark.cfg .config
$ ARCH=x86 CROSS_COMPILE=i586-poky-linux-uclibc- make menuconfig
```

The kernel configuration screen is launched. Go to the `General setup` group and find the `Local version` item. Edit it with the following content:

```
-yocto-standard
```

Go back to the initial menu by selecting `<Tab>` and `<Ok>`, and go to the `Kernel hacking` group. Scroll down the list, find `Compile the kernel with debug info`, and enable it (when enabled, [*] is shown). You can now optionally choose any other desired kernel options, then exit and confirm saving the configuration.
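If you prefer to avoid the interactive menu, the same settings can be applied non-interactively. This is a generic kernel build technique rather than a step from the original guide; the `olddefconfig` target is available on the 3.8-based Quark kernel, and `CONFIG_DEBUG_KERNEL=y` is included because `CONFIG_DEBUG_INFO` depends on it:

```bash
$ echo 'CONFIG_LOCALVERSION="-yocto-standard"' >> .config
$ echo 'CONFIG_DEBUG_KERNEL=y' >> .config
$ echo 'CONFIG_DEBUG_INFO=y' >> .config
$ ARCH=x86 CROSS_COMPILE=i586-poky-linux-uclibc- make olddefconfig
```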
Create an empty file to prevent the build system from appending source-control version information to the kernel version:

```
$ touch .scmversion
```

Issue the command below to build the kernel:

```
$ ARCH=x86 CROSS_COMPILE=i586-poky-linux-uclibc- make
```

If you have a multicore machine, add the `-jN` switch to the `make` command to speed up the build. If the build is successful, the message below is displayed:

```
Kernel: arch/x86/boot/bzImage is ready (#1)
```

This `bzImage` file must be copied to the root of your SD card. Once it is copied, the Galileo board fully boots Yocto Linux and the kernel can be debugged at source level.

4.4 Modifying the bootloader

To make debugging easier around the kernel idle function, it is recommended to add the `idle=poll` parameter to the bootloader entry corresponding to the kernel that is being debugged. The screenshot below shows a typical `/boot/grub/grub.conf` file, which is found in the `boot` folder copied to the SD card. If this modification is not made, you cannot assembly-step away, or set hardware breakpoints and watchpoints, while sitting on a HLT instruction. However, software breakpoints and high level source stepping using software breakpoints will still work. In addition, you can make your debug configuration the default boot entry.

4.5 OpenOCD

The first step to enable source level debug is to connect your JTAG pod to the board and run OpenOCD, selecting the correct interface and board configuration files. The example below uses a Flyswatter2 JTAG debugger:

```
openocd -f interface/ftdi/flyswatter2.cfg -f board/quark_x10xx_board.cfg
```

It is possible to use OpenOCD as a standalone tool for basic debugging. You can connect to the OpenOCD session using telnet on port 4444 and issue commands (this step is not required for source level debug). This can be seen in the following session:

```
~ $ telnet localhost 4444
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> halt
target state: halted
target halted due to debug-request at 0xc1009437 in protected mode
> reg EAX
EAX (/32): 0x00000000
> reg EIP
EIP (/32): 0xC1009437
> step
step done from EIP 0xc1009437 to 0xc1009430
target state: halted
target halted due to single-step at 0xc1009430 in protected mode
```

Enter `help` in the telnet console to see the list of available commands and their descriptions. For complete details, see the OpenOCD User Guide (listed in Table 2). Even if you are in GDB, you can still run OpenOCD commands by prefixing the command name with the GDB `monitor` command. For example, to halt the core CPU from the GDB command line, issue the `monitor halt` command. To resume the core CPU, issue the `monitor resume` command.
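For example, a minimal `monitor` session from inside GDB, reusing commands shown in the telnet transcript above, might look like this (illustrative, not a transcript from the original document):

```
(gdb) monitor halt
(gdb) monitor reg EIP
(gdb) monitor resume
```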
5 OpenOCD Setup – Windows Host

5.1 Patching and Building OpenOCD

You can find the updated sources in the git repository and in the SourceForge download area: http://sourceforge.net/projects/openocd/files/openocd/0.8.0/ There are many online forums and discussions about building OpenOCD on Windows. Pre-built binaries of OpenOCD for Windows can be found here: http://www.freddiechopin.info/en/download/category/4-openocd

To install the FTx232 drivers for the Flyswatter2, follow the instructions in this file (in the package): drivers\libusb-1.0 drivers.txt

Note: It may appear that the drivers install correctly when the USB cable is inserted into the PC; however, the instructions in the file above must still be followed. One possible workaround is shown below (used on Windows 7 and completed for both Interface 0 and Interface 1).

![Zadig Device Manager](image)

After completing the steps in the drivers.txt file, the Flyswatter is listed under Universal Serial Bus devices in Device Manager as shown below.

GNU/Linux is required for building the full BSP and kernel; however, using a pre-built OpenOCD binary in combination with gdb/Eclipse builds for Windows allows you to complete the debugging steps in this guide. The output below shows the binaries running on a Windows 7 64-bit system:

```
C:\OpenOCD\openocd-0.8.0\bin-x64\openocd-x64-0.8.0.exe -f ..\scripts\interface\ftdi\flyswatter2.cfg -f ..\scripts\board\quark_x10xx_board.cfg
Open On-Chip Debugger 0.8.0 (2014-04-28-08:42)
Licensed under GNU GPL v2
For bug reports, read
Info : only one transport option; autoselect 'jtag'
adapter speed: 4000 kHz
trst_only separate trst_push_pull
Info : clock speed 4000 kHz
Info : JTAG tap: quark_x10xx.cltap tap/device found: 0x0e681013 (mfg: 0x009, part: 0xe681, ver: 0x0)
enabling core tap
Info : JTAG tap: quark_x10xx.cpu enabled
```

Intel has successfully used the OpenOCD commands on Windows but has not tested gdb/Eclipse on top of the binaries. Please post any questions or issues in the Maker Community Support Forum: https://communities.intel.com/community/makers

6 OpenOCD Setup – OS X Host

6.1 Patching and Building OpenOCD

Using OpenOCD on OS X is the same as on Linux, except that:

- The names of the devices are different.
- You must install MacPorts (http://www.macports.org/).

You can get a pre-built OpenOCD from MacPorts or use the ported packages to build a more recent version. Note that the OpenOCD version 0.8 binaries that include Quark support are not currently available on MacPorts; Intel recommends that you build from source. Intel has not fully validated the OpenOCD commands or gdb/Eclipse on OS X, but simple testing has been successful.

7 Debugging

7.1 GDB

GDB documentation is available here: http://www.gnu.org/software/gdb/documentation/

It is possible to perform source level debug using GDB by connecting to the OpenOCD internal GDB server, which answers on port 3333 by default. OpenOCD must be running as shown in the previous sections. Run GDB, connect to the OpenOCD internal GDB server, and load the debug info of a debug-compiled Quark kernel vmlinux file. For the kernel built in Section 4.3, the commands are:

```
$ gdb
(gdb) target remote localhost:3333
(gdb) monitor halt
(gdb) symbol-file /PATH/TO/MY_KERNEL_BUILD_DIR/quark_linux_<version>/work/vmlinux
```

The screenshot below shows these steps in operation. After they are completed, the board is ready to be source level debugged using GDB.

Note: Even if you are in GDB, you can still run OpenOCD commands by prefixing the command name with the GDB `monitor` command as shown in the screenshot. Use the command `monitor help` to see if OpenOCD supports a particular command. For example, `monitor mdw phys` allows physical memory to be read.
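As a quick check that source level debugging is working, you can set a hardware breakpoint on a kernel function and inspect the stack when it hits. This sequence is illustrative and not taken from the original document; `do_fork` is only an example symbol, and any function present in your vmlinux will do (remember that hardware breakpoints require the `idle=poll` change from Section 4.4):

```
(gdb) monitor halt
(gdb) hbreak do_fork
(gdb) continue
(gdb) backtrace
(gdb) delete breakpoints
(gdb) continue
```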
7.2 Eclipse

It is also possible to perform source level debug using Eclipse with the C/C++ GDB Hardware Debugging plug-in. The following configuration is required to enable source level debugging of the board in the Eclipse environment.

Install the C/C++ GDB Hardware Debugging plug-in, if it is missing, using the standard Install New Software dialog from the Eclipse Help menu. Create a new project and switch to the C/C++ perspective. From the Run menu, open the **Debug Configurations** dialog and add a new launch configuration under GDB Hardware Debugging, as shown below. Set the application to the debug symbol enabled vmlinux kernel file, as follows. Enable **Use remote target** and set the host name and port number. Select **Halt** and add the commands **set remotetimeout 20** and **monitor halt**. Be sure that **Reset and Delay** and **Load image** are **not** selected, as shown in the screenshot below.

Eclipse is now set up to perform source level debug on the board as shown below. Note that it is still necessary to first launch OpenOCD in a separate shell, as described in Section 4.5.

7.3 GDB and kernel modules

Debugging kernel modules requires additional steps. The load addresses of the module’s different sections are chosen by the kernel at runtime, so it is necessary to discover this information and pass it to GDB. In this section, an example kernel module built “out-of-tree” (generating all output in a separate directory) is used to show the debugging approach. For additional information, see https://www.kernel.org/doc/Documentation/kbuild/modules.txt

1. Create a new directory where the module files will be stored. In this example, it is called simple_timer:

```
$ mkdir simple_timer
```

2. In this directory, create a file named `Makefile` with the following content:

```make
obj-m := simple_timer.o
ccflags-y := -g -O0

KDIR := /opt/galileo/meta-clanton_<version>/yocto_build/tmp/work/clanton-poky-linux-uclibc/linux-yocto-clanton/3.8-r0/linux-clanton-standard-build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```

3. Modify KDIR to point to the actual kernel build directory, as shown in Section 4.3 of this document.

4. Create the actual module source and name the file `simple_timer.c`, with this content:

```c
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

MODULE_DESCRIPTION("Simple timer example module, ~1 call per second, ~1 log per minute.");
MODULE_LICENSE("GPL");

static struct timer_list simple_timer;
static unsigned long times_called;

/* Timer callback: logs once per minute and re-arms itself one second
 * (HZ jiffies) in the future. */
static void simple_timer_function(unsigned long ptr)
{
    unsigned long minutes;

    times_called++;
    minutes = times_called / 60;
    if (times_called % 60 == 0)
        printk(KERN_INFO "simple_timer: ~%lu minute(s) and counting\n", minutes);
    mod_timer(&simple_timer, jiffies + HZ);
}

static int __init simple_timer_init(void)
{
    printk(KERN_INFO "simple_timer: loading - %d HZ\n", HZ);
    times_called = 0;
    init_timer(&simple_timer);
    simple_timer.function = simple_timer_function;
    simple_timer.expires = jiffies + HZ;
    add_timer(&simple_timer);
    return 0;
}

static void __exit simple_timer_exit(void)
{
    printk(KERN_INFO "simple_timer: unloading\n");
    del_timer(&simple_timer);
}

module_init(simple_timer_init)
module_exit(simple_timer_exit)
```

This example module sets up a kernel timer that expires and is re-armed roughly every second by calling `simple_timer_function`. In addition, every minute a new info kernel message is logged. This type of message can be examined in several ways depending on the kernel settings: using the `dmesg` command, logged to files, or appearing directly on the console (as with the Galileo board).

5. To build the module, the path to the cross-compile toolchain has to be in your PATH, as shown in Section 4.3, step 4, and `make` has to be invoked with the same `ARCH` and `CROSS_COMPILE` switches; the original document shows this step as a screenshot. A few files are produced; `simple_timer.ko` is the kernel module itself.
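A hedged reconstruction of the invocation from that screenshot, reusing the switches from Section 4.3, is:

```bash
$ cd /PATH/TO/simple_timer
$ ARCH=x86 CROSS_COMPILE=i586-poky-linux-uclibc- make
```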
The module can now be copied to the target system.

6. For a Galileo board, copy the file to the SD card and insert the card into the board. After the board is booted, the SD card is mounted automatically under `/media` and the kernel module can be inserted and removed as shown below, where the terminal program is connected to the Quark board serial port:

![Terminal output showing module usage](image)

7. After the module is loaded, the corresponding `/sys/module` entry on the board can be queried as shown below:

```
root@clanton:~$ ls -la /sys/module/simple_timer/sections/
-rw-r--r-- 1 root root 0 Jan 1 06:03 .bss
-rw-r--r-- 1 root root 0 Jan 1 06:03 .gnu.linkonce.this_module
-rw-r--r-- 1 root root 0 Jan 1 06:03 .init.text
-rw-r--r-- 1 root root 0 Jan 1 06:03 .note.gnu.build-id
-rw-r--r-- 1 root root 0 Jan 1 06:03 .rodata
-rw-r--r-- 1 root root 0 Jan 1 06:03 .strtab
-rw-r--r-- 1 root root 0 Jan 1 06:03 .symtab
-rw-r--r-- 1 root root 0 Jan 1 06:03 .text
root@clanton:~$ cat /sys/module/simple_timer/sections/.text
0xe0717000
root@clanton:~$ cat /sys/module/simple_timer/sections/.rodata
0xe0718024
root@clanton:~$ cat /sys/module/simple_timer/sections/.bss
0xe0719140
root@clanton:~$ cat /sys/module/simple_timer/sections/.init.text
0xe0717080
```

The addresses of `.text` and the other relevant sections are now known and can be passed to the `add-symbol-file` GDB command when loading the debug information for the module. The `.text` address is the first, mandatory parameter; the other sections are optional and can be specified using the `-s` switch. The command for loading the debug information of the module in GDB is:

```
(gdb) add-symbol-file PATH_TO/simple_timer.ko 0xe0717000 -s .rodata 0xe0718024 -s .bss 0xe0719140 -s .init.text 0xe0717080
```

The setup is complete and the kernel module can be source-level debugged. If the module is unloaded and then loaded again, the section addresses must be checked again; typically the addresses are different each time.
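Typing the four addresses by hand is error-prone. On the target, a small helper like the one below (an added convenience, not part of the original document) prints a ready-to-paste GDB command from the live section files; the module name and `PATH_TO` placeholder match the example above:

```bash
$ cd /sys/module/simple_timer/sections
$ echo "add-symbol-file PATH_TO/simple_timer.ko $(cat .text)" \
       "-s .rodata $(cat .rodata) -s .bss $(cat .bss)" \
       "-s .init.text $(cat .init.text)"
```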
{"Source-Url": "http://download.intel.com/support/processors/quark/sb/sourcedebugusingopenocd_quark_appnote_330015_003.pdf", "len_cl100k_base": 7381, "olmocr-version": "0.1.53", "pdf-total-pages": 27, "total-fallback-pages": 0, "total-input-tokens": 45962, "total-output-tokens": 8724, "length": "2e12", "weborganizer": {"__label__adult": 0.000957012176513672, "__label__art_design": 0.0007739067077636719, "__label__crime_law": 0.0007352828979492188, "__label__education_jobs": 0.0006060600280761719, "__label__entertainment": 0.0001837015151977539, "__label__fashion_beauty": 0.0005135536193847656, "__label__finance_business": 0.0011310577392578125, "__label__food_dining": 0.0005712509155273438, "__label__games": 0.0024967193603515625, "__label__hardware": 0.173583984375, "__label__health": 0.0005536079406738281, "__label__history": 0.00041866302490234375, "__label__home_hobbies": 0.0003561973571777344, "__label__industrial": 0.003314971923828125, "__label__literature": 0.0003688335418701172, "__label__politics": 0.0004992485046386719, "__label__religion": 0.001140594482421875, "__label__science_tech": 0.0794677734375, "__label__social_life": 7.176399230957031e-05, "__label__software": 0.0216522216796875, "__label__software_dev": 0.70849609375, "__label__sports_fitness": 0.0005578994750976562, "__label__transportation": 0.0012521743774414062, "__label__travel": 0.00025963783264160156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31260, 0.03905]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31260, 0.47395]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31260, 0.74455]], "google_gemma-3-12b-it_contains_pii": [[0, 100, false], [100, 3251, null], [3251, 6206, null], [6206, 7160, null], [7160, 9092, null], [9092, 9627, null], [9627, 11047, null], [11047, 11687, null], [11687, 12746, null], [12746, 14606, null], [14606, 16851, null], [16851, 18958, null], [18958, 20241, null], [20241, 21242, null], [21242, 22239, null], [22239, 23395, null], [23395, 23975, null], [23975, 24763, null], [24763, 25068, null], [25068, 25620, null], [25620, 25766, null], [25766, 26143, null], [26143, 26769, null], [26769, 28829, null], [28829, 29545, null], [29545, 31049, null], [31049, 31260, null]], "google_gemma-3-12b-it_is_public_document": [[0, 100, true], [100, 3251, null], [3251, 6206, null], [6206, 7160, null], [7160, 9092, null], [9092, 9627, null], [9627, 11047, null], [11047, 11687, null], [11687, 12746, null], [12746, 14606, null], [14606, 16851, null], [16851, 18958, null], [18958, 20241, null], [20241, 21242, null], [21242, 22239, null], [22239, 23395, null], [23395, 23975, null], [23975, 24763, null], [24763, 25068, null], [25068, 25620, null], [25620, 25766, null], [25766, 26143, null], [26143, 26769, null], [26769, 28829, null], [28829, 29545, null], [29545, 31049, null], [31049, 31260, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31260, null]], 
"google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31260, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31260, null]], "pdf_page_numbers": [[0, 100, 1], [100, 3251, 2], [3251, 6206, 3], [6206, 7160, 4], [7160, 9092, 5], [9092, 9627, 6], [9627, 11047, 7], [11047, 11687, 8], [11687, 12746, 9], [12746, 14606, 10], [14606, 16851, 11], [16851, 18958, 12], [18958, 20241, 13], [20241, 21242, 14], [21242, 22239, 15], [22239, 23395, 16], [23395, 23975, 17], [23975, 24763, 18], [24763, 25068, 19], [25068, 25620, 20], [25620, 25766, 21], [25766, 26143, 22], [26143, 26769, 23], [26769, 28829, 24], [28829, 29545, 25], [29545, 31049, 26], [31049, 31260, 27]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31260, 0.04296]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
3312b701646e080b4baecfacb8d9b5bec6716c0b
An enriched view of decision making and information systems

Linda Sau-ling LAI
Department of Information Systems, City University of Hong Kong (islllai@cityu.edu.hk)

Abstract

The development of information technology (IT) over the last two decades has been very rapid indeed. In an IT-dominated field like information systems (IS), there tends to develop a kind of mismatch between practice and theory, where yesterday’s theory is always trying to catch up with, and make sense of, practice which has already moved on. This paper takes a fresh look at some of the fundamental concepts relevant to the orderly provision of information within an organisation using IT. It discusses the subjective nature of organisational decision-making and proposes a holistic view of IS development. The main role of an information system is that of support. Such systems exist to serve, help or support people taking action in the real world. Systems analysis aiming at information systems design, if it is to make much impact, must therefore first concentrate on the activity system of an organisation which the information system is to serve. It follows that any good IS development method should meet two criteria simultaneously: 1) it should provide mechanisms to make sense of and understand the behaviour of the human activities which an information system is developed to serve; 2) it should render a seamless transition between its resulting information requirements model and the process of designing a technology-based system to satisfy those requirements. Soft systems methods (SSM), being perception-driven, help users understand what information they need and how to use the information. Object-oriented analysis (OOA), on the other hand, provides the base for building a data structure capable of satisfying the identified information needs. The two approaches could be complementary to each other in IS work. An integrated framework which encapsulates both methods has been developed and applied by the author in a requirements analysis project for the loan department of a commercial bank in Hong Kong. This paper gives a full account of the integrated framework in action. The described case puts the interpretive view of IS development and decision making into practice. The lessons drawn from the undertaken IS requirements analysis project are discussed in context and in detail. Each lesson is argued by describing and reasoning the path from the actual practice to the lesson. The generalisations of this research refer to practice. This reinforces the argument that the interpretive view of decision making and IS development is experience-based knowledge.

1. An enriched model of the process of decision making

Information systems are seen as giving service to decision-makers of an organisation. The most common definitions of information and information systems are based on the support that they may give to decision-making. Here are two examples from IS textbooks:

- ‘Information is data that has been processed for a purpose. That purpose is to aid some kind of decision’ [1], p. 5;
- ‘Information is useful to a manager if it helps the manager’s choice of a course of action. To be precise, information is said to have value if, and only if, it reduces the uncertainty in the manager’s decision problem’ [2], p. 14.

Thinking about decision-making within the field of IS has been dominated by the work of H.A. Simon [3].
In a survey of introductory texts, 84% of those texts that provided any discussion of the nature of decision-making included Simon’s three-phase model of decision-making, and 50% presented this as the sole conceptual framework through which to understand decision-making [4]. Simon’s model represents decision-making as an explicit and consciously rational process, enacted by a neutral decision-maker, which involves the organisation and processing of information in order to carry out an intentional and rational act of choice. Such a rational model of decision-making is a prescriptive account of how a decision should be conducted, but does not sit well with much experience of real-world decision-making, in which the decision-makers are influenced by political and social factors. According to [4], the simplistic way in which Simon’s work is employed in IS thinking gives rise to three practical dangers (pp. 90-91):

- First, there is the danger of developing information systems to serve the decision-making that is thought to be happening (value-free, politically neutral and objectively rational) rather than the decision-making which actually occurs.
- Secondly, there arises the danger of ignoring the important differences in organisational culture and history which make organisations unique. This might lead to the provision of inappropriate information systems.
- Thirdly, IS analysis will ignore the important contribution which norms and standards make to the organisation’s understanding of the world and to the way in which mere data becomes perceived as meaningful and relevant to a decision.

Simon’s work has been the dominant model in IS so far, but it appears to give too little attention to the political and social conflicts and complexities of organisations. There is a growing interest in enriching the thinking of decision making through the concept of “appreciation”, based on the work of Vickers [5]. A consistent thread of Vickers’ work is that the management of an organisation is primarily concerned not with goal-seeking but with managing relationships. In order to maintain relationships, an organisation is constantly required to adapt to changing circumstances. Central to this adaptation is the “appreciative setting” of an organisation, which is defined as a “readiness to see and value things in one way rather than another” [6], p. 160. [7] explain the operation of an appreciation system as a number of recursive loops, as shown in Figure 1, where the organisation exists within a constantly changing and interacting flux of events and ideas. The process of appreciation is an on-going one through which the organisation perceives some part of this flux at a point of time, making judgements about what is perceived and, where necessary, attempting to maintain or elude relationships by actions.

![Figure 1: Appreciation, decision-making & action](image_url)

Vickers’ work does not provide an alternative model of decision-making of the same kind as Simon’s. ‘It does, however, allow us to understand the more intentional and prescriptive models in a way that is enlightening; for regarding individual decision-making as occurring within a context of appreciation leads us to an essentially different and more humanistic interpretation of the decision-making process’ [4] p. 95. The appreciative system provides a more useful model for understanding decision-making behaviour, since it provides an explanation for the subjective content of decision-making.
It guides ‘decision-makers to recognise particular aspects of a situation as relevant, towards a particular view of what data is needed and to a particular view of how the decision should be made’ [4] p. 97. With an enriched, appreciative model of decision-making, human beings are recognised as autonomous rather than purely functional components of information systems. This implies the need for a shift of emphasis in IS work away from the purely technical towards the social and political environment.

2. An enriched view of information systems

Any and every organisational information system can always be thought about as entailing a pair of systems, one of which is served (people taking actions in an organisation), the other serving (meaning attribution and data processing), as shown in Figure 2 [8]. Whenever one system serves or supports another, it is a very basic principle that the necessary features of the system which serves can be worked out only on the basis of a prior account of the system served. This must be so because the nature of the system served - the way it is thought about - will dictate what counts as “service”, and hence what functions the systems which provide that service must contain [9].

![Figure 2: The served-server concept](image)

The notion of a served-serving relationship between the organisation and its information systems suggests that both systems in Figure 2 are of equal importance. To understand information systems, we need to understand organisations: what they are and how they work. The development of organisational information systems should be a two-stage process: 1) a business analysis, to make sense of the human activities performed in organisations; followed by 2) a technology-oriented analysis to define what technological facilities might support the organisational activities. Soft systems methods [10], being perception-driven, help users understand what information they need and how to use the information. Object-oriented analysis [11], on the other hand, provides the base for building a data structure capable of satisfying the identified information needs. An integrated framework is thus developed for the complementary application of both methods in IS requirements analysis, as shown in Figure 3. Its elements are organised under the principles of a good IS development (ISD) process implied by the enriched model of information systems (see Figure 2).

3. An application of the integrated framework - a case study

The following case presents an illustrative use of the integrated framework (Figure 3) in real-life information needs analysis. It is written using a clear conceptual framework (see Figure 3) rather than as a narrative. This helps to relate theory to the literature and aids generalisation. The generalisation from this single case study is about theoretical propositions, not about populations. The emphasis is not on methods or data but on understanding processes as they occur in their context.

3.1. Background of the case

The Alpha Bank (name disguised) is a licensed bank in Hong Kong. It operates cheque and savings accounts, and accepts deposits of different sizes and maturity dates. In the last decade, the bank has devoted a great deal of effort to product innovation and to broadening the scope of its services, so deposits have grown rapidly. With a high degree of liquidity, the bank has taken a rather aggressive approach towards its lending business.
Unfortunately, after the onset of the Asian financial crisis in the latter half of 1997, Hong Kong’s economy deteriorated drastically in the first half of 1998. Property and share prices fell sharply against the backdrop of high interest rates and tight liquidity. Volatile interest rates, tight liquidity, contracting credit extensions and increasing non-performing loans exerted tremendous pressure on the bank’s operation in the first half of 1998. Most loan officers of Alpha Bank also did not have sufficient credit-analysis skills to properly evaluate the risk incurred in loans extended to customers. As a result, loan defaults had been on the increase since early 1998. In September 1998, the bank invited a team of analysts (with the author as a member) to re-engineer its loan operation. One pioneer project was the setting up of a systematic procedure for overdraft approval.

3.2. Familiarisation with the organisational context

The organisational context of the systems analysis was the loan department of Alpha Bank. Because of persistently poor economic conditions in Hong Kong, the bank adhered strictly to a “progress with prudence” philosophy. The loan department was instructed to participate actively but cautiously in the lending business. Loans or overdrafts should only be approved after careful assessment of the applicant’s creditworthiness by the loan officers. The problematic situation of the organisation was summarised in a rich picture (Figure 4).

3.3. Formulation of relevant systems

Figure 4 not only reflected the richness of the situation of the loan department of Alpha Bank; it also allowed the project team to identify human activity systems that appeared relevant to that particular situation. Each identified human activity system was defined by a root definition and then expanded into conceptual models. After several discussions, the loan officers were able to come up with a shared meaning for the purposeful action relevant to their job duties. Figures 5 and 6 exhibit the root definition and conceptual model of the overdraft approval system respectively.

3.4. Determination of information needs

The accommodated conceptual model (Figure 6), once constructed, forms a cogent basis for an information model to which the information system design process itself can be related. The conceptual information requirement of Alpha Bank was derived by analysing each activity in the conceptual model and examining what information categories should be available to enable the staff concerned to take that action. This activity model was thus converted into an information requirements specification, as illustrated in Figure 7.

3.5. SSM-based object-oriented analysis

What seemed to be beneficial to the loan department was the development of a technologically based data system that would yield the information flow required by the set of activities relevant to its overdraft approval operation. The Wider Intervention Framework adopted object-oriented analysis [11] as a step to move from information categories to data modelling. The approach taken was to embed the object-oriented analysis (OOA) techniques within soft systems methods (SSM) in order to preserve the philosophy and richness of the latter methodology. The OO diagrams, such as the event diagram (Figure 8), object-flow diagram (Figure 9) and object diagrams (Figure 10), were derived to a large extent from the activities in the SSM models.
3.6. Change, action and exit

Stage four of the integrated framework (Figure 3) is a linkage between conceptual systems analysis and real-world systems design. The object models exhibited in Figures 8 to 10 provided a base for the construction of IT systems which are meaningful to the loan officers of Alpha Bank. The enquiry process of SSM also constituted a learning system which guided the staff members of the loan department to take a fresh look at the problem situation of the lending business and the bank’s operation as a whole. This fresh look alerted the participants that the bank can no longer rely solely on traditional ways of doing business. Technologically oriented products, for example an expert system that performs credit analysis, would be needed to make the bank more efficient. The loan department should also upgrade its cash-management services to help corporate borrowers make the most of their money - and to earn the bank more fees. The management of Alpha Bank in general regarded change as inevitable. Once logically desirable and culturally feasible changes were identified for the loan department of Alpha Bank, practical actions were taken to implement them. The end point of the project was marked (arbitrarily) by the introduction of an IT-based decision support system for overdraft approval to the loan department.

Root Definition: A bank-owned and operated system to process applications for overdrafts from current account customers. The approval or denial of the application is subject to the bank’s credit assessment of the applicant.

<table>
<tbody>
<tr><td>Customers of the system</td><td>Advantaged: loan officer. Disadvantaged: (none listed). Other stakeholders: current account customers.</td></tr>
<tr><td>Actors</td><td>Loan officer and other staff of the bank.</td></tr>
<tr><td>Transformation</td><td>From: needs of making a decision on overdraft applications from current account customers. To: those needs satisfied by performing proper credit assessment of the applicants.</td></tr>
<tr><td>Weltanschauung</td><td>It is appropriate to approve or deny an overdraft application based on the loan officer’s credit assessment of the applicant.</td></tr>
<tr><td>Owner</td><td>The bank.</td></tr>
<tr><td>Environmental constraints</td><td>Imposed by the environment: the bank should endeavour to ensure that the applicant understands the principal terms and conditions of an overdraft. Accepted in modelling: only customers of the bank with a current account of at least 6 months can avail of the facility of overdrafts against their current accounts; the loan officer may approve, deny or defer an overdraft application.</td></tr>
</tbody>
</table>

XYZ analysis: the system expressed as ‘a system to do X by Y in order to achieve Z’

X: to process overdraft applications
Y: by performing credit assessment
Z: to promote the lending business of the bank

Three ‘E’s analysis: declaration of the measures of performance

Efficacy: Does the output count as ‘processed overdraft applications’?
Efficiency: Were minimum resources used?
Effectiveness: Does the provision of overdraft facilities serve to promote the lending business of the bank?
Figure 5: The root definition of ‘a system to process applications for overdrafts from current account customers’

Table 1: Information requirements analysis for the overdraft approval system of Alpha Bank

| Activities from the model | Input information needed | Source of input information | Output information generated | Recipient of output information |
| --- | --- | --- | --- | --- |
| 1 Check eligibility of the applicant | Overdraft application; customer particulars; account record; court writ; loan officer | Environment; control & monitor system; control & monitor system; environment; control & monitor system | Valid application; decision on the acceptance of the application | Activity 2; environment |
| Checklist for the completion of activity 1 | The customer status of the applicant is validated; the ages of the applicant’s current accounts are checked; the applicant is confirmed to have no court writ restriction; a decision on the acceptance/denial/deferral of the application is made; suggestions are made to non-eligible applicants | | | |
| 2 Assess the application | Valid application; transaction record; customer particulars; account record; borrowing regulation | Activity 1; control & monitor system; control & monitor system; control & monitor system; environment | Assessment record; applicant’s portfolio | Activity 3; Activity 3 |
| Checklist for the completion of activity 2 | Transaction records of the applicant’s accounts are obtained and reviewed; interviews with the applicant are scheduled and held; general borrowing regulations of the bank are explained to the applicant | | | |
| 3 Make decision on the overdraft application | Assessment record; applicant’s portfolio; borrowing regulation | Activity 2; Activity 2; environment | Conclusion of credit assessment; overdraft limit; interest rate for the overdraft amount; decision on the application | Activity 4; Activity 4; Activity 4; Activity 4 |
| Checklist for the completion of activity 3 | The applicant’s portfolio is studied; credit analysis is performed; results of the credit assessment are concluded; an overdraft limit is assigned; an interest rate for the overdraft amount is assigned; a decision on the approval/denial/deferral of the overdraft application is made | | | |
| 4 Take related administrative procedures | Conclusion of credit assessment; overdraft limit; interest rate for the overdraft amount; decision on the application; account record | Activity 3; Activity 3; Activity 3; Activity 3; control & monitor system | Decision on the application; assigned terms of borrowing; updated account record | Environment; environment; control & monitor system |
| Checklist for the completion of activity 4 | The decision on the approval/denial/deferral of the overdraft application is confirmed to the applicant; further information is requested from the applicant in case of deferral; the assigned overdraft limit and interest rate are confirmed to the applicant; in an approval case, the applicant’s current account is updated with the overdraft limit and interest rate | | | |

Figure 6: The conceptual model of ‘a system to process applications for overdrafts from current account customers’
Figure 8: An event diagram of overdraft approval at Alpha Bank
Figure 9: An object-flow diagram of overdraft approval
Figure 10: An object diagram of overdraft approval
4. Conclusion

The key features of the information systems development process implied by the above discussion are now clear. The process starts from an analysis of the intentional actions to be taken and of the information support appropriate for those concerned with taking them; only then is it legitimate to turn attention to the system that will provide the support. We can think of the support system as containing a data processing element and a data storage element, as well as those who operate, maintain and modify it. This simple model of the support system can then accommodate various hardware and software configurations, and the selection of a suitable one becomes an issue calling for the expertise of the IT professional.

References
{"Source-Url": "http://gebrc.nccu.edu.tw/proceedings/APDSI/2000/list/pdf/P-199.pdf", "len_cl100k_base": 4208, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 23253, "total-output-tokens": 5078, "length": "2e12", "weborganizer": {"__label__adult": 0.0005507469177246094, "__label__art_design": 0.0016317367553710938, "__label__crime_law": 0.001567840576171875, "__label__education_jobs": 0.0643310546875, "__label__entertainment": 0.00019276142120361328, "__label__fashion_beauty": 0.00042057037353515625, "__label__finance_business": 0.0816650390625, "__label__food_dining": 0.000843048095703125, "__label__games": 0.001140594482421875, "__label__hardware": 0.0018310546875, "__label__health": 0.0025005340576171875, "__label__history": 0.0010404586791992188, "__label__home_hobbies": 0.0004858970642089844, "__label__industrial": 0.002117156982421875, "__label__literature": 0.0015630722045898438, "__label__politics": 0.0008645057678222656, "__label__religion": 0.0008678436279296875, "__label__science_tech": 0.269287109375, "__label__social_life": 0.0004801750183105469, "__label__software": 0.0771484375, "__label__software_dev": 0.487548828125, "__label__sports_fitness": 0.0003924369812011719, "__label__transportation": 0.0011930465698242188, "__label__travel": 0.0004324913024902344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22900, 0.01407]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22900, 0.29327]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22900, 0.93579]], "google_gemma-3-12b-it_contains_pii": [[0, 4157, false], [4157, 7166, null], [7166, 9935, null], [9935, 12500, null], [12500, 15234, null], [15234, 17279, null], [17279, 20577, null], [20577, 20640, null], [20640, 21582, null], [21582, 22900, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4157, true], [4157, 7166, null], [7166, 9935, null], [9935, 12500, null], [12500, 15234, null], [15234, 17279, null], [17279, 20577, null], [20577, 20640, null], [20640, 21582, null], [21582, 22900, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22900, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22900, null]], "pdf_page_numbers": [[0, 4157, 1], [4157, 7166, 2], [7166, 9935, 3], [9935, 12500, 4], [12500, 15234, 5], [15234, 17279, 6], [17279, 20577, 7], [20577, 20640, 8], [20640, 21582, 9], [21582, 22900, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22900, 0.07947]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
14583bb3ce386b8bf17627ef6b8387fd6a3edfed
METHODOLOGICAL CONSTRUCTION OF RELIABLE DISTRIBUTED ALGORITHMS

by

Liuba Shrira, Computer Science Dept., Technion, Haifa, Israel
and
Michael Rodeh, IBM Israel Scientific Center, Haifa, Israel

Technical Report #361, March 1985

ABSTRACT

The problem of designing distributed algorithms which operate in the presence of undetectable link failures is considered. A methodology is suggested for transforming distributed algorithms into reliable algorithms. The transformation is considered at two levels: reliable transmission through every single link, and, as an alternative, a reliable implementation of high level communication primitives. Application of the suggested methodology yields reliable implementations of distributed termination and k-selection.

1. Introduction

In the context of distributed computing, fault tolerance can either deal with the failure of processing elements, or with the failure of communication links, or with both. We concentrate on link failures. In an asynchronous system, it is in general unpredictable when an operation will end. In particular, a very slow link is sometimes hard to distinguish from a malfunctioning one. To model such behavior we assume that failures are undetectable. Under such an assumption, the communication network must have redundancy by way of bi-connectivity; otherwise, communication cannot be guaranteed. Thus we are after passive redundant systems, in which there is no need to observe malfunction [Hop80]. Notice that under such circumstances, no recovery can take place, as there is no way to initiate any recovery procedures.

Failures may be circumvented at various levels of granularity. Some systems have redundancy at the component level [Hop80]. Looking at the individual links as the basic system components, this approach leads to redundancy in the routing protocols so as to ensure message transfer through each of the individual links. Securing peer-to-peer communication is another common approach, which lends itself naturally to dynamic-routing techniques (e.g. [MS79]). Unfortunately, all these schemes assume that failures are detectable. We do not know of any approach to reliability other than the one described in [S85] and [R84] which has a higher granularity level than peer-to-peer communication. Fortunately enough, the scheme described there is passive, so that it is applicable to our setup.

We define two natural communication primitives, called announcement and consultation. These primitives have efficient reliable implementations. The primitives are shown to be useful for the design of reliable algorithms for the problems of distributed termination (e.g. see [Fra80]) and k-selection (e.g. see [AHU]). The alternative of securing communication at the individual link level is also discussed.

2. The computation model

Consider a network $N = (P, L)$ of $n = |P|$ asynchronous processors connected by $l = |L|$ links. The processors are engaged in solving some computational problem by executing their individual programs and by exchanging messages. We also assume that:

1. Each processor has a unique name known to its immediate neighborhood. Also, one of the processors, the leader, is distinguished from the others.
2. The time required for a message to pass through a link is unpredictable and unbounded.
3. Links have unbounded buffers.
4. Messages pass through links in a FIFO order.
2.1. The model of faults

The model of faults assumes that at most one link may fail. A malfunctioning link may recover and fail again. The impact of a failure is the total loss of some of the messages which were waiting in the link buffer at the time of failure. Even if the link later recovers, lost messages never reappear. Consider the graph defined by the non-faulty links and processors. The link connectivity of this graph is essential for the implementation of the algorithms we deal with. Thus, in the sequel we assume that the network is at least bi-connected.

2.2. The complexity measure

The complexity measure we refer to in evaluating the efficiency of algorithms is the total number of messages of "reasonable size" which are transmitted through the non-faulty links during the execution of an algorithm. The size of the additional processor memory dedicated to dealing with faults is also considered.

3. Reliable algorithms

We are interested in algorithms that operate correctly even if a link fails during their execution. In the sequel we refer to such algorithms as reliable algorithms. Designing reliable algorithms is a work of art; transforming an existing algorithm and making it reliable may be easier. We propose a scheme for transforming an arbitrary distributed algorithm (called the source algorithm) into a reliable algorithm. This is done by securing the algorithm at the single link level. Then, we define two useful high level communication primitives and suggest an efficient securing scheme for them. In this way a source algorithm which is expressed in terms of the defined primitives may be transformed mechanically into a reliable algorithm.

3.1. Securing communication at a single link level

Consider a distributed algorithm \( A \) which is executed by the processors of a network \( N = (P,L) \). Let \( u \) and \( v \) be two processors connected by a link, and assume that, during the execution of \( A \), \( u \) sends a message \( m \) to \( v \). In the presence of a faulty link the message \( m \) may get lost. Therefore, at least two copies of \( m \) must be sent via link-disjoint routes. A straightforward approach is to broadcast \( m \) to \( v \). To broadcast \( m \), \( u \) adds the identity of \( v \) to \( m \) and passes it to all its neighbours which, upon receiving the message, pass it further. As the network is at least bi-connected, at least one of the copies of \( m \) will eventually arrive at \( v \). Special provisions must be taken to ensure that every processor sends \( m \) to every neighbour at most once. If the above provisions are taken, then the cost of sending \( m \) reliably is \( O(|L|) \) messages. Securing all the messages of \( A \) in this way yields a reliable implementation of \( A \). The message complexity of this reliable implementation is \( O(f \cdot |L|) \), where \( f \) is the message complexity of the source algorithm \( A \).

In order to ensure that during a broadcast of a message \( m \) every processor sends this message to every neighbour at most once, the initiator of \( m \) may append a sequence number to it, while every processor remembers the highest sequence number of a message it has already sent. In cases where several broadcasts may proceed simultaneously in a reliable implementation of an algorithm \( A \) (meaning that in the source algorithm \( A \) messages are initiated simultaneously), this scheme requires \( O(n) \) local storage per processor.
Achieving reliability via broadcasting is applicable to any source algorithm. A natural question is whether the efficiency of this scheme may be improved. It turns out that broadcasting is a rather common pattern of communication in distributed computing and may be viewed as a high-level communication primitive. In the next section this primitive (called *announcement*) and another one, called *consultation*, are discussed, and an efficient reliable implementation for them is suggested. These results are applicable to the link reliability problem as well.

3.2. High level communication primitives

High level communication primitives may serve as building blocks for the construction of distributed algorithms. The following communication primitives implicitly appear in many distributed algorithms.

1. *Announcement* - a message sent by one processor to all the other processors of the network. We deliberately refrain from using the word *broadcasting* to distinguish the announcement abstraction from any specific algorithm which performs a broadcast operation.

2. *Consultation* - a global computation over the network. More formally, a basic function is a non-trivial associative and commutative binary function \( f : D \times D \rightarrow D \); \( f_n \) is an \( n \)-consultation function built upon \( f \) if, for every \( a_0, a_1, \ldots, a_{n-1} \in D \):

\[
f_n(a_0, a_1, \ldots, a_{n-1}) = f(a_0, f(a_1, f(a_2, \ldots, f(a_{n-2}, a_{n-1}) \cdots ))).
\]

For instance, taking \( f = \max \) makes \( f_n \) the maximum of the \( n \) values. Consultation is the task of computing distributedly the \( n \)-consultation function \( f_n(a_0, a_1, \ldots, a_{n-1}) \), such that \( a_i \) is stored at \( P_i \), and the resulting value is obtained at the leader. Consultation may be viewed as a computation over a set of values distributed in the network, which can be implemented by a single pass which starts at the leaves of any spanning tree and ends at its root.

Many distributed algorithms use announcement and consultation as subroutines. Therefore, providing a reliable efficient implementation of these primitives (without a direct reliable support of the more basic communication primitives underlying them) could be of great help towards the development of reliable, efficient distributed algorithms.

3.2.1. Implementing the primitives in a reliable environment

Consider first the implementation of announcement and consultation in an environment in which no failures occur. By employing a single spanning tree rooted at the leader, both can be implemented rather efficiently. Announcement is easy, as the leader may broadcast a message to each of its neighbors asking them to send that message on towards the leaves. Consultation is easy too, as it is initiated at the leaves and proceeds towards the leader. It can be implemented very efficiently by collecting the accumulated partial computations from all the children of a processor \( p \), computing an intermediate result at \( p \) and then sending it to the parent processor. Obviously this scheme breaks down when links fail. In the sequel we suggest an implementation of the two communication primitives which resists a single link failure. First the simple case of a ring network is considered. Then general networks are discussed. Finally, several applications of the reliable implementations are proposed.
3.2.2. Faulty rings

On a ring, announcement is easy - the leader sends a message in both directions; every processor gets at least one of the copies. Consultation is more complicated and is done by binary search. First a high level description of the implementation is given, and then the algorithm is presented. Initially, the whole ring is unconsulted (the consulted part of the ring is the interval of processors that have already participated in the consultation). Invariantly, the yet unconsulted part is a contiguous section of the ring. The algorithm proceeds in phases. Each phase reduces by half the size of the yet unconsulted part of the ring. In phase \( i \), the leader sends a start command to the middle vertex \( v \) of the unconsulted part of the ring, via both directions. Upon receiving a copy of the start command in phase \( i \), \( v \) initiates a consultation towards both directions. Upon receiving a consultation message from one direction, the leader enters a new phase, re-computes the middle of the currently unconsulted part of the ring and issues the next start command.

The messages issued during the \( i \)-th phase of the consultation algorithm are therefore \( \langle \text{start}, i, v \rangle \) and \( \langle \text{consult}, i, v' \rangle \), where \( v \) specifies the destination of the start message and \( v' \) is the partial result of the consultation built over \( f \), computed in the processors which this message has passed. All types of messages are forwarded by every processor from their initiator to their destination.

To formally describe the consultation algorithm, the processors are numbered in a clockwise order starting at the leader, which is denoted by \( P_0 \). For the sake of simplicity, we assume that \( n = 2^k \) for some natural number \( k \). Detailed local programs for the processors on the ring are presented in Figure 1. It can be easily verified that during the execution of the algorithm, at each phase:

1. at least one consult message reaches \( P_0 \), reducing by half the yet unconsulted part of the ring. Thus after \( \log n \) phases the entire ring is covered.
2. at most one processor can receive the two start messages designated to it. Furthermore, at most two consult messages reach \( P_0 \). Thus at most \( 2n \) messages are transmitted.

The message complexity of the algorithm is therefore at most \( 2n \log n \).

comment: The program for the leader $P_0$:

    $l := 1$; $cl := \text{undefined}$; $r := n-1$; $cr := \text{undefined}$; $\pi := 1$;
    while $l < r - 1$ do
        $m := (r+l)/2$;
        send $\langle \text{start}, \pi, m \rangle$ to both $P_1$ and $P_{n-1}$;
        wait until a message $\langle \text{consult}, \pi, g \rangle$ arrives;
        if the message has arrived from $P_1$
            then begin $cl := g$; $l := m$ end
            else begin $cr := g$; $r := m$ end;
        $\pi := \pi + 1$
    end;
    if $cl = \text{undefined}$ then return $f(a_0, cr)$
    else if $cr = \text{undefined}$ then return $f(a_0, cl)$
    else return $f(a_0, f(cr, cl))$

comment: The program for a processor $P_i$ other than the leader (only the first start message in a phase is responded to). Initially $\pi := 0$;

    wait until a message arrives; assume that it arrived from the neighbour $P_s$;
    if the message is $\langle \text{start}, j, m \rangle$ then
        if $j > \pi$ then begin
            $\pi := j$;
            if $i = m$
                then send $\langle \text{consult}, j, a_i \rangle$ to both $P_s$ and $P_{2i-s}$
                else send $\langle \text{start}, j, m \rangle$ to $P_{2i-s}$
        end
        else skip
    else if the message is $\langle \text{consult}, j, g \rangle$ then
        send $\langle \text{consult}, j, f(g, a_i) \rangle$ to $P_{2i-s}$

Figure 1: Detailed local programs for consultation on a ring.
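As an illustration of the bound (an added example, not part of the original report), take \( n = 8 \): the algorithm runs \( \log_2 8 = 3 \) phases, each costing at most \( 2n = 16 \) message transmissions, so

\[
2n \log_2 n = 16 \cdot 3 = 48
\]

messages suffice. The naive alternative, in which each of the \( n \) processors secures its value by a separate announcement of cost \( O(n) \) on the ring, would already need on the order of \( n^2 = 64 \) messages at this small size.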
```
comment: the program for the leader P_0;
  l := 1;  cl := undefined;
  r := n-1;  cr := undefined;
  pi := 1;
  while l < r - 1 do
      m := (r + l) / 2;
      send <start, pi, m> to both P_1 and P_{n-1};
      wait until a message <consult, pi, g> arrives;
      if the message has arrived from P_1
          then begin cl := g; l := m end
          else begin cr := g; r := m end;
      pi := pi + 1
  end;
  if cl = undefined then return f(a_0, cr)
  else if cr = undefined then return f(a_0, cl)
  else return f(a_0, f(cr, cl));

comment: the program for a processor P_i other than the leader
         (only the first start message of a phase is responded to);
  initially: pi := 0;
  wait until a message arrives;
  assume that the message arrived from the neighbour P_s;
  if the message is <start, j, m> then
      if j > pi then
          begin
              pi := j;
              if i = m
                  then send <consult, j, a_i> to both P_s and P_{2i-s}
                  else send <start, j, m> to P_{2i-s}
          end
      else skip
  else if the message is <consult, j, g> then
      send <consult, j, f(g, a_i)> to P_{2i-s};
```

Figure 1: Detailed local programs for consultation on a ring.

### 3.2.3. Networks with arbitrary shape

In this section the reliable implementation of announcement and consultation in general networks is discussed. In the case of general networks, reliable announcement requires \( O(|L|) \) message transmissions, while the straightforward implementation of consultation, obtained by each processor performing an announcement, requires \( O(n \cdot |L|) \) message transmissions. A more efficient solution may be obtained by preparing a reliable communication scheme beforehand. A ring is a sparse bi-connected graph, which is what makes an efficient announcement algorithm easy to develop for it. To reduce the cost of announcement in general networks, we can build a *sparse bi-connected spanning subgraph* first: Itai and Rodeh [IR78] showed that a bi-connected spanning subgraph with \( O(n) \) links is contained in every bi-connected graph with \( n \) vertices. This subgraph is obtained by augmenting any depth-first search (DFS) tree with one lowest frond for every vertex for which fronds exist. We call this sparse subgraph DFSF. Reducing a general network to the sparse DFSF reduces the cost of reliable announcement on it to \( O(n) \) messages and the cost of reliable consultation to \( O(n^2) \) messages. Obviously, a DFSF is useful also for securing transmission through individual links.

### 3.2.4. Constructing the DFSF reliably

A reliable distributed DFSF algorithm may be derived from the original DFS algorithm [HT73, T72]. This algorithm adjoins vertices, one by one, to the DFS tree, assigning a sequence number to a vertex as it joins the tree. These numbers are used to establish the lowest frond for every vertex that has one. There is a single center of control which moves from a vertex to one of its neighbors. When the control is at some vertex \( v \), \( v \) inspects its neighbors to find a vertex which does not yet belong to the tree. If such a vertex exists, the center of control is moved there, the direct link to it is marked, and this vertex joins the structure and is assigned a sequence number. Otherwise, the control is moved (backtracks) to the parent of \( v \) and a lowest frond from \( v \) is established and marked. If the control cannot backtrack, since \( v \) has no parent in the tree (i.e., \( v \) is the root), the algorithm terminates successfully.

*Moving* the control reliably from a vertex \( v \) to another vertex \( u \) is done by \( v \) announcing the identity of \( u \). *Establishing* a lowest frond is done by \( v \) announcing the identity of the vertex \( u \) with the smallest DFS number. Then both \( u \) and \( v \) mark the direct link between them as being the lowest frond. *Inspecting* the neighbors reliably by a vertex \( v \) is done by \( v \) sending an inquiring message to all its \( d \) neighbors and waiting for \( d-1 \) answers. These answers are the DFS numbers for vertices already in the structure and negative answers for others. After \( d-1 \) answers have arrived, \( v \) announces the inquiring message designated to the vertex \( u \) whose answer is missing and waits for an answer from \( u \). \( u \) also sends its answer by announcement.
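Although the paper only needs the distributed construction, a centralized sketch of what a DFSF looks like may help. The following code is our illustration (function name and example graph are ours): it runs a DFS and, for every vertex that has fronds, keeps the frond to its lowest-numbered ancestor.

```python
# Centralized sketch of a DFSF: the DFS tree plus, for every vertex that has
# fronds (non-tree edges to ancestors), the frond to its lowest ancestor.

def dfsf(adj, root=0):
    num, tree = {}, set()
    def dfs(v):
        num[v] = len(num)
        for u in adj[v]:
            if u not in num:
                tree.add(frozenset((v, u)))
                dfs(u)
    dfs(root)
    fronds = set()
    for v in adj:
        back = [u for u in adj[v]
                if frozenset((v, u)) not in tree and num[u] < num[v]]
        if back:                       # v has fronds: keep the lowest one
            fronds.add(frozenset((v, min(back, key=num.get))))
    return tree | fronds

adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}  # K4, 6 links
print(len(dfsf(adj)), "of 6 links kept")   # 5: already sparser than K4
```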
The message complexity of the DFSF construction algorithm is derived as follows. The message complexity of announcement is \( |L| \). Thus moving control, establishing the lowest frond, and inspection each take \( O(|L|) \) messages. Each pass of the center of control results either in a new vertex joining the DFSF or in a backtrack through which a lowest frond is established. There are \( n \) vertices and at most \( n - 1 \) lowest fronds in the DFSF. Thus, the overall message complexity is \( O(n \cdot |L|) \). Note that there is only a single center of activity in the DFSF construction algorithm. Therefore only one processor at a time may initiate a broadcast. As a result, constant local space is sufficient for efficient broadcast provision.

### 3.2.5. Further improvements

In a recent paper [IR84] Itai and Rodeh have given a generalization of the scheme described in Section 3.2.2. For a spanning tree \( T \) rooted at \( r \), let \( T[p] \) denote the path in \( T \) from \( p \) to \( r \). We say that the spanning trees \( T \) and \( S \) satisfy the *2-tree condition for links* if for every processor \( p \), \( T[p] \) and \( S[p] \) are link-disjoint. Itai and Rodeh have shown that every bi-connected network contains two such trees with respect to every root \( r \). For peer-to-peer communication the following 2-tree protocol for links is suggested: for a processor \( u \) to send a message to another processor \( v \), \( u \) first sends the message to the root on both trees. At least one of the two copies will arrive. Upon arrival of a message, the root creates two copies and sends them to the target processor \( v \), again through the two trees. As before, at least one of the two messages will arrive.

In [IR84] s-t numbering techniques [ET76] (see also [E]) are used to impose a linear order among the processors of the network. This linear order is somewhat similar to the linear clockwise order of the processors of a ring. We omit the details here and only state that by employing s-t numbering in a way similar to the implementation of consultation on a ring, consultation functions may be computed in \( O(n \log n) \) messages.

### 4. Applying the reliable primitives

In this section we derive reliable algorithms from known distributed algorithms by using the reliable implementations of consultation and announcement.

### 4.1. Reliable distributed termination detection

The problem of distributed termination is that of transforming a globally stable state (a state in which every processor is 'idle') to a termination state (in which all the processors know that no message is ever going to arrive). This problem has been posed and solved by Francez [FR80] and has been reworked many times since then. Consider Francez and Rodeh's solution to the distributed termination problem [FR82]. Their algorithm consists of phases. In every phase three waves of messages flow through an arbitrary spanning tree:

1. W1 finds out if all the processors are locally stable.
2. W2 enables W3.
3. W3 verifies that the processors are still locally stable.

Francez and Rodeh describe their algorithm in terms of communication on a spanning tree. However, what they really do is use announcement for W2 and consultation for W1 and W3. Thus their scheme may become reliable for the price of a factor of \( O(\log n) \).
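The reduction of the three waves to the two primitives is mechanical. The sketch below is our own toy rendition, not Francez and Rodeh's algorithm: `consult` and `announce` are trivial local stubs standing in for the reliable implementations of Section 3.2 (with which each wave would cost \( O(n \log n) \) messages, whence the \( O(\log n) \) factor).

```python
from functools import reduce

def consult(f, values):
    """Stand-in for reliable consultation: fold an associative, commutative f."""
    return reduce(f, values)

def announce(procs, key):
    """Stand-in for reliable announcement from the leader."""
    for p in procs:
        p[key] = True

def detect_termination(procs, step):
    phase = 0
    while True:
        phase += 1
        step(procs)                                    # underlying computation runs
        w1 = consult(lambda x, y: x and y, [p["stable"] for p in procs])
        if not w1:
            continue                                   # W1 failed; start a new phase
        announce(procs, "enabled")                     # W2 enables the verification
        w3 = consult(lambda x, y: x and y, [p["stable"] for p in procs])
        if w3:
            announce(procs, "terminated")              # everybody may halt
            return phase

# Toy run: each processor becomes locally stable after some work.
procs = [{"work": w, "stable": False} for w in (1, 3, 2)]
def step(ps):
    for p in ps:
        p["work"] = max(0, p["work"] - 1)
        p["stable"] = p["work"] == 0
print("termination detected in phase", detect_termination(procs, step))
```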
### 4.2. Reliable \( k \)-selection

\( k \)-selection is a problem that has attracted many researchers. First we concentrate on the algorithm by Blum et al. [BFPRT73] (also in [AHU, Algorithm 3.63]). The algorithm has a bag \( B \) of elements as input.

### 4.2.1. The algorithm due to Blum et al.

1. If the bag \( B \) is small (say, no more than 50 elements), find its \( k \)-th element by sorting and return it. Otherwise execute the following steps.
2. Divide the elements of \( B \) into bags of size 5, so that at most one bag has less than 5 elements. The elements in this special bag are called *leftovers*.
3. Find the median of each of the 5-element bags by sorting, and form a set \( M \) of all the medians.
4. Obtain the median \( m \) of the set \( M \) by applying the algorithm recursively to \( M \).
5. Partition \( B \) into 3 subbags \( BL \), \( BE \), \( BG \) which contain the elements of \( B \) which are smaller than, equal to, or larger than \( m \), respectively. Let \( B' \) be defined as follows:
\[
B' = \begin{cases} BL & \text{if } k \le |BL| \\ BE & \text{if } |BL| < k \le |BL| + |BE| \\ BG & \text{if } |BL| + |BE| < k \end{cases}
\]
If \( B' = BE \) then return \( m \) as a result. Otherwise, compute the value \( k' \) defined by:
\[
k' = \begin{cases} k & \text{if } B' = BL \\ k - |BL| - |BE| & \text{if } B' = BG \end{cases}
\]
6. Apply the algorithm recursively to find the \( k' \)-th element of \( B' \), returning it as a result.

Assume now that the elements of \( B \) are distributed among the processors of a network. Various distributed implementations of the algorithm in this setup appear in [SFR83]. To assure reliability let us discuss each step at a time.

1. Counting the number of elements in \( B \) is a simple consultation operation. If the number of elements is small they can all be shipped to the leader (another consultation operation) and the \( k \)-th element is computed there.
2. Dividing the elements of \( B \) into bags of size 5 is a consultation operation by itself. Note that it is associative and commutative.
3. This step is done locally in each of the processors.
4. Applying the algorithm recursively requires announcement (for the sake of synchronization). The distribution of \( m \) among the processors is done by another announcement. The other details of the recursive invocation are identical to the ones described in [SFR83].
5. Partitioning \( B \) to yield \( B' \) is done locally, while finding \( k' \) requires counting, which is done by consultation.
6. Distributing \( k' \) and the recursive invocation of Step 6 are done by announcement.

We hope that this sketchy description is sufficient to convince the reader that a distributed and reliable implementation of the algorithm by Blum et al. is indeed easy. Still, it leads to non-trivial complexity results: by using the complexity analysis of [SFR83] and the suggested reliable implementation of consultation and announcement, an \( O(\sqrt{|B|} \cdot \log n) \) message complexity may be derived.
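For reference, here is a compact sequential rendition of the six steps above (our sketch; the threshold 50 follows step 1, but the names and the handling of the leftovers' median are ours). The distributed version replaces the local counting and partitioning by consultation, and the synchronization by announcement.

```python
import random

# Sequential sketch of the selection algorithm; k is 1-based.  The leftover
# bag's median simply joins M here, a harmless simplification.

def select(B, k):
    if len(B) <= 50:                    # step 1: sort small bags directly
        return sorted(B)[k - 1]
    groups = [B[i:i + 5] for i in range(0, len(B), 5)]      # step 2
    M = [sorted(g)[len(g) // 2] for g in groups]            # step 3
    m = select(M, (len(M) + 1) // 2)                        # step 4
    BL = [x for x in B if x < m]                            # step 5
    BE = [x for x in B if x == m]
    BG = [x for x in B if x > m]
    if k <= len(BL):
        return select(BL, k)                                # step 6, B' = BL
    if k <= len(BL) + len(BE):
        return m                                            # B' = BE
    return select(BG, k - len(BL) - len(BE))                # step 6, B' = BG

xs = [random.randrange(1000) for _ in range(500)]
assert all(select(xs, k) == sorted(xs)[k - 1] for k in (1, 250, 500))
```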
### 4.2.2. Probabilistic \( k \)-selection

Finding the \( k \)-th element of a bag can be done probabilistically by partitioning the bag with respect to a randomly chosen element, and proceeding in a fashion very similar to that described in Section 4.2. Choosing at random an element \( m \) out of a bag of \( b = |B| \) elements in a non-reliable network is the crucial problem here. We concentrate on the distributed random choice procedure suggested in [SFR83] for the case of a reliable network. The algorithm assumes the existence of a spanning tree constructed over the processors of the network. Each processor assigns a fixed order to its own elements and to its children. Assume that a processor knows \( t_1, t_2, \ldots, t_{d'} \) - the respective numbers of elements which reside in each of its \( d' \) sub-trees (where \( d' \) is its degree in the spanning tree).

The random choice proceeds as follows. The root randomly chooses an integer \( i \) in the range 1 to \( b \). To find the corresponding element, it checks whether this element is one of the \( t \) elements which reside in its own memory; this happens if \( i \le t \). Otherwise, the root sends a message LOCATE\((i')\) to its \( j \)-th child, where \( i' \) is the smallest positive integer of the form \( i - t - t_1 - \cdots - t_{j-1} \). Upon reception of a message LOCATE\((i')\), the receiving processor acts similarly to the root. When an element has been located, it is sent to the root, to serve as the partitioning element.

The above distributed random choice procedure cannot be directly expressed as a composition of consultation and announcement operations, because the element location procedure is not commutative and associative. For such and similar cases the methodological approach described in Section 3.1 suggests securing communication at the link transmission level. Unfortunately, in the case of the \( k \)-selection algorithm this introduces significant message complexity degradation. We therefore take an alternative approach: redesign the element location procedure and give it a direct reliable implementation.

We first consider the special case of a ring network. To locate the element, instead of using the order defined by the unreliable spanning tree, the algorithm uses the clockwise order defined by the ring. The processors are assigned sequence numbers in this order by the root initiating a numbering message in both directions. A binary-search-type procedure is then carried out by the root over the ring to locate the processor containing the \( i \)-th number in the total order defined by the local orders and the ring order. The procedure operates in phases. At each phase the root *splits* the ring into two parts and *counts* the number of elements in each part. The *splitting* is done by the root announcing the sequence number of the split point. The *counting* is done by the two vertices near the split point initiating a counting message towards the root. At least one of the countings succeeds in reaching the root. Thus the root knows in what part of the ring the processor containing the \( i \)-th number resides, and places the next split point in the middle of this part. \( \log n \) phases are sufficient to locate the processor containing the \( i \)-th element. At each phase \( O(n) \) messages are transmitted. The message complexity of the reliable random choice procedure is thus \( O(n \log n) \). For the case of a general network, s-t numbering can be used to determine a linear order among the processors, and then binary search is applicable. Again, at most \( O(n \log n) \) messages are sent.

### 5. Conclusion

In this paper a transformational approach to the design of reliable algorithms is suggested. We start with non-reliable algorithms, identify certain high-level communication patterns, and use reliable implementations of them to obtain reliable algorithms. The resulting algorithms are in many cases efficient and nontrivial. For the cases where the available repertoire of reliable high-level primitives is not sufficient to express the algorithm, low-level reliable announcement may be used.

### 6. Acknowledgements

We would like to thank Danny Dolev, Nissim Francez, Oded Goldreich and Alon Itai for very helpful discussions.

### References

[AHU] [BFPRT73] [ET76] [ET73] [FR82] [FR80] [HOP78] [HT73] [IR78] [IR84] [MS79] [SS85] [SFR83] [T72]
{"Source-Url": "http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/1985/CS/CS0361.pdf", "len_cl100k_base": 6460, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 60924, "total-output-tokens": 8004, "length": "2e12", "weborganizer": {"__label__adult": 0.00043129920959472656, "__label__art_design": 0.0003561973571777344, "__label__crime_law": 0.0005426406860351562, "__label__education_jobs": 0.0006914138793945312, "__label__entertainment": 0.00010967254638671876, "__label__fashion_beauty": 0.0002028942108154297, "__label__finance_business": 0.00034308433532714844, "__label__food_dining": 0.0004715919494628906, "__label__games": 0.0007905960083007812, "__label__hardware": 0.0030956268310546875, "__label__health": 0.0011796951293945312, "__label__history": 0.0004286766052246094, "__label__home_hobbies": 0.00015854835510253906, "__label__industrial": 0.0007672309875488281, "__label__literature": 0.00034809112548828125, "__label__politics": 0.0003883838653564453, "__label__religion": 0.00080108642578125, "__label__science_tech": 0.1689453125, "__label__social_life": 0.00011539459228515624, "__label__software": 0.00888824462890625, "__label__software_dev": 0.80908203125, "__label__sports_fitness": 0.0004374980926513672, "__label__transportation": 0.0009465217590332032, "__label__travel": 0.0002803802490234375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29600, 0.03894]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29600, 0.45255]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29600, 0.90819]], "google_gemma-3-12b-it_contains_pii": [[0, 173, false], [173, 908, null], [908, 2854, null], [2854, 4298, null], [4298, 6203, null], [6203, 8161, null], [8161, 10208, null], [10208, 12056, null], [12056, 13926, null], [13926, 15335, null], [15335, 17325, null], [17325, 19297, null], [19297, 20739, null], [20739, 22356, null], [22356, 24069, null], [24069, 26228, null], [26228, 27852, null], [27852, 29600, null]], "google_gemma-3-12b-it_is_public_document": [[0, 173, true], [173, 908, null], [908, 2854, null], [2854, 4298, null], [4298, 6203, null], [6203, 8161, null], [8161, 10208, null], [10208, 12056, null], [12056, 13926, null], [13926, 15335, null], [15335, 17325, null], [17325, 19297, null], [19297, 20739, null], [20739, 22356, null], [22356, 24069, null], [24069, 26228, null], [26228, 27852, null], [27852, 29600, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29600, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29600, null]], "pdf_page_numbers": [[0, 173, 1], [173, 908, 2], [908, 2854, 3], [2854, 4298, 4], [4298, 6203, 5], [6203, 
8161, 6], [8161, 10208, 7], [10208, 12056, 8], [12056, 13926, 9], [13926, 15335, 10], [15335, 17325, 11], [17325, 19297, 12], [19297, 20739, 13], [20739, 22356, 14], [22356, 24069, 15], [24069, 26228, 16], [26228, 27852, 17], [27852, 29600, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29600, 0.0]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
75fafbbf254c1fdb4b04200fec9267782ca8a0d4
The *qdap* package (Rinker, 2013) is an R package designed to assist in quantitative discourse analysis. The package stands as a bridge between qualitative transcripts of dialogue and statistical analysis and visualization. The *tm* package (Feinerer and Hornik, 2014) is a major R (R Core Team, 2013) package used for a variety of text mining tasks. Many text analysis packages have been built around the *tm* package's infrastructure (see the CRAN Task View: Natural Language Processing). As *qdap* aims to act as a bridge to other R text mining analyses, it is important that *qdap* provides a means of moving between the various *qdap* and *tm* data types. This vignette serves as a guide to navigating between the *qdap* and *tm* packages. Specifically, the two goals of this vignette are to (1) describe the various data formats of the two packages and (2) demonstrate the use of *qdap* functions that enable the user to move seamlessly between the two packages.

## 1 Data Formats

The qdap and tm packages each have two basic data formats. qdap stores raw text data in the form of a data.frame augmented with columns of demographic variables, whereas tm stores raw text as a Corpus and annotates demographic information with Meta Data attributes. The structures are both lists and are comparable. The second format both packages use is a matrix structure of word frequency counts. The qdap package utilizes the Word Frequency Matrix (`wfm` function), whereas the tm package utilizes the Term Document Matrix or Document Term Matrix (`TermDocumentMatrix` and `DocumentTermMatrix` functions). Again, the structure is similar between these two data forms. Table 1 lays out the data forms of the two packages.

<table>
<thead>
<tr><th>Package</th><th>Raw Text</th><th>Word Counts</th></tr>
</thead>
<tbody>
<tr><td>qdap</td><td>Dataframe</td><td>Word Frequency Matrix</td></tr>
<tr><td>tm</td><td>Corpus</td><td>Term Document Matrix/Document Term Matrix</td></tr>
</tbody>
</table>

Table 1: qdap-tm Data forms

Figure 1 provides a visual overview of the qdap functions used to convert between data structures. Many of these conversions could be achieved via the tm package as well.

Figure 1: Converting Data between qdap and tm

One of the most visible differences between the qdap and tm data forms is that qdap enables the user to readily view the data, while tm utilizes a print method that provides a summary of the data. The `tm::inspect` function enables the user to view tm data forms. The qdap package provides the `qdap::qview` and `qdap::htruncdf` functions to view more digestible amounts of the data. Let's have a look at the different data types. We'll start by loading both packages:

```r
library(qdap); library(tm)
```

Now let us have a look at the raw text storage of both packages.

### 1.1 Raw Text

#### 1.1.1 qdap's Raw Text

```r
DATA
qview(DATA)
htruncdf(DATA)
```

```
## > DATA
##
## #>        person sex adult                                  state code
## #> 1          sam   m     0          Computer is fun. Not too fun.   K1
## #> 2         greg   m     0                No it's not, it's dumb.   K2
## #> 9        sally   f     0           What are you talking about?    K9
## #> 10  researcher   f     1          Shall we move on?  Good then.  K10
## #> 11        greg   m     0  I'm hungry.  Let's eat.  You already?  K11
```

```
## > qview(DATA)
##
## nrow = 11 ncol = 5 DATA
##
## person sex adult state code
```

```
## > htruncdf(DATA)
##
## #       person sex adult      state code
## #          sam   m     0 Computer i   K1
## #         greg   m     0 No it's no   K2
## #          sam   m     0 I distrust   K8
## #        sally   f     0 What are y   K9
## #   researcher   f     1 Shall we m  K10
```

#### 1.1.2 tm's Raw Text

```r
data("crude")
crude
inspect(crude)
```

```
## A corpus with 20 text documents
##
## > crude[[1]]
## Diamond Shamrock Corp said that effective today it had cut its
## contract prices for crude oil by 1.50 dlrs a barrel. The reduction
## brings its posted price for West Texas Intermediate to 16.00 dlrs
## a barrel, the company said.
## "The price reduction today was made in the light of falling
## .
## .
## .
## Diamond is the latest in a line of U.S. oil companies that
## have cut its contract, or posted, prices over the last two days
## citing weak oil markets.
## Reuter
```

### 1.2 Word/Term Frequency Counts

Now we'll look at how the two packages handle word frequency counts. We'll start by setting up the raw text forms the two packages expect.

```r
tm_dat <- qdap_dat <- DATA[1:4, c(1, 4)]
rownames(tm_dat) <- paste("docs", 1:nrow(tm_dat))
tm_dat <- Corpus(DataframeSource(tm_dat[, 2, drop = FALSE]))
```

Both qdap_dat and tm_dat are storing this basic information:

<table>
<thead>
<tr><th></th><th>person</th><th>state</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>sam</td><td>Computer is fun. Not too fun.</td></tr>
<tr><td>2</td><td>greg</td><td>No it's not, it's dumb.</td></tr>
<tr><td>3</td><td>teacher</td><td>What should we do?</td></tr>
<tr><td>4</td><td>sam</td><td>You liar, it stinks!</td></tr>
</tbody>
</table>

#### 1.2.1 qdap's Frequency Counts

```r
with(qdap_dat, wfm(state, person))
```

```
##          greg sam teacher
## computer    0   1       0
## do          0   0       1
## dumb        1   0       0
## fun         0   2       0
## is          0   1       0
## it          0   1       0
## it's        2   0       0
## liar        0   1       0
## no          1   0       0
## not         1   1       0
## should      0   0       1
## stinks      0   1       0
## too         0   1       0
## we          0   0       1
## what        0   0       1
## you         0   1       0
```

#### 1.2.2 tm's Frequency Counts

```r
TermDocumentMatrix(tm_dat, control = list(
    removePunctuation = TRUE,
    wordLengths = c(0, Inf)
))
```

```
## <<TermDocumentMatrix (terms: 16, documents: 4)>>
## Non-/sparse entries: 17/47
## Sparsity           : 73%
## Maximal term length: 8
```

Now we'll look at the tm output using `inspect`.

```
##           docs
## terms      1 2 3 4
##   computer 1 0 0 0
##   do       0 0 1 0
##   dumb     0 1 0 0
##   fun      2 0 0 0
##   is       1 0 0 0
##   it       0 0 0 1
##   its      0 2 0 0
##   liar     0 0 0 1
##   no       0 1 0 0
##   not      1 1 0 0
##   should   0 0 1 0
##   stinks   0 0 0 1
##   too      1 0 0 0
##   we       0 0 1 0
##   what     0 0 1 0
##   you      0 0 0 1
```

The two matrices are essentially the same, with the exception of column order and names. Notice that, by default, tm discards words of fewer than three characters (via its word-length bounds) and retains punctuation; we made the matrices comparable by specifying `removePunctuation = TRUE` and `wordLengths = c(0, Inf)` in tm's control argument. qdap takes the opposite approach by default, removing punctuation and utilizing all words. Likewise, the tm package stores demographic information as meta data within the Corpus, whereas qdap incorporates the demographics with the text into a single data.frame structure. These differences arise out of the intended uses, audiences, and philosophies of the package authors. Each has strengths in particular situations. The qdap output is an ordinary matrix, whereas the tm output is a more compact `simple_triplet_matrix`. While the storage is different, both packages can be made to mimic the default of the other.
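For readers who want to inspect the tm object directly, the compact `simple_triplet_matrix` can always be densified with base coercion. A small sketch (ours, not from the vignette), assuming the TermDocumentMatrix built above is saved in an object we call `tm_tdm`:

```r
## Assumes the TermDocumentMatrix built above was saved, e.g.:
tm_tdm <- TermDocumentMatrix(tm_dat, control = list(
    removePunctuation = TRUE, wordLengths = c(0, Inf)))

## Densify tm's simple_triplet_matrix into an ordinary base matrix,
## i.e., the storage qdap's wfm uses by default.
m <- as.matrix(tm_tdm)
class(m)      # "matrix"
rowSums(m)    # per-term totals, directly comparable across packages
```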
Also note that the qdap `summary` method for `wfm` provides the user with information similar to the default print method of tm's `TermDocumentMatrix`/`DocumentTermMatrix` functions.

```r
summary(with(qdap_dat, wfm(state, person)))
```

```
## <<A word-frequency matrix (16 terms, 3 groups)>>
##
## Non-/sparse entries       : 17/31
## Sparsity                  : 65%
## Maximal term length       : 8
## Less than four characters : 56%
## Hapax legomenon           : 13(81%)
## Dis legomenon             : 3(19%)
## Shannon's diversity index : 2.73
```

Now we'll look at some qdap functions that enable the user to move between packages, gaining the flexibility and benefits of both packages.

## 2 Converting Data Forms

We'll again use the following preset data:

```r
tm_dat <- qdap_dat <- DATA[1:4, c(1, 4)]
rownames(tm_dat) <- paste("docs", 1:nrow(tm_dat))
tm_dat <- Corpus(DataframeSource(tm_dat[, 2, drop = FALSE]))

qdap_wfm <- with(qdap_dat, wfm(state, person))
tm_tdm <- TermDocumentMatrix(tm_dat, control = list(
    removePunctuation = TRUE,
    wordLengths = c(0, Inf)
))
```

1. qdap_dat is a **qdap** raw text form
2. tm_dat is a **tm** raw text form
3. qdap_wfm is a **qdap** word frequency count
4. tm_tdm is a **tm** word frequency count

The reader is encouraged to view each of the data formats:

```r
qdap_dat; qview(qdap_dat)
tm_dat; inspect(tm_dat)
qdap_wfm; summary(qdap_wfm)
tm_tdm; inspect(tm_tdm)
```

### 2.1 Corpus to data.frame

To move from a Corpus to a data.frame the `as.data.frame` function is used as follows:

```r
as.data.frame(tm_dat)
```

```
##   docs                          text
## 1    1 Computer is fun. Not too fun.
## 2    2       No it's not, it's dumb.
## 3    3            What should we do?
## 4    4          You liar, it stinks!
```

### 2.2 data.frame to Corpus

To move from a data.frame to a Corpus the `as.Corpus` function is used as follows:

```r
with(qdap_dat, as.Corpus(state, person))
```

Note that this yields 3 text documents, one for each level of the grouping variable. To get one document per row use:

```r
with(qdap_dat, as.Corpus(state, id(person)))
```

### 2.3 TermDocumentMatrix/DocumentTermMatrix to wfm

To move from a TermDocumentMatrix to a wfm the `as.wfm` function is used as follows:

```r
as.wfm(tm_tdm)
```

### 2.4 wfm to TermDocumentMatrix/DocumentTermMatrix

To move from a wfm to a TermDocumentMatrix or DocumentTermMatrix the `as.tdm` and `as.dtm` functions can be used as follows:

```r
as.tdm(qdap_wfm)
as.dtm(qdap_wfm)
```

```
## <<TermDocumentMatrix (terms: 16, documents: 3)>>
## Non-/sparse entries: 17/31
## Sparsity           : 65%
## Maximal term length: 8
## Weighting          : term frequency (tf)

## <<DocumentTermMatrix (documents: 3, terms: 16)>>
## Non-/sparse entries: 17/31
## Sparsity           : 65%
## Maximal term length: 8
## Weighting          : term frequency (tf)
```

### 2.5 Corpus to wfm

One can also move directly from a tm Corpus to a qdap wfm with the `as.wfm` function.

```r
as.wfm(tm_dat)
```

```
##          1 2 3 4
## computer 1 0 0 0
## do       0 0 1 0
## dumb     0 1 0 0
## fun      2 0 0 0
## is       1 0 0 0
## it       0 0 0 1
## it's     0 2 0 0
## liar     0 0 0 1
## no       0 1 0 0
## not      1 1 0 0
## should   0 0 1 0
## stinks   0 0 0 1
## too      1 0 0 0
## we       0 0 1 0
## what     0 0 1 0
## you      0 0 0 1
```
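As a quick sanity check on these conversions (our own snippet, assuming the `qdap_wfm` object defined at the start of this section), a wfm sent through `as.tdm` and brought back with `as.wfm` should carry the same counts; comparing the underlying matrices avoids differences in classes and attributes.

```r
## Round trip: wfm -> TermDocumentMatrix -> wfm.
back <- as.wfm(as.tdm(qdap_wfm))

## Compare the underlying count matrices (classes and attributes may
## differ, so coerce both sides and align the rows first).
all.equal(
    as.matrix(back)[order(rownames(back)), ],
    as.matrix(qdap_wfm)[order(rownames(qdap_wfm)), ]
)
```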
## 3 Stemming, Stopwords, and Choosing n-Character Words/Terms from a wfm

Many of the qdap and tm functions have means of stemming, removing stopwords, and bounding, that is, filtering terms to those meeting minimum/maximum criteria. qdap also offers two external functions to address these issues directly.

### 3.1 Stemming

qdap takes the approach that the user stems the dataframe upon creation (using `sentSplit(..., stem = TRUE)`) or afterwards (using the `stem2df` function), maintaining columns of stemmed and unstemmed text for various analyses.

```r
sentSplit(qdap_dat, "state", stem = TRUE)
```

<table>
<thead>
<tr><th>#</th><th>person</th><th>tot</th><th>state</th><th>stem.text</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>sam</td><td>1.1</td><td>Computer is fun.</td><td>Comput is fun.</td></tr>
<tr><td>2</td><td>sam</td><td>1.2</td><td>Not too fun.</td><td>Not too fun.</td></tr>
<tr><td>3</td><td>greg</td><td>2.1</td><td>No it's not, it's dumb.</td><td>No it not it dumb.</td></tr>
<tr><td>4</td><td>teacher</td><td>3.1</td><td>What should we do?</td><td>What should we do?</td></tr>
<tr><td>5</td><td>sam</td><td>4.1</td><td>You liar, it stinks!</td><td>You liar it stink!</td></tr>
</tbody>
</table>

### 3.2 Filtering: Stopwords and Bounding

qdap's `Filter` function allows the user to remove stopwords and bound a Word Frequency Matrix (wfm). First we'll construct a minimal Word Frequency Matrix:

```r
qdap_wfm <- with(qdap_dat, wfm(state, person))
```

```
##          greg sam teacher
## computer    0   1       0
## do          0   0       1
## dumb        1   0       0
## fun         0   2       0
## is          0   1       0
## it          0   1       0
## it's        2   0       0
## liar        0   1       0
## no          1   0       0
## not         1   1       0
## should      0   0       1
## stinks      0   1       0
## too         0   1       0
## we          0   0       1
## what        0   0       1
## you         0   1       0
```

Now we'll move through a series of examples demonstrating the usage of `Filter` on a wfm object (here min and max bound the number of characters in the retained words).

```r
Filter(qdap_wfm, min = 5)
```

```
##          greg sam teacher
## computer    0   1       0
## should      0   0       1
## stinks      0   1       0
```

```r
Filter(qdap_wfm, min = 5, max = 7)
```

```
##        greg sam teacher
## should    0   0       1
## stinks    0   1       0
```

```r
Filter(qdap_wfm, 4, 4)
```

```
##      greg sam teacher
## dumb    1   0       0
## it's    2   0       0
## liar    0   1       0
## what    0   0       1
```

## 4 Apply Functions Intended for TermDocumentMatrix to wfm Object

At times it is convenient to apply a function intended for a tm TermDocumentMatrix or DocumentTermMatrix directly to a qdap wfm object. qdap's `apply_as_tm` function enables these functions to be used directly on a wfm.

### 4.1 A Minimal wfm Object

Let us begin with a slightly larger minimal wfm example:

```r
a <- with(DATA, wfm(state, list(sex, adult)))
summary(a)
```

```
## <<A word-frequency matrix (41 terms, 4 groups)>>
##
## Non-/sparse entries       : 45/119
## Sparsity                  : 73%
## Maximal term length       : 8
## Less than four characters : 49%
## Hapax legomenon           : 32(78%)
## Dis legomenon             : 7(17%)
## Shannon's diversity index : 3.62
```

### 4.2 A Small Demonstration

Here we will use the tm package's `removeSparseTerms` to remove sparse terms from a wfm object and return a Word Frequency Matrix object (wfm class).
```r
out <- apply_as_tm(a, tm::removeSparseTerms, sparse = 0.6)
summary(out)
```

```
## <<A word-frequency matrix (3 terms, 4 groups)>>
##
## Non-/sparse entries       : 7/5
## Sparsity                  : 42%
## Maximal term length       : 4
## Less than four characters : 67%
## Hapax legomenon           : 0(0%)
## Dis legomenon             : 1(33%)
## Shannon's diversity index : 1.06
```

Note that the result is still a wfm:

```r
class(out)
## [1] "wfm"         "true.matrix" "matrix"
```

### 4.3 Further Examples to Try

Here are some further examples to try:

```r
apply_as_tm(a, tm::findAssocs, "computer", .8)
apply_as_tm(a, tm::findFreqTerms, 2, 3)
apply_as_tm(a, tm::Zipf_plot)
apply_as_tm(a, tm::Heaps_plot)
apply_as_tm(a, tm:::plot.TermDocumentMatrix, corThreshold = 0.4)

library(proxy)
apply_as_tm(a, tm::weightBin)
apply_as_tm(a, tm::weightBin, to.qdap = FALSE)
apply_as_tm(a, tm::weightSMART)
apply_as_tm(a, tm::weightTfIdf)
```

## 5 Apply Functions Intended for qdap Dataframes to tm Corpus

While the tm package (and other packages used on tm objects) tends to conduct analysis by feeding functions a TermDocumentMatrix or DocumentTermMatrix, qdap generally feeds functions raw text directly. There are advantages to both approaches (e.g., the matrix is a mathematical structure while raw text maintains word order). Many qdap functions can be used on the Corpus structure via the `apply_as_df` function.

### 5.1 A Small Demonstration

Here we will use the qdap package's `trans_cloud` function, on our minimal tm Corpus, to produce a word cloud with particular words highlighted:

```r
matches <- list(
    good = "fun",
    bad = c("dumb", "stinks", "liar")
)

apply_as_df(tm_dat, trans_cloud, grouping.var = NULL,
    target.words = matches,
    cloud.colors = c("red", "blue", "grey75"))
```

### 5.2 Further Examples to Try

Here are some further examples to try:

```r
library(tm)
reut21578 <- system.file("texts", "crude", package = "tm")
reuters <- Corpus(DirSource(reut21578),
    readerControl = list(reader = readReut21578XML))

apply_as_df(reuters, word_stats)
```

## 6 Conclusion

This vignette described the various data formats for the qdap and tm packages. It also demonstrated some of the basic functionality of the qdap functions designed to navigate between the two packages.

For more information on the tm package (Feinerer et al., 2008) use:

```r
browseVignettes(package = "tm")
```

Likewise, the user may view additional information about the qdap package (Rinker, 2013):

```r
browseVignettes(package = "qdap")
```

## Acknowledgments

qdap relies heavily on the tm package. The tm package has extended text analysis to the R platform. Thank you to Ingo Feinerer and Kurt Hornik for their work on this and many other R packages.

This document was produced with knitr (Xie, 2013). Thank you to Yihui Xie for the knitr package and his many other contributions to the R community.

## References

Feinerer I, Hornik K (2014). *tm: Text Mining Package*. R package version 0.5-10, URL http://CRAN.R-project.org/package=tm.

Feinerer I, Hornik K, Meyer D (2008). "Text Mining Infrastructure in R." *Journal of Statistical Software*, 25(5), 1-54.

Rinker TW (2013). *qdap: Quantitative Discourse Analysis Package*. University at Buffalo, Buffalo, New York. URL http://trinker.github.io/qdap/.

Xie Y (2013). *knitr: A General-Purpose Package for Dynamic Report Generation in R*. URL http://CRAN.R-project.org/package=knitr.
{"Source-Url": "http://trinker.github.io/qdap/vignettes/tm_package_compatibility.pdf", "len_cl100k_base": 5367, "olmocr-version": "0.1.50", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 36113, "total-output-tokens": 6248, "length": "2e12", "weborganizer": {"__label__adult": 0.0003819465637207031, "__label__art_design": 0.000820159912109375, "__label__crime_law": 0.0006046295166015625, "__label__education_jobs": 0.003692626953125, "__label__entertainment": 0.00020015239715576172, "__label__fashion_beauty": 0.00017559528350830078, "__label__finance_business": 0.0006551742553710938, "__label__food_dining": 0.00033783912658691406, "__label__games": 0.000579833984375, "__label__hardware": 0.0007686614990234375, "__label__health": 0.0004398822784423828, "__label__history": 0.0004839897155761719, "__label__home_hobbies": 0.00017178058624267578, "__label__industrial": 0.0006361007690429688, "__label__literature": 0.0008664131164550781, "__label__politics": 0.0005011558532714844, "__label__religion": 0.0004780292510986328, "__label__science_tech": 0.1151123046875, "__label__social_life": 0.00039887428283691406, "__label__software": 0.1748046875, "__label__software_dev": 0.69677734375, "__label__sports_fitness": 0.00028252601623535156, "__label__transportation": 0.0003082752227783203, "__label__travel": 0.0002579689025878906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16761, 0.06971]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16761, 0.56617]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16761, 0.73136]], "google_gemma-3-12b-it_contains_pii": [[0, 970, false], [970, 2196, null], [2196, 3364, null], [3364, 3833, null], [3833, 4747, null], [4747, 5359, null], [5359, 6175, null], [6175, 7687, null], [7687, 8514, null], [8514, 9045, null], [9045, 9816, null], [9816, 11388, null], [11388, 11952, null], [11952, 12291, null], [12291, 13259, null], [13259, 14559, null], [14559, 14983, null], [14983, 15194, null], [15194, 16761, null]], "google_gemma-3-12b-it_is_public_document": [[0, 970, true], [970, 2196, null], [2196, 3364, null], [3364, 3833, null], [3833, 4747, null], [4747, 5359, null], [5359, 6175, null], [6175, 7687, null], [7687, 8514, null], [8514, 9045, null], [9045, 9816, null], [9816, 11388, null], [11388, 11952, null], [11952, 12291, null], [12291, 13259, null], [13259, 14559, null], [14559, 14983, null], [14983, 15194, null], [15194, 16761, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, true], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16761, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16761, null]], "pdf_page_numbers": [[0, 970, 1], [970, 2196, 2], [2196, 3364, 3], [3364, 3833, 4], [3833, 4747, 5], 
[4747, 5359, 6], [5359, 6175, 7], [6175, 7687, 8], [7687, 8514, 9], [8514, 9045, 10], [9045, 9816, 11], [9816, 11388, 12], [11388, 11952, 13], [11952, 12291, 14], [12291, 13259, 15], [13259, 14559, 16], [14559, 14983, 17], [14983, 15194, 18], [15194, 16761, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16761, 0.06083]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
e92fe7ffe8538ae555499aa6c1d4466d54007812
POSIX threads parallelization for example of Particle-In-Cell density calculations in plasma computer simulations

Anna Sasak*, Marcin Brzuszek

Institute of Computer Science, Maria Curie Skłodowska University, pl. M. Curie-Skłodowskiej 1, 20-031 Lublin, Poland.

Abstract – The TRQR program [1–4] simulates trajectories of charged particles (electrons or ions) in the electromagnetic field. TRQR is based on the Particle-In-Cell method, whose basic guideline is the use of computational particles (called macro-particles) that represent a large number of real particles of the same kind moving in the same direction. The program calculates the particle charge density distribution and the potential distribution for chosen ion sources, analyses particle behaviour in the electromagnetic field, and describes the process of extraction of beams from the source. A number of factors influence simulation results. In order to improve efficiency the program has been parallelized. This paper presents the process of converting chosen parts of the TRQR program into a multi-threaded version. In the first step the program was moved from Fortran 77 to C++. Then it was parallelized using the Pthread library with the standard API for C++ contained in the POSIX IEEE 1003.1c standard. Each thread has its own stack, set of registers, program counter, individual data, local variables and state information. All threads of a particular process share one address space, general signal operations, virtual memory, data, input and output. The mutex functions were used as a synchronization mechanism. This paper presents the analysis of a particular piece of the main program that implements computations of the particle density distribution. The paper presents execution time dependencies for different simulation parameters such as: the number of macro-particles, the size of the simulation mesh and the number of threads used.

1 Introduction

Due to the complexity of physical processes, computer simulations of plasma behaviour in ion sources are still a great challenge for programmers. One of the methods of computing the trajectories of charged particles in the electromagnetic field is the Particle-In-Cell (PiC) method. In the PiC method a large number of particles such as ions or electrons in a plasma or beam is represented by a smaller, numerically tractable number of so-called 'macro-particles'. Each macro-particle behaves like a single particle of a certain kind, but carries a charge large enough to represent all real particles.

This paper presents the results of migrating one piece of the TRQR program to parallel mode. First, the program was moved from Fortran 77 to C++ and then parallelized using the Pthread library. The paper presents the results of simulations for different parameters, such as the number of threads used, the number of macro-particles and the mesh size.

2 TRQR - principle of operation

The TRQR program was developed in order to study plasma behaviour as well as the process of extraction and formation of the ion beams emitted from plasma ion sources. The method implemented for computer simulation consists of the following steps:

1) Setting the system's geometry (such as the number of particles etc.) and generating the initial distribution for all kinds of particles.
2) Calculating the particle density distributions for chosen ion sources using the PiC method.
3) Solving the Poisson equation for the charge density obtained in the previous step and the boundary conditions imposed by the electrodes.
4) Calculating the electrical field in the grid points.
5) Solving the Lorentz equations of motion for each particle.
6) Generating new particles if needed due to hits on electrodes and plasma chamber walls.

This procedure (steps 2 to 6) continues until a final state is achieved [3]. The special subject of interest for this paper is the Particle-in-Cell (PiC) method on which the second step of the simulation is based. In the PiC method a large number of particles such as ions or electrons in a plasma or beam is represented by a smaller, numerically tractable number of so-called 'macro-particles'. Each macro-particle behaves like a single particle of a certain kind, but carries a charge large enough to represent all real particles. The simulation space is divided into small regions creating a spatial mesh. The method weights particles to grid points using a particle shape factor to obtain the charge on the grid. This distribution process is carried out with one of two possible schemes. The first method, called nearest grid point (NGP), assigns the macro-particle charge to the grid point that is nearest to the particle's position. In the second one, called cloud-in-cell (CiC), fractions of the macro-particle charge are assigned to the 8 nearest grid points of the mesh (in the case of 3D calculations). An even better charge distribution is obtained if, in the CiC method, the macro-particle charge is distributed among the 27 nearest grid points [4].

3 POSIX threads API

In architectures with shared memory, threads can be used to implement parallelism. For Unix systems, a standardized C language threads programming interface has been specified by the IEEE POSIX 1003.1c standard. This POSIX standard from 1995 is included in the Unix system distributions. Technically, a thread is defined as an independent stream of instructions that can be scheduled to run as such by the operating system. The comparison between threads and processes is presented in Table 1. What needs to be emphasized is that in the case of threads, reading and writing to the same memory locations is possible, and therefore requires explicit synchronization by the programmer.

The subroutines which comprise the Pthreads API can be informally grouped into three major classes (included in the Pthreads library):

1. Thread management - the group of functions that work directly on threads: creating, detaching, joining, etc. Here are also included the functions that set thread attributes.
2. Mutexes (abbreviation for 'mutual exclusion') - the functions that deal with synchronization. The mutex functions provide for creating, destroying, locking and unlocking mutexes, and also for setting or modifying mutex attributes.
3. Condition variables - the functions that address communication between threads that share a mutex. They are based upon programmer-specified conditions. This class includes the functions to create, destroy, wait and signal based upon specified variable values.

In this paper condition variables are only mentioned, without further analysis, as they were not used in the Pthread parallelization presented here.

Table 1. Process and thread features comparison.
<table>
<thead>
<tr><th>PROCESS</th><th>THREAD</th></tr>
</thead>
<tbody>
<tr>
<td>
<ul>
<li>Created by the operating system</li>
<li>Requires a fair amount of overhead</li>
<li>Contains information about program resources and program execution state:
<ul>
<li>process, process group, user and group IDs,</li>
<li>environment,</li>
<li>working directory,</li>
<li>program instructions,</li>
<li>registers,</li>
<li>stack,</li>
<li>heap,</li>
<li>file descriptors,</li>
<li>signal actions,</li>
<li>shared libraries,</li>
<li>inter-process communication tools</li>
</ul></li>
</ul>
</td>
<td>
<ul>
<li>Use and exist within the process-creator's resources</li>
<li>Duplicate only the bare essential resources that enable them to exist as executable code</li>
<li>Share with other threads in the same process:
<ul>
<li>global and static variables,</li>
<li>heap and dynamic variables (two pointers having the same value point to the same data),</li>
<li>operating system resources (files),</li>
<li>process instructions.</li>
</ul></li>
<li>Each thread has a unique:
<ul>
<li>set of registers and stack pointer,</li>
<li>stack for local variables,</li>
<li>automatic variables,</li>
<li>priority,</li>
<li>thread ID.</li>
</ul></li>
</ul>
</td>
</tr>
</tbody>
</table>

4 Thread creation

Initially, `main()` comprises a single thread. All other threads must be created explicitly by the programmer. Once created, threads are peers and may create other threads. There is no implied hierarchy or dependency between them.

A new thread is created by calling the `int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg)` subroutine. The arguments of this function, in order of appearance, stand for: the unique identifier for the new thread returned by the subroutine, an attribute object that may be used to set thread attributes, the C routine that the thread will execute once it is created, and a single argument that may be passed to `start_routine`. An attribute parameter set to NULL means that default attributes are used; otherwise it defines members of the struct `pthread_attr_t`, which include: detached state, scheduling policy, stack address and size, etc.

As mentioned before, the `pthread_create()` routine permits the programmer to pass only one argument to the thread start routine. To overcome this limitation, a structure should be created which contains all of the arguments to be passed; then just a pointer to that structure is passed to `pthread_create()`. Below is a fragment of code which creates NTH threads with a default set of parameters; each will execute the routine `thread_func_dens` with the parameters from the corresponding cell of the array `tab_th_data`.

```cpp
struct th_data {
    long idoms;   // starting cell of the global density matrix
    long idome;   // ending cell of the global density matrix
    long NNion;   // number of ions per thread
};

pthread_t th_ids[NTH];       // array that contains the thread ids
th_data tab_th_data[NTH];    // array of thread-specific data, passed as a
                             // structure pointer to the executed routine

void *thread_func_dens(void *ptr)
{
    ...
    pthread_exit(NULL);
}

int main(...)
{
    ...
    for (int w = 0; w < NTH; w++)
        pthread_create(&th_ids[w], NULL, thread_func_dens,
                       (void *)&tab_th_data[w]);
    ...
}
```
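The fragment above compiles only in the context of the full program. For orientation, here is a minimal self-contained version of the same pattern (our example, with a toy per-thread sum standing in for the density computation; the joins used at the end are discussed in the next section):

```c
/* Minimal, self-contained version of the pattern above: NTH threads,
 * each given its own parameter struct, each writing only to its own slot. */
#include <pthread.h>
#include <stdio.h>

#define NTH 4
#define NPART 1000000L

struct th_data {
    long start, end;    /* this thread's range of (toy) particles */
    double result;      /* partial result, private to this thread */
};

static void *thread_func(void *ptr)
{
    struct th_data *d = (struct th_data *)ptr;
    double s = 0.0;
    for (long i = d->start; i < d->end; i++)
        s += 1.0;                     /* stands in for the density work */
    d->result = s;
    return NULL;
}

int main(void)
{
    pthread_t ids[NTH];
    struct th_data data[NTH];
    long chunk = NPART / NTH;
    double total = 0.0;

    for (int w = 0; w < NTH; w++) {
        data[w].start = w * chunk;
        data[w].end = (w == NTH - 1) ? NPART : (w + 1) * chunk;
        pthread_create(&ids[w], NULL, thread_func, &data[w]);
    }
    for (int w = 0; w < NTH; w++) {   /* one join per created thread */
        pthread_join(ids[w], NULL);
        total += data[w].result;
    }
    printf("total = %.0f\n", total);  /* prints 1000000 */
    return 0;
}
```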
5 Threads synchronization and termination

There are several ways in which a thread may be terminated. The most common is either when the thread returns from its starting routine or when the thread makes a call to the `pthread_exit()` subroutine. Typically, the `pthread_exit()` routine is called after a thread has completed its work and is no longer required to exist. If `main()` finishes before the threads it has created and exits with `pthread_exit()`, the other threads will continue to execute; otherwise, they will be automatically terminated when `main()` finishes. The programmer may optionally specify a termination status, which is stored as a void pointer for any thread that may join the calling thread.

One way to accomplish synchronization between threads is so-called 'joining'. The `int pthread_join(pthread_t th, void **thread_return)` subroutine blocks the calling thread until the thread specified by the `th` argument terminates. The programmer is able to obtain, via the second argument, the target thread's termination status; this is possible, though, only if it was explicitly specified in the target thread's call to the `pthread_exit` routine. A joining thread can match only one `pthread_join()` call; it is a logical error to attempt multiple joins on the same thread. Fig. 2 presents the flow of a program which, after creating two worker threads, waits for them to exit and then resumes its execution.

Fig. 2. Threads synchronization.

The fragment of the main function that stops program execution until all created threads exit would have the following form:

```c
int main(...)
{
    ...
    for (int ii = 0; ii < NTH; ii++)
        pthread_join(th_ids[ii], NULL);  // execute as many pthread_join calls
                                         // as pthread_create calls were
                                         // executed before
    ...
}
```

6 Mutual exclusion

Mutex variables are one of the primary means of implementing thread synchronization and of protecting shared data when multiple writes occur. A mutex variable acts as a 'lock' or semaphore protecting access to a shared data resource - a critical section. Under the basic mutex concept, only one thread can own - which means lock - a mutex variable at any given time. Thus, even if several threads try to lock a certain mutex, only one of them will succeed, booking access to the protected resource for itself. The shared data resource becomes available again only when the mutex owner unlocks that mutex. This is a safe way to ensure that when several threads update the same variable, the final value is the same as it would be if only one thread performed the update.

The typical sequence of steps in the use of a mutex is as follows:

1. a mutex variable is created and initialized,
2. several threads attempt to lock the mutex,
3. only one of them succeeds and that thread owns the mutex,
4. the owner thread performs a set of actions,
5. the owner unlocks the mutex,
6. another thread acquires the mutex and repeats the process,
7. finally the mutex is destroyed.

The mutex variable must be declared with the type `pthread_mutex_t` and initialized before it can be used. Initialization can take two forms:

1) static, with the instruction `pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER`,
2) dynamic, with the `int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr)` routine.

Initially the mutex is unlocked. To establish properties different from the defaults (specified as NULL), the second argument of the `pthread_mutex_init` routine should be used. A mutex that is no longer needed should be released with the `pthread_mutex_destroy(pthread_mutex_t *mutex)` routine.

Three standard routines are used to manage mutex access. The `pthread_mutex_lock(pthread_mutex_t *mutex)` routine is used to acquire a lock on the specified mutex variable; if the mutex is already locked by another thread, this call will block the calling thread until the mutex is unlocked. The `pthread_mutex_trylock(pthread_mutex_t *mutex)` routine will attempt to lock a mutex; however, if the mutex is already locked, the routine will return with a 'busy' error code. The `pthread_mutex_unlock(pthread_mutex_t *mutex)` routine will unlock a mutex if called by the owning thread. An error will be returned if the mutex has already been unlocked or if the mutex is owned by another thread [5].
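The life cycle above is easy to see in a minimal example (ours, not from TRQR): two threads repeatedly lock the mutex, update a shared counter inside the critical section, and unlock it, so the final value is the same as in a sequential run.

```c
/* Steps 1-7 above in miniature: init, competing locks, owned critical
 * sections, unlock, and finally destroy.  Build with -lpthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                                    /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;    /* static init */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);        /* at most one owner at a time */
        counter++;                        /* protected read-modify-write */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&lock);
    printf("counter = %ld\n", counter);   /* always 2000000 */
    return 0;
}
```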
The following example presents the way mutexes were used in our simulation.

```cpp
pthread_mutex_t ***tab_mutex;
...
// one mutex per cell of the spatial mesh
for (int x = 1; x <= Nxx; x++)
    for (int y = 1; y <= Nyy; y++)
        for (int z = 1; z <= Nzz; z++) {
            int res = pthread_mutex_init(&tab_mutex[x][y][z], NULL);
        }
...
// creating threads with the pthread_create routine
...
// a piece of code somewhere in the thread start routine
int err = pthread_mutex_lock(&tab_mutex[Nx][Ny][Nz]);
density_q[Nx][Ny][Nz][kj] += is;
int err2 = pthread_mutex_unlock(&tab_mutex[Nx][Ny][Nz]);
...
for (int x = 1; x <= Nxx; x++)
    for (int y = 1; y <= Nyy; y++)
        for (int z = 1; z <= Nzz; z++) {
            int res = pthread_mutex_destroy(&tab_mutex[x][y][z]);
        }
```

7 Parallel mode calculations

The environment for the simulations was a machine with two quad-core Intel Xeon processors (8 cores in total) and 16 GB of RAM, running the Mandriva operating system with the gcc 4.1.2 compiler. In the first step the program was moved from Fortran 77 to C++. Then it was parallelized using the Pthread library with the standard API for C++ contained in the POSIX IEEE 1003.1c standard.

During the simulation process the measure that was analysed was the simulation time. It is a formal but very relative measure, as the cost of creating a parallel version may sometimes not be justified by the gained reduction in simulation time. The second performance criterion adopted for the plasma density thread parallelization is speedup, described by the formula
\[
S(p) = \frac{T(1)}{T(p)},
\]
where \( p \) stands for the number of threads, and \( T(1) \) and \( T(p) \) are the simulation times with one or \( p \) threads, respectively [6].

8 Results of simulations

As was shown in paper [7], using the simplest charge density distribution technique and a large number of macro-particles is the best solution as far as charge density calculations are concerned. For example, using NGP and 100 million macro-particles gives better results (i.e., more homogeneous distributions) in less time than using the CiC method and 20 million macro-particles. That is why all results presented in this paper are calculated for the NGP method, with different numbers of macro-particles, different sizes of the spatial mesh and different numbers of threads used in the parallelization process.

Fig. 3 presents the simulation time for the NGP method with different numbers of macro-particles and a mesh of size 100x100x100. The red line in each picture stands for the execution time of the sequential version of the algorithm. Analyzing these graphs one can conclude that using only two threads gives an execution time close to the sequential version, and that using eight threads, which equals the number of available processor cores, gives the best reduction of execution time.
Increasing the number of threads further (to nine and above) does not give any further reduction of execution time. As the graphs obtained for simulations with different numbers of macro-particles show similar results, Fig. 4 presents the speedup calculated only for one of them, the run with 200 million macro-particles. It confirms that a speedup close to 1 (which means close to the sequential execution time) is obtained for 2 threads and the highest speedup is gained for 8 threads.

Fig. 4. Speedup for the NGP parallel run, for 200 million macro-particles and a mesh of size 100x100x100.

In the next step the size of the mesh was changed to 50x50x50. Two simulations were done. The first used 200 million macro-particles (Fig. 5(a)). In the second one (Fig. 5(b)) the number of particles was changed proportionally to the change in mesh size, which gave approximately 25 million macro-particles. For both simulations speedup factors were calculated and are presented in Fig. 6(a) and 6(b), respectively.

Fig. 5. Time of charge density calculations versus the number of threads used for the parallel run, using the NGP method, a mesh of size 50x50x50 and different numbers of macro-particles: (a) 200 million, (b) 25 million.

Analyzing Figs. 5 and 6 it can be noticed that the maximum speedup gained with the parallelization dropped by about 40% compared to the previous simulation. Also, the number of threads required to reach an execution time close to the sequential one changed from 2 to 4.

Further tests were carried out for different sizes of mesh, from 200x200x200 down to 15x15x15. For each of them the parallel version was run with 200 million macro-particles and 8 threads executing the calculations; the red line again stands for the execution time of the sequential version of the algorithm. Fig. 7 shows that meshes of size 80x80x80 and bigger give quite a good reduction of execution time when parallelized, while in the case of meshes of size 40x40x40 and smaller, running the parallel version of the algorithm gives no benefit in execution time.

Final tests were carried out for an asymmetrical mesh of dimensions 128x64x128 and 100 million macro-particles. The aim of this test was to examine whether the geometry of the mesh has any influence on the algorithm's performance. Fig. 8 presents the results of that simulation: both the simulation time and the speedup. The setup of this simulation is similar to the one presented in Fig. 3(c). The results of both simulations are very close, which leads to the conclusion that only the number of cells influences the simulation time, whereas the mesh geometry has no influence on the POSIX thread parallelization performance.

Fig. 8. Time of charge density calculations (a) and speedup (b) for the parallel run, using the NGP method, a mesh of size 128x64x128 and 100 million macro-particles.

9 Conclusion

A direct advantage of program parallelization is a more effective use of the time assigned to the simulation process. This paper presented the POSIX Pthread library as one of the available methods of parallelization. So far, Pthread parallelization is implemented only for one part of the TRQR program, the charge density calculations, but it gives quite acceptable results, encouraging further research.

References
{"Source-Url": "http://journals.umcs.pl/ai/article/download/3282/2476", "len_cl100k_base": 4498, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 24365, "total-output-tokens": 5350, "length": "2e12", "weborganizer": {"__label__adult": 0.0004150867462158203, "__label__art_design": 0.00039887428283691406, "__label__crime_law": 0.0005259513854980469, "__label__education_jobs": 0.0009636878967285156, "__label__entertainment": 0.00010907649993896484, "__label__fashion_beauty": 0.00022804737091064453, "__label__finance_business": 0.0002338886260986328, "__label__food_dining": 0.0005655288696289062, "__label__games": 0.0008058547973632812, "__label__hardware": 0.004444122314453125, "__label__health": 0.0009589195251464844, "__label__history": 0.0004074573516845703, "__label__home_hobbies": 0.00018358230590820312, "__label__industrial": 0.0014410018920898438, "__label__literature": 0.00023686885833740232, "__label__politics": 0.0004448890686035156, "__label__religion": 0.0007138252258300781, "__label__science_tech": 0.322021484375, "__label__social_life": 0.00012969970703125, "__label__software": 0.01003265380859375, "__label__software_dev": 0.65283203125, "__label__sports_fitness": 0.0006194114685058594, "__label__transportation": 0.0009021759033203124, "__label__travel": 0.0002218484878540039}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22734, 0.01537]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22734, 0.64468]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22734, 0.89823]], "google_gemma-3-12b-it_contains_pii": [[0, 2093, false], [2093, 4941, null], [4941, 6639, null], [6639, 10345, null], [10345, 12423, null], [12423, 14153, null], [14153, 16182, null], [16182, 19056, null], [19056, 19845, null], [19845, 20874, null], [20874, 21034, null], [21034, 22734, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2093, true], [2093, 4941, null], [4941, 6639, null], [6639, 10345, null], [10345, 12423, null], [12423, 14153, null], [14153, 16182, null], [16182, 19056, null], [19056, 19845, null], [19845, 20874, null], [20874, 21034, null], [21034, 22734, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22734, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22734, null]], "pdf_page_numbers": [[0, 2093, 1], [2093, 4941, 2], [4941, 6639, 3], [6639, 10345, 4], [10345, 12423, 5], [12423, 14153, 6], [14153, 16182, 7], [16182, 19056, 8], [19056, 19845, 9], [19845, 20874, 10], [20874, 21034, 11], [21034, 22734, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22734, 0.10127]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
4703995f59276371d0506ae42cd4c11f3f557c1d
Automating the Evaluation of Usability Remotely for Web Applications via a Model-Based Approach

Nouzha Harrati †‡, Imed Bouchrika †, Abdelkamel Tari * and Ammar Ladjailia †‡
* Department of Computer Science, University of Bejaia, Algeria
† Faculty of Science and Technology, University of Souk Ahras, Algeria
‡ Department of Computer Science, University of Annaba, Algeria
n.harrati@univ-soukahras.dz

Abstract—Usability of software systems has emerged as an integral part of the continuous commercial success of IT companies. This is partly due to the vital need to satisfy customers' goals as systems become pervasive and ubiquitous in our daily life. In this research study, we have explored the use of task models to define how the user should interact with a given system. Based on empirical data collected from end users participating in the usability evaluation of a web application, data analysis is conducted to infer the degree of usability. This is carried out in compliance with the defined task model and with usability metrics describing efficiency of use. The proposed approach is a milestone towards automating usability evaluation, as most studies report manual methods for assessing the usability of software systems. Experimental results for the usability assessment of a website show the ability of the system to discover usability setbacks that can be addressed to improve the user experience.

Index Terms—remote testing, task modeling, usability evaluation.

I. INTRODUCTION

A positive user experience is of prime importance in software development, playing a vital role in the continuous commercial success of software companies. In fact, growth in the customer base and in loyalty is closely related to better product design. The usability of software products is a key characteristic in achieving user acceptance, regardless of users' background, experience or orientation. Usability is defined as the extent to which a product can be easily used by specified users to achieve certain goals with effectiveness, efficiency and satisfaction. In practice, the usability aspect of software products is marginalized during the classical stages of software development life-cycles, with effort and resources devoted instead to the software back-end to address the functional requirements. In fact, regardless of how neatly coded or sophisticated software is, recent studies of software sales report that software failures are often due to usability reasons, where the user simply does not know how to use the purchased product [1]. There is no doubt that usability is now recognized as an important software quality attribute, earning its place among more traditional attributes such as performance, robustness and security. The process of usability evaluation (UE) consists of methodologies for measuring the ease-of-use aspects of the user interface of a given software system and identifying specific problems. Usability evaluation plays a vital role within the overall user interface design process, which undergoes continuous and iterative cycles of design, prototyping and testing. Evaluating the usability of interactive systems is itself a process involving various activities, depending on the method utilized [2]. Empirical usability methods require the participation of end users, who are instructed to interact with the software system while their behavior and interaction with the system are recorded and observed by an expert.
Complementary results are obtained from the users through interviews and questionnaires, where they are asked for their opinions and concerns, in addition to possible suggestions on how to improve the interface design and its usability. In fact, one of the challenges in software development is to involve end users in the design and development stages, so as to observe and analyze their behavior and collect feedback in an effective and efficient manner. Alternatively, usability evaluation can be carried out through inspection methods, which aim to identify interaction problems within the interface or a prototype [12] without the involvement of end users. The interface is assessed by an expert or usability consultant for compliance with a set of predefined usability guidelines or a conventional set of heuristics [3]. Because of the dearth of approaches devoted to the automated evaluation of the usability of web applications, we explore in this research study the use of a task descriptor to define how the user should interact with a given system. Based on empirical data collected from end users participating in the usability evaluation of the system, data analysis is conducted to infer the usability level. This is carried out in compliance with the defined task model and with usability metrics describing efficiency of use. Experimental results for the usability inspection of a website show the ability of the system to discover usability issues that can be addressed to improve the user experience. The proposed approach is a milestone towards the automation of usability evaluation, as most studies are based on manual methods for assessing the usability of software systems. This paper is organized as follows. The next section outlines previous approaches to the automated usability evaluation of software systems. The theoretical description of the presented approach is given in Section 3. Section 4 shows the results attained for the usability evaluation using the proposed approach in a real case scenario.

II. RELATED WORK

Usability evaluation of web applications has received considerable attention since the advent of the web. This is partly due to the vital need to satisfy customers' goals as systems become pervasive in our daily life. Ivory and Hearst [2] presented a survey of tools for usability evaluation according to a taxonomy based on four dimensions: method class, method type, automation type and effort level. Ivory argued that the automation of usability evaluation would help to increase the coverage of testing as well as significantly reduce the costs and time of the evaluation process. Fernandez [3] surveyed recent studies related to usability evaluation, categorizing the different methods into broadly two main classes: empirical and inspection methods. However, the majority of the surveyed research studies are purely based on the manual or statistical analysis of recorded activity data from the participants. Automated testing of interactive systems enables the possibility of remote evaluation. Tullis [4] conducted a comparative experiment between remote and laboratory-based testing, emphasizing the advantages of remote evaluation in terms of costs and effectiveness. Paganelli [5] worked on developing a desktop-based application for recording and analysing interaction logs for website systems based on a predefined task model.
The activities to be performed on a website are specified using the notations of the ConcurTaskTrees environment [6], which provides a graphical representation of the hierarchical logical structure of the task model. Tiedtke [7] described a framework, implemented in Java and XML, for the automated usability evaluation of interactive websites, combining different techniques for data gathering and analysis. Their system uses a task-based approach and incorporates usability issues. Atterer [8] presented an implementation of UsaProxy, an application that provides website usage tracking using an HTTP proxy approach. Recently, Vasconcelos [9] implemented an automated system called USABILICS for remote evaluation based on an interface model. Tasks to be performed by a user are predefined using an intuitive approach that can be applied to larger web systems. The evaluation is based on matching a usage pattern performed by the user against one conducted by an expert of the system, providing a usability index for the probed application. Muhi [10] proposed a general framework for usability evaluation that can be tested in production systems. The framework takes as input an XML configuration file describing the positioning of the different interface elements of an application, whilst user activities are logged into a separate XML file. A validator module is deployed to check the log files according to semantic rules defined within the usability data model. Andrica et al. [11] presented WaRR, an automated tool that records and replays with high fidelity the interaction between users and modern web applications; because the recording functionality is embedded in the web browser, the tool has direct access to user keystrokes and clicks. There are a number of commercially available tools used for recording user traces for usability purposes. CrazyEgg logs mouse events with the ability to visualize activity maps of the most popular locations of clicks on a page. Web Criteria Site Profile is another tool used to assess simple attributes of usability, including page loading time and ease of finding content; it is based on automated agents browsing the website to retrieve data, making use of the GOMS model. Web TANGO is a software tool that employs Monte Carlo simulation and information retrieval methods to predict the user's behavior and navigation paths, based on data acquired from extensive experiments conducted on websites nominated as successful, having received higher user ratings.

Figure 1: Proposed Framework for Remote Usability Evaluation

III. PROPOSED APPROACH

The acceptability of interactive systems is usually based on their utility and usability. Utility refers to whether the application offers a service or functionality for the user to achieve some goal, while the usability factor concerns how easily and efficiently the task is performed to achieve such utility. In order to assess the usability of a given web application, the proposed system consists of three main phases: i) task modeling, ii) usage tracking, and iii) data analysis. An overview of the proposed approach is shown in Figure (1). During the first stage, a task model is laid out to describe how to interact with the system. The modeling, which is based on a newly proposed tree-based graphical notation, is usually performed by a usability expert. In the following phase, usage data is tracked and recorded from users, who are usually invited to test the system remotely.
Finally, automated analysis of the collected usage data is carried out to assess how well the users' data adheres to the defined task model. Based on usability metrics, the system can be trained to infer how usable the system is.

A. Task-Based Descriptor

Traditionally, usability is measured by monitoring the completion of certain goals or tasks provided by an interactive system. The satisfaction level can be evaluated by a usability expert monitoring the user's activities or by asking users to fill in questionnaires. However, automated verification and usability evaluation for a defined set of tasks has proven to be difficult, especially for web applications. This is partly due to the complex nature of web systems, which involve many interaction styles that can vary with different display hardware, in addition to a large number of UI components and events, rendering formal modeling of user behavior a challenging process. In this study, a fully automated system is proposed for formalizing user interaction with a given system, guided by a set of rules describing certain goals to be achieved by the end user. This is done by having an expert define a *task model* describing how the user should interact with the system. The task model is mainly utilized to capture all the interactions to be carried out by the user. User compliance with the defined model indicates that the system model, believed by the usability expert to represent optimal use, matches the user model; this can reflect better usability. There are several approaches and notations for defining a task model for usability evaluation, such as ConcurTaskTree (CTT) [6], Goals, Operators, Methods and Selection rules (GOMS) [13] and Hierarchical Task Analysis (HTA) [14]. The tree-based graphical representation introduced by Harms et al. [15] for creating a task model from collected usage traces is adopted as the basis throughout this research. As with the CTT notation, task models should offer designers only high-level details, so that they can focus on the overall interaction and flow of a user interface without being distracted by the low-level details of how the interface is presented on various platforms and styles of interaction. In this research study, we propose a tree-based graphical representation for defining a task model that describes the tasks, actions and goals to be performed by the user. The resulting task model tree represents all interactions a user can perform on a given software interface. Tasks can be combined to describe higher-level tasks. Using the tree-based visual notation, the task model is an ordered hierarchy of tasks and other elements to be performed in order to satisfy a specific goal. To enable automation at later stages, goals for actions should provide a way to infer automatically whether a task has been completed successfully, based on conditions and events. Consideration is given to the expressiveness of the visual notation, which defines the capability of the model to express user activities [16]. The proposed task modeler is implemented as an online application for usability experts to create a task model for evaluating the designed interface of their software systems. A task consists of actions to be performed to achieve a specific goal. It can be a basic task consisting mainly of simple actions, such as clicking a submit button, scrolling a page or typing text into a text field.
For each basic action, there should be a mapping to an event caused by performing the action. A task can also be a complex task composed of other subtasks and advanced control blocks, such as filling in a payment checkout form for an online shopping cart containing many widgets with a number of options and conditions to be verified. Various control blocks are employed to express the temporal relationship between a task's children, which determines the number and order in which the subtasks must be performed by the user to achieve a goal. Control blocks include sequence, iteration and choice. The notations used to visually describe the different modeling blocks are explained as follows:

- **Task**: refers to a complex or basic task to be performed by a user to achieve a goal. The syntax for creating a task is: Task : Goal Name
- **Sequence**: describes an aggregated set of tasks that must be performed by the user in the order in which they appear.
- **NoOrder**: as opposed to the Sequence clause, this defines that the subtasks can be executed regardless of the specified order.
- **Iteration**: refers to the case where the enclosed set of tasks must be executed by the user zero or more times, depending on the specified cardinality.
- **Choice**: specifies that the user must choose one task among a list of given tasks.
- **Success**: this control block is employed to deduce that a task has been completed successfully by the user. Criteria for inference include different checking conditions, from simple event triggers to advanced verification such as regular expression matching. The syntax for using the Success clause is: Success : KeyUp : Enter, to indicate that the parent task is performed successfully when the *enter* key is released after being pressed. Other event triggers include scroll, keyPress, mouseClick, mouseOver, etc. The following example illustrates the use of a regular expression with the pattern keyword to verify the validity of an email address: Success : pattern : [a-zA-Z0-9]+@[a-zA-Z]+\.[a-zA-Z]{2,4}
- **Action**: this is a leaf of the hierarchical task tree, referring to a simple event.
- **ElementName**: specifies the name of the HTML component that can be mapped to a given task, which the Success clause can use to verify completion of the event. Equivalent clauses include ElementID and ElementType.

The same operators defined in CTT are implemented within the proposed task modeling platform to add further flexibility and control to the defined task model, and, as in the CTT tool, the same icons are used within the graphical notation. One example is Task Enabling, which indicates that a task cannot be started until another task has been completed successfully. For better expressiveness, the cardinality of a task can be specified, indicating the possible repetition of a task, using the conventional syntax of UML modeling. For instance, \(1..*\) refers to at least one or more. The cardinality condition is placed at the left side of the task box. As opposed to the work described by Long [17], where something akin to a programming language is presented for task modeling, procedures can be created in this study to encapsulate certain business logic; however, the main focus is on simplicity and usability, avoiding the need to re-invent a full programming language that would require further training.
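Since Figure 2 below is reproduced only as a caption, the following hypothetical fragment illustrates how a login task might be written using the clauses just defined (this is our illustrative reconstruction, not the authors' exact model):

```
Task : Login
    Sequence
        Action : Enter username
            ElementName : username
            Success : pattern : [a-zA-Z0-9]+
        Action : Enter password
            ElementName : password
            Success : KeyUp : Enter
        Action : Click login button
            ElementName : login
            Success : mouseClick
```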
Figure (2) shows an example of a task model for a login page.

Figure 2: Tree-based Task Descriptor Example (image not reproduced)

The graphical diagram of the tree-based task model can be exported to XML format for portability, openness and interoperability reasons, as an initial step of a transformation process towards other models. The XML file can therefore be parsed to assist with the automation of the usability evaluation (UE). The Document Type Definition (DTD) of an XML document, describing its structure and format, is adopted for its simplicity. Listing (1) shows the structure of the generated XML document for the proposed task descriptor.

```
<!DOCTYPE Usability [
<!ELEMENT Usability (Name, Website, Task*) >
<!ELEMENT Name (#PCDATA) >
<!ELEMENT Website (#PCDATA) >
<!ELEMENT Task (Sequence*,NoOrder*,Choice*,Iteration*,Action*)>
<!ATTLIST Task URL CDATA #IMPLIED >
<!ATTLIST Task Description CDATA #IMPLIED >
<!ELEMENT Sequence (Task*,NoOrder*,Choice*,Iteration*,Action*)>
<!ELEMENT NoOrder (Task*,Sequence*,Choice*,Iteration*,Action*)>
<!ELEMENT Choice (Task*,Sequence*,NoOrder*,Iteration*,Action*)>
<!ELEMENT Iteration (Task*,Sequence*,NoOrder*,Choice*,Action*)>
<!ELEMENT Action (Success) >
<!ATTLIST Action name CDATA #REQUIRED >
<!ELEMENT Success (Trigger, (ElementName|ElementType|URL)?, CondArg*) >
<!ELEMENT Trigger (#PCDATA) >
<!ELEMENT ElementName (#PCDATA) >
<!ELEMENT ElementID (#PCDATA) >
<!ELEMENT ElementType (#PCDATA) >
<!ELEMENT Condition (#PCDATA) >
<!ELEMENT CondArg (#PCDATA) >
]>
```

Listing 1: DTD for the proposed task descriptor

**B. Usability Data Collection**

Because computer applications have not been designed with user modeling in mind [18], it becomes vital to gain access to the stream of user actions in order to get an insight into the user experience. Consequently, various research studies and software tools have been proposed to devise ways of extracting and analyzing useful usability information from user interfaces. For usability analysis, it is typical to automatically collect clicks, page views and visit durations in order to determine conversion rates and website traffic. For this research, a JavaScript program was implemented to log all user activities performed when browsing the website whose usability is being assessed, avoiding the need to install third-party software, such as a Java virtual machine, on the client machine. This is one of the merits of the approach in moving towards unintrusive automated testing of web applications. The proposed tool is integrated by appending a single line of JavaScript code to the web page, without requiring the website programmer to modify their existing application code. The appending can be done either on the server side or by using a custom browser plugin that automatically adds the script code to the browsed website. Once the web page is loaded, the JavaScript tool is invoked, registering event handlers that are called for all events of interest triggered by the user when interacting with the interface. These events include typing, cursor movement and mouse clicking. Recorded events are always associated with browser timestamps carrying date and time information, as timing is important for understanding the order of the events performed by the user. For typing and mouse-click events, identifying attributes of the HTML element that triggered the event are
recorded for every action to ease later matching between the task descriptor and user data. These attributes include the node name, id, name and type. The type attribute is used to distinguish between form components such as radio box, button and input text. An example of logged data for a mouse click is shown in Listing 2.

Listing 2: Example of logged data for a mouse click (not reproduced)

Cursor movements are recorded in order to measure the traveled distance of the mouse. For a continuous cursor motion, the starting and end points are recorded along with their timestamps in order to estimate the traveled Euclidean distance. The logged data is collected by the browser without interfering with any existing JavaScript code. Because the data is stored centrally on a remote server, it is encoded in JSON format and transmitted back at regular intervals to the server for permanent storage. The submission to the server is based on a buffer of a particular size, to avoid bandwidth bottleneck problems. A client session key is created for every user, with an expiry time of ten minutes; it is used to map received data to the respective user. The IP address is also recorded for geographical analysis, in case it is needed. The data is stored in a relational database so that it can easily be exported to other formats.

**C. Automated Analysis of Usability Data**

Despite longstanding research in data extraction and mining, there is a dearth of automated methods for usability evaluation based on user interaction traces. The described approach falls under the category of benchmark usability testing. A set of benchmark tasks is predefined within the task-based descriptor, which is created by a usability expert to describe the goals to be achieved by the user. The task model is thereafter compared with the collected usage data, which includes all user actions such as recorded mouse clicks and cursor movements. The matching process is based on well-defined, conventional metrics that reflect better usability. The metrics are chosen on the basis that they can be quantified automatically, without the cooperation of the participants. The usability metrics considered in this study include:

- **Time spent per task**: the total time taken by a user to achieve a particular task. This metric is usually used to measure the efficiency rate. Based on the task descriptor, task duration is approximated through a sequential search within the user traces for the point where the Success condition of the corresponding task is met.
- **Completion rate**: also called the success rate, considered one of the most fundamental usability metrics. The completion rate is typically measured as a binary value: task success (coded as 1) or task failure (coded as 0). Although it is possible to define criteria for partial task success, binary values are used for simplicity. This is estimated in the same way as the task duration.
- **Mouse clicks and movement**: measures the effort undertaken by the user, reflected in the hand movements used to move or click the mouse. In practice, a larger number of clicks or longer cursor distances are an indication of poor usability and a lower satisfaction level.
- **Errors**: unintended or failed actions made by a user while performing a task in pursuit of a specific goal. Errors are discovered automatically by searching the data log for non-compliance with the task descriptor.
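As a rough illustration of how such metrics can be derived mechanically from the log, consider the following minimal sketch (the `Event` structure and its field names are hypothetical simplifications of the logged attributes described above, not the authors' actual data model):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical, simplified view of one logged user event.
struct Event {
    std::string element;   // name of the HTML element that triggered the event
    std::string trigger;   // e.g. "mouseClick", "keyUp", "scroll"
    double      t;         // browser timestamp, in seconds
};

// Task duration = time from the first logged event to the event satisfying
// the task's Success condition; completion is the binary success value.
static void evaluate_task(const std::vector<Event>& log,
                          const std::string& successElement,
                          const std::string& successTrigger) {
    if (log.empty()) return;
    int clicks = 0;
    bool completed = false;
    double duration = 0.0;
    for (const Event& e : log) {
        if (e.trigger == "mouseClick") clicks++;            // effort metric
        if (e.element == successElement && e.trigger == successTrigger) {
            completed = true;
            duration  = e.t - log.front().t;                // time spent per task
            break;
        }
    }
    std::printf("completed=%d duration=%.1fs clicks=%d\n",
                completed, duration, clicks);
}

int main() {
    // Sample trace: the Success condition is a mouse click on "login".
    std::vector<Event> log = {
        {"username", "keyUp",      0.0},
        {"password", "keyUp",      4.2},
        {"login",    "mouseClick", 6.8},
    };
    evaluate_task(log, "login", "mouseClick");
    return 0;
}
```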
Together, these metrics provide good criteria for evaluating the usability of the interactive system and for inferring the correlation between the user and the task model. Recent studies [9] have proposed producing a usability index; however, we believe that producing a single number reflecting the degree of usability is a complex and intricate task that should involve many factors. Automated evaluation can instead be achieved through statistical analysis of the data, measuring the intra-correlation of the metrics estimated from collected data against optimal data produced by an expert. Statistical analysis can be sufficient to identify major usability problems, including cases where there is a high variance of metrics among users.

**IV. EXPERIMENTAL RESULTS**

In order to explore the effectiveness of the proposed approach for evaluating the usability of web applications, experiments were conducted using a newly developed website, with users invited to use the site remotely. The web application is an interactive online quiz containing questions related to the tourism sector of the City of Souk Ahras. The task descriptor contains 4 consecutive tasks. In the first task, the user is presented with a landing welcome page containing a button to start the quiz. Subsequently, participants are taken through 5 different questions, with a single question on each page and multiple-choice answers provided for each question. Thirdly, the user is taken to a page showing the score attained when answering the questions, with a button to continue the quiz. In the last task, a form anonymously asks the user for personal information such as gender, age range and their opinion on how easy the website is to use. The script for logging user activities is hosted on Amazon Cloud Services EC2 for faster access. For legal and privacy reasons, users are told in advance that their traces are recorded for the purpose of improving the user experience and analyzing website usability. During the usability evaluation process, 44 participants agreed to take part in the experiment. The rest of the users did not want to disclose their gender and age. When testing the application, users were not required to install any software apart from using their preferred browser. All actions and events performed by the users are recorded automatically and non-intrusively into the log data set. To assess usability, the discussed metrics are computed automatically by reading the task descriptor and the user traces. The metrics include number of clicks, duration and cursor distance, computed individually for every higher-level task.

Figure 3: Estimated Usability Metrics for the four Tasks

Figure (3) shows the summative results obtained from the derived metrics for the four tasks. The user data is estimated as the mean of the measurements derived automatically from all participants for the three dimensions shown: task duration, cursor distance and mouse clicks. The error bars on the user data correspond to the standard deviations of the measurements. It is observed that there is always a considerable gap between the expert's and the users' logged data, with the expert always having lower values than the average user. For task 2, there is a high variance among users in terms of time, in addition to a remarkable difference between the expert and the users; the same holds for task 4.
This can be indicative of a usability drawback of the designed interface at this phase of the application, which needs to be addressed. Conversely, the number of clicks appears consistent between the two parties in most cases.

V. CONCLUSIONS

Usability, which concerns the ease of use of interactive systems, is recognized as an important software quality attribute, earning its place among more traditional attributes. Because of the scarcity of methods devoted to the automated evaluation of the usability of web applications, this research study was carried out to demonstrate the use of a newly proposed task descriptor for automated remote evaluation. Empirical data recorded from end users participating in a case study shows that usability metrics can easily be derived and analyzed to infer further insights about the usability of interactive systems.

REFERENCES
{"Source-Url": "http://www.univ-soukahras.dz/eprints/2015-1-46009.pdf", "len_cl100k_base": 5414, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 20028, "total-output-tokens": 6885, "length": "2e12", "weborganizer": {"__label__adult": 0.00031495094299316406, "__label__art_design": 0.0008559226989746094, "__label__crime_law": 0.000293731689453125, "__label__education_jobs": 0.002696990966796875, "__label__entertainment": 9.328126907348631e-05, "__label__fashion_beauty": 0.00015056133270263672, "__label__finance_business": 0.0002605915069580078, "__label__food_dining": 0.0003457069396972656, "__label__games": 0.0008525848388671875, "__label__hardware": 0.0007948875427246094, "__label__health": 0.0005178451538085938, "__label__history": 0.0003025531768798828, "__label__home_hobbies": 8.45193862915039e-05, "__label__industrial": 0.00030303001403808594, "__label__literature": 0.00040793418884277344, "__label__politics": 0.000164031982421875, "__label__religion": 0.0003418922424316406, "__label__science_tech": 0.04449462890625, "__label__social_life": 9.006261825561523e-05, "__label__software": 0.0240478515625, "__label__software_dev": 0.921875, "__label__sports_fitness": 0.0002396106719970703, "__label__transportation": 0.0003921985626220703, "__label__travel": 0.0002123117446899414}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32499, 0.01201]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32499, 0.42558]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32499, 0.92594]], "google_gemma-3-12b-it_contains_pii": [[0, 5451, false], [5451, 10755, null], [10755, 17216, null], [17216, 21033, null], [21033, 26929, null], [26929, 32499, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5451, true], [5451, 10755, null], [10755, 17216, null], [17216, 21033, null], [21033, 26929, null], [26929, 32499, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32499, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32499, null]], "pdf_page_numbers": [[0, 5451, 1], [5451, 10755, 2], [10755, 17216, 3], [17216, 21033, 4], [21033, 26929, 5], [26929, 32499, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32499, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
9449ab0f080cd2696ac21914ee7522aa9cca533f
Tactive, a Framework for Cross Platform Development of Tabletop Applications

Ombretta Gaggi¹ and Marco Regazzo²
¹Department of Mathematics, University of Padua, via Trieste, 63, Padova, Italy
²Anyt1me S.r.L., via Siemens 19, Bolzano, Italy
gaggi@math.unipd.it, marco.regazzo@anyt1me.com

Keywords: Tabletop applications, touch interfaces, webkit engine.

Abstract: The number and types of applications developed for multi-touch tabletops have increased dramatically in recent years, mainly because interactive tabletops allow a more natural interaction with the user through their multi-touch interfaces. Although many applications share a large set of common features, e.g., gesture recognition, interface orientation, etc., almost all of them implement their own home-made software solutions. In this paper we present Tactive, a software layer for the fast development of portable applications for multi-touch interactive tabletops. Tactive makes it possible to abstract from the hardware and software equipment and to embed a web application into an application for multi-touch surfaces. Our framework supports gesture recognition with up to five fingers and communication between different windows, and allows more than 60% of development time to be saved.

1 INTRODUCTION

In the last few years, the market for multi-touch tables has been experiencing a situation very similar to that of the mobile applications market. The number of developed applications has increased dramatically: interactive tabletop surfaces are used to improve learning activities (Rick et al., 2011), inside museums (Geller, 2006), where the diversity of visitors creates a natural laboratory for testing this kind of interface, to help manage emergencies (Qin et al., 2012), and in many other collaborative activities such as photoware (Pedrosa et al., 2013). Consequently, the number of hardware solutions has also increased, each one requiring a particular SDK, programming language, etc. Like smartphones, interactive tabletops allow a more natural interaction with the user through their multi-touch interfaces (Forlines et al., 2007) and, unlike mobile devices, they allow more than one user to interact at the same time. Tabletops promote collaboration and social experiences, and can act as a meeting point. The possibility of interacting with multiple users at the same time requires an important set of new features in the user interface: e.g., the users are placed around the table, so they need different orientations of the interaction widgets, and they can collaborate using the same space and objects or compete for them. Therefore, all applications developed for this kind of interface have to face a common set of problems, e.g., the recognition and management of particular gestures and the orientation of the interactive widgets. Despite these common features, almost all applications implement them from scratch, since no shared software solution exists. In this paper we present our tabletop solution, which can be applied to surfaces of different sizes (42” or more) and can work on different operating systems. We have developed a software layer which is able to abstract from the hardware and software constraints of the device on which it is installed (screen size, operating system, etc.) and allows the developer to easily manage the common features of the user interface discussed above.
Our solution provides a Javascript API which allows a developer to build an entire tabletop application using only web technologies, in particular HTML5, CSS3 and Javascript, without needing to know anything about the underlying hardware and software. Moreover, our solution allows very easy reuse of web applications already developed for non-touch interfaces, and personalization of the final application. Finally, we provide our multi-touch tabletop with a strong interior design in its shape and materials (see Figure 1), while most of the hardware available on the market is little more than a “big iPad mounted on four legs”. The paper is organized as follows: Section 2 discusses related work and the need for a framework for the development of portable multi-touch applications. Section 3 presents Tactive, a software layer which provides a set of features and gestures to speed up the design of multi-touch interactive applications. A set of success stories about applications developed with the framework is discussed in Section 4. Finally, we conclude in Section 5.

2 RELATED WORK AND BACKGROUND

Many applications for interactive tabletops and surfaces are described in the literature. Correia et al. (Correia et al., 2010) described an application for a museum setting. A tabletop is used to enhance the user experience by presenting semantic information about all the artworks of the exhibition. The authors realized a tabletop setup based on the Frustrated Total Internal Reflection system. More than one user can interact with the tabletop at the same time in a collaborative way. The user interface is an ad-hoc application, built from scratch with the help of some open frameworks. uEmergency (Qin et al., 2012) is an emergency management system based on a very large multi-touch surface. Users can collaborate in a very intuitive way around a table which displays the available information on an ongoing emergency. It allows people to carry out face-to-face communication based on a horizontal map, and users can also analyze the real-time situation with fingers or digital pens. A study shows that the use of interactive surfaces improves efficiency in decision making and collaboration when coping with an emergency. Pedrosa et al. (Pedrosa et al., 2013) used tabletops to explore home videos and photos. A set of 24 users evaluated an application which displays photos and videos on a horizontal touch surface to allow storytelling and random exploration. The authors show that, among collaborative tools, personal spaces within the tabletop were also useful for allowing independent navigation. The applications described so far have many common features: they are highly visual systems, mainly controlled by touches and some common gestures performed on the surface of the system, e.g., browsing a collection of items, selecting a particular item, accessing documents, and so on. All these applications can interact with several users at the same time, and each user requires a different orientation of the interface, according to his or her position. Despite many common requirements, the developers of all these applications need to implement the majority of these features from scratch, and the available frameworks provide only very low-level features. As a general remark, designers of multi-touch applications do not have a reference model for the user interface and interaction, and often rely on best practice and intuition rather than on a systematic development process (Wigdor et al., 2009).
For this reason, many works in the literature address the problem of designing user interfaces for interactive surfaces (Anthony et al., 2012; Hesselmann et al., 2011; Luyten et al., 2010; Nielsen et al., 2004; Seto et al., 2012; Urakami, 2012). Urakami (Urakami, 2012) has shown that the users' choice of gestures is affected by the size of the manipulated object, expertise, and the nature of the command (direct manipulation of objects vs. invocation of abstract functions); it is therefore essential to involve the user in the development of gesture vocabularies. The same approach is followed by Hesselmann et al., who proposed an iterative five-step process tailored to the development of applications for interactive tabletops and surfaces, called SCiVA (Surface Computing for Interactive Visual Applications). The key idea of SCiVA is to strongly involve the user in the design process to improve the usability of the final product (Hesselmann et al., 2011). Luyten et al. try to reach a consensus on a set of design patterns that aid the engineering of multi-touch interfaces and transcend the differences between available platforms (Luyten et al., 2010). Seto et al. investigate the problem of how to manage menu placement on multi-user surfaces (Seto et al., 2012). In particular, they focus on the discoverability of system menus on digital tabletops designed for public settings. This study presents a set of design recommendations to improve menu accessibility: e.g., discernible and recognizable interface elements, such as buttons, supported by the use of animation, can effectively attract and guide the discovery of menus. This analysis of the literature shows that some steps toward the definition of design patterns for the development of interactive multi-touch interfaces have been taken, but there are no ready-made off-the-shelf components for creating these interfaces; instead, each application builds its user interface from scratch. Native frameworks, like the Microsoft Surface 2.0 SDK and Runtime (Microsoft, 2013a), Windows Presentation Foundation with native touch recognition in Microsoft Windows 8 (Microsoft, 2013b) and the Smart Table SDK (SMART Technologies, 2013), help to develop multi-touch applications but require a particular hardware/software configuration. Our goal is the creation of a software layer, portable to any operating system and hardware solution, which provides this set of features and gestures to speed up the design of multi-touch interactive applications by avoiding re-inventing the wheel each time (Gaggi and Regazzo, 2013). Other solutions exist which address a similar problem. Glassomium (Toffanin, 2013) is a project based on web technologies which aims to port web applications to multi-touch surfaces. Even if the key idea is much the same, Glassomium supports only rotation, scaling and dragging (through a still unstable beta) and is not able to identify gestures which involve the whole hand. It can be considered a window manager, which recognizes user gestures and manages them, but it does not implement cross-window communication and therefore lacks a proper mechanism to change the user experience based on the interaction of other users. To the best of our knowledge this feature is implemented only by our solution.
GestureWorks (Ideum, 2013) and Arena (Unedged, 2013) are frameworks which provide generic, cross-platform functionalities, such as gesture recognition, to develop touch applications, but they are not able to manage more than one application launched at the same time, or multiple applications enclosed in different windows.

3 DESCRIPTION OF THE FRAMEWORK

In this section we discuss the design issues and the implementation details of the developed software layer, called Tactive. Tactive is a framework which speeds up the development of applications for multi-touch surfaces. This goal is reached because:
- Tactive provides a way to encapsulate web applications into widgets suitable for multi-touch surfaces, so already developed web applications can easily be adapted to multi-touch interactive surfaces;
- Tactive abstracts from hardware/software details: an entire application can be developed using web technologies, so the developer does not need to know any particular language or technology bound to the particular hardware/software equipment; he or she only needs to know how to use the Javascript API provided by our framework;
- applications developed with Tactive are able to adapt themselves to different surface sizes (Tactive helps to realize so-called fluid applications); and
- Tactive provides a set of features common to multi-touch applications, such as window disposition, gesture recognition and interface orientation.

3.1 System Architecture

The architecture of our system is depicted in Figure 2. Tactive is organized in two levels. The lower one, called the O.S. Layer, guarantees independence from the underlying hardware: it contains the operating system (MS Windows 7, MS Windows 8 and Linux are supported) and a set of protocols and libraries to manage touch gestures if the chosen operating system does not support them natively. The Application Layer manages the applications, their windows and the interaction between the applications and the user, or between different applications. Tactive clearly separates the contents displayed to the users from the interaction widgets and the software components used to display those contents. For this reason, the architecture of our framework contains a content manager, called the Application Container, which manages how to display the contents, and a windows manager. The Application Container achieves the separation between contents and interaction widgets using WebViews, i.e., components that display web pages. A WebView makes it possible to embed HTML pages inside an application. This component uses the WebKit rendering engine to display web pages inside a window of the application, and includes methods to navigate forward and backward through a history, to zoom in and out, to perform text searches, etc. The Application Container is the underlying component that encapsulates all the functionalities needed to interact with the user and with the other components within the table, i.e., the Touch Manager, which provides gesture management and recognition, and the Window/Widget Manager, which provides the stack of visible objects (see Figure 2). It is also responsible for collecting and enumerating application-specific contents (e.g., images, videos, web pages or multimedia items) that are stored as web pages and rendered through the WebView. The framework supports both online and offline content/pages, but usually the second option (a local web server) is preferred, so that the application works and displays contents even in the absence of an Internet connection.
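Tactive's source code is not published; however, since qTUIO (discussed below) forwards events into Qt's event system, the WebView mechanism just described can be pictured with a minimal Qt 4/QtWebKit sketch along the following lines (the use of Qt here, and the content URL, are our assumptions):

```cpp
// Build sketch assuming Qt 4 with the QtWebKit module (QT += webkit in qmake).
#include <QApplication>
#include <QUrl>
#include <QWebView>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    // A WebView renders a web page through the WebKit engine; an Application
    // Container would wrap such a view inside a movable, rotatable window.
    QWebView view;
    view.load(QUrl("http://localhost/content/index.html"));  // local web server
    view.show();

    return app.exec();
}
```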
Widgets for the visualization of media items such as videos and images have been implemented using WebViews. Tactive has been designed to be extensible: an expert developer may create a new component by extending the widget component (or one of its subclasses), automatically taking advantage of all the features already implemented and described above.¹ Using our framework, content can be created by a web developer (who designs the structure) and updated by a content editor. The WebView mechanism is also used to develop hybrid applications for mobile devices, i.e., applications based on HTML5 which are wrapped with a WebKit engine and rendered as native mobile applications. PhoneGap (Apache Software Foundation, 2013), also known as Apache Cordova, is a framework for cross-platform mobile development that creates hybrid applications. Our approach is very similar: the idea is to exploit the portability of web technologies to achieve portability of applications for multi-touch interactive surfaces. Using a WebView, the developer only needs to specify which web page to render; content therefore has to be enclosed in web pages to be displayed to users. At this point, our framework allows the visualization of content in a window on the tabletop. Content can be arranged (and personalized, e.g., with a particular layout) using the standard CSS language, as happens for web sites. The interaction provided so far, however, is very poor: the user can touch the interface, but each touch is interpreted as the movement of a mouse pointer. No gestures such as pinch, rotation or drag are supported, only tap and double tap. Since people do not use their hands and fingers like a mouse pointer, the Touch Manager component is needed to manage concurrent touches and gestures from many users. This software component manages the portability of touch and gesture recognition and implements the TUIO protocol (Kaltenbrunner et al., 2013), which allows the transmission of an abstract description of interactive surfaces, including touch events and tangible object states. The protocol encodes control data from a tracker application (e.g., one based on computer vision) and sends it to any client application capable of decoding the protocol. Technically, TUIO is based on Open Sound Control (OSC), an emerging standard for interactive environments not limited to musical instrument control, and can therefore be easily implemented on any platform that supports OSC. Gesture recognition is managed by extending qTUIO (Belleh and Blankenburgs, 2013), a library that implements a TUIO listener on a local UDP socket and forwards the events into the internal event system of Qt. qTUIO is able to recognize gestures made with one finger, e.g., dragging an object; two fingers are supported only for zoom-in and zoom-out management. Since users usually move windows and objects with the whole hand, qTUIO is only a first step toward a portable solution for the complete management of multi-touch interaction. For this reason, the Touch Manager extends this library to recognize and manage gestures involving more than one finger, e.g., multi-touch pan and pinch, scroll, drag and rotation using up to five fingers.

¹ We must note here that the framework development is almost complete; therefore, even though Tactive is extensible, it is unlikely that a developer of applications will need to implement a new type of widget.
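For illustration only, once touches reach Qt's event system (as qTUIO provides), a widget could observe multi-finger input roughly as follows; this naive pan detector is our sketch, not Tactive's actual gesture recognizer:

```cpp
#include <QEvent>
#include <QTouchEvent>
#include <QWidget>

// Naive sketch of multi-finger input handling on top of Qt touch events.
class TouchArea : public QWidget {
public:
    TouchArea() { setAttribute(Qt::WA_AcceptTouchEvents); }  // opt in to touch
protected:
    bool event(QEvent *ev) override {
        if (ev->type() == QEvent::TouchUpdate) {
            QTouchEvent *touch = static_cast<QTouchEvent *>(ev);
            const QList<QTouchEvent::TouchPoint> pts = touch->touchPoints();
            if (pts.count() >= 2 && pts.count() <= 5) {       // up to 5 fingers
                QPointF delta(0, 0);
                for (const QTouchEvent::TouchPoint &p : pts)
                    delta += p.pos() - p.lastPos();           // per-finger move
                delta /= pts.count();                          // centroid motion
                // interpret delta as a multi-finger pan of the window ...
            }
            return true;
        }
        return QWidget::event(ev);
    }
};
```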
Since Tactive allows more than one application to be launched at the same time, another problem arises, i.e., the management of application audio. In fact, if many applications use the audio interface at the same time, the result can be a big uproar, and it can be very difficult for users to understand the audio messages. Consider, as an example, the case in which two users simultaneously play two demonstration videos: the audio messages overlap and neither user is able to easily follow the video. The situation is even worse with more users. For this reason, Tactive implements a component called the Audio Manager, which is able to manage simultaneous audio. Audio messages are classified by the content editor according to their nature, i.e., soundtrack or spoken comment. Several soundtracks can play together; two spoken comments cannot, so one of the two audio streams (and the video, if it is the audio comment of a video) is suspended until the end of the first one. To decide which audio is paused, the Audio Manager allows priority classes to be defined, or uses a first-in, first-served policy if no priority was defined by the content editor.

3.2 Communication between different windows

An important component of our architecture is the Windows Manager. Given the dimensions of the tabletop, concurrent interaction by more than one user is an important issue to consider. As an example, users can compete for space on the surface. For this reason, when a new window is opened (whether by a new user or not), this operation can require resizing all the other windows already present on the table. Moreover, actions from a particular user may affect the behavior of the windows of other users. To allow the easy implementation of applications with these kinds of features, Tactive implements a windows manager and a communication protocol between windows, provided by the Message Dispatcher. Consider, as an example, an application with a map, e.g., a map of a city with the list of its museums, or a map of an exhibition with the position of the stands. The map can be rendered with HTML5 in a WebView (see Figure 3). If the user touches a museum or a stand, the application opens a new window with the web site of (or a page dedicated to) the museum/stand, and the user can interact with this window, resize it, or move it across the table. If the user touches the “go to the map” button on the new window, the initial window with the map is moved over the current window of the user. Figure 3 shows a screenshot from an application developed for a local fair. To implement this behavior, a communication protocol between windows has been developed. The communication protocol allows the developer to change the content or the behavior of a window based on the behavior of, or user interaction with, another window. Each WebView communicates with the Tactive software layer, which acts as a windows manager. We need a windows manager instead of a simple communication protocol between windows, widgets or WebViews because only the windows manager knows how many windows are currently open on the surface, where they are, and how they are interacting with the user; each window knows only the information about itself, and nothing about the others. Moreover, the use of a windows manager allows easy recovery from the failure of a single window, since the manager records a set of information for each window and is able to stop, suspend or restart it.
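The paper does not detail the dispatcher's internals; a highly simplified sketch of how a windows manager might route such messages between windows (all names here are illustrative, not Tactive's API) could look like this:

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch of message routing inside the windows manager: only
// the manager knows every open window, so cross-window messages go through it.
using WindowId = int;

struct Message {
    WindowId    from;
    std::string body;     // e.g. "questionnaire-completed"
};

class MessageDispatcher {
    std::map<WindowId, std::function<void(const Message&)>> handlers;
public:
    void registerWindow(WindowId id, std::function<void(const Message&)> h) {
        handlers[id] = std::move(h);
    }
    void unregisterWindow(WindowId id) { handlers.erase(id); }  // window closed
    void send(WindowId to, const Message& msg) {
        auto it = handlers.find(to);
        if (it != handlers.end()) it->second(msg);  // deliver to target window
    }
};
```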
Tactive implements the windows management using the C++ language to address performance issues. Moreover, it offers developers of multi-touch applications a Javascript API to manage events triggered by Tactive inside the web applications that use our framework on multi-touch interactive surfaces. The Javascript API allows a window to be enlarged, resized, minimized, closed or moved in response to a user interaction, also affecting other windows. Moreover, using this API it is possible to send a message to a widget active in another window through the Message Dispatcher. Consider, as an example, an advergame: the user gains coins to play a slot machine by answering a questionnaire. When he or she completes the questionnaire, the window with the questions sends a message to the slot machine, enabling the user to play. This communication between windows is enabled by the Javascript API, which is used to compose the message and trigger the event, through the Message Dispatcher, to the Windows Manager, which is in charge of delivering the right response to the right window.

Figure 4: Screenshot from an application developed for doctors' training.

4 CASE STUDIES AND DISCUSSION

The framework Tactive has been used to develop six applications in completely different contexts, ranging from fair expositions to the launch of a new product. In this section we describe two success stories and report some data on how deeply the use of this framework impacts the development of a multi-touch application. The first success story is an application to improve learning activities, developed for a local company. The context of use was the training of physicians. The application puts four physicians around a table, two per side. Each physician has different materials and documents, i.e., medical records, laboratory diagnoses, x-rays, etc., about a single patient with a particular disease. No physician has enough material to identify the disease affecting the patient without the help of the data held by the other doctors. Figure 4 shows a screenshot of the interface. Different content is delivered to each workspace. The goal is to improve the physicians' communication strategies and their ability to work together. The doctors can create new windows to share content, drag a window around the table surface, rotate it, and zoom in and out to better understand a picture, e.g., an x-ray, or a video, e.g., an ultrasound scan. When a doctor shares his own material by dragging it to the center of the table, the other windows are minimized, to better focus the other doctors' attention on that particular medical data. The application was created using Tactive, so the developer only needed to assemble the content into web pages. The Javascript API was used to implement the communication between windows, i.e., to minimize all the windows when a physician puts some data in the center of the table. Thanks to our software layer, the development process was reduced to content creation, which required 45 man-days of a developer. The development of the same application using the C++ language on a TouchWindow Slice Table Multi-Touch (Touchwindow S.r.l., 2013) required one man-year;² our framework therefore saves about 86% of the time, as reported in Table 1. The same application was used during 15 different one-day courses for physicians, using the same structure and changing only the content, i.e., the text in the web pages, but not the structure of the pages.
4 CASE STUDIES AND DISCUSSION

The Tactive framework has been used to develop six applications in completely different contexts, ranging from fair expositions to the launch of a new product. In this section we describe two success stories and we report some data about how the use of this framework impacts the development of a multi-touch application.

The first success story is an application to improve learning activities, developed for a local company. The context of use was the training of physicians. The application seats four physicians around a table, two per side. Each physician has different materials and documents, i.e., medical records, laboratory diagnoses, x-rays, etc., about a single patient with a particular disease. No physician has enough material to identify the disease affecting the patient without the help of the data held by the other doctors. Figure 4 shows a screenshot of the interface. Different content is delivered to each workspace. The goal is to improve the communication strategies and the ability of the physicians to work together. The doctors can create new windows to share content, drag a window around the table surface, rotate it, and zoom in and out to better understand a picture, e.g., an x-ray, or a video, e.g., an ultrasound scan. When a doctor shares his own material by dragging it to the center of the table, the other windows are minimized, to better focus the other doctors' attention on that particular medical data. The application was created using Tactive, therefore the developer only needed to assemble the content into web pages. The JavaScript API was used to implement the communication between windows, i.e., to minimize all the windows when a physician puts some data at the center of the table. Thanks to our software layer, the development process is reduced to content creation, which required 30 man-days of a developer. The development of the same application using the C++ language on a TouchWindow Slice Table Multi-Touch (Touchwindow S.r.l., 2013) required one man-year²; therefore our framework saves about 86% of the time, as reported in Table 1. The same application was used during 15 different one-day courses for physicians, using the same structure and changing only the content, i.e., the text in the web pages, but not the structure of the pages. This adaptation process required only one day of work of a web content editor. Moreover, the application can run on any tabletop, independently of the operating system³ or the size of the surface.

The second case study is an application developed for the launch of a new product of a leading company in the car market. In this case, the application was used by a single speaker who, during his presentation, switched between an interactive slideshow, several videos, and some online demos on a web site. Figure 5 shows the menu which allows choosing a video for the presentation. The main issue for this application was mixing both offline and online content: the "traditional" software building blocks used for tabletop UIs would have required developing the application from scratch, loading it with the offline content (videos and slideshows) and linking the online content into a WebView or a browser. Such an application would have required four weeks of an FTE (Full Time Equivalent) software developer, and an implementation using Flash would have required ten man-days, as reported in Table 1. Using the Tactive framework, each piece of content was linked into a different web page and published online, including the main menu page: the overall activity required 3 days of an FTE web developer, thus saving 85% of the time with respect to an implementation using a native SDK, and 70% with respect to a Flash implementation.

Table 1: Impact of the use of Tactive on the development time for the 6 applications developed using this framework. The development time is expressed in man-days. We assume that a man-week equals 5 man-days, a man-month corresponds, on average, to 20 man-days, and a man-year corresponds, on average, to 220 man-days.

<table> <thead> <tr> <th>App</th> <th>Tactive</th> <th>Flash</th> <th>Saving</th> <th>C++</th> <th>Saving</th> </tr> </thead> <tbody> <tr> <td>Success Story 1</td> <td>30</td> <td>–</td> <td>–</td> <td>220</td> <td>86%</td> </tr> <tr> <td>Success Story 2</td> <td>3</td> <td>10</td> <td>70%</td> <td>20</td> <td>85%</td> </tr> <tr> <td>Sculptor Exhibition</td> <td>3</td> <td>10</td> <td>70%</td> <td>15</td> <td>80%</td> </tr> <tr> <td>Innovation Festival</td> <td>5</td> <td>20</td> <td>75%</td> <td>60</td> <td>91%</td> </tr> <tr> <td>Job Event</td> <td>2</td> <td>5</td> <td>60%</td> <td>20</td> <td>–</td> </tr> <tr> <td>Learning App</td> <td>2</td> <td>5</td> <td>60%</td> <td>20</td> <td>–</td> </tr> </tbody> </table>

2 This information has been extracted from a previous realization of the same application, which was not independent of the chosen hardware.
3 Microsoft Windows 7 or superior, Linux and Apple iOS Lion are supported.

The framework has been used to implement four other applications: for a sculpture exhibition, for two fairs, and another learning application with a different kind of interaction with the users. Table 1 reports the time required to implement these applications using our framework. These results are compared with the estimated time required to develop the same applications using Flash and using the C++ language with a native SDK solution; this information has been collected from quotations made during the sale phases of the final product to the customer. We can see that our framework saves between 60% and 75% of the development time with respect to a Flash implementation. This range rises to between 80% and 91% for applications developed using the C++ language and a native SDK.
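For reference, the Saving column in Table 1 is simply the relative reduction in development time, Saving = 1 − (time with Tactive) / (time with the baseline). For Success Story 1 against the native C++ development this gives 1 − 30/220 ≈ 86%, and for Success Story 2 against Flash it gives 1 − 3/10 = 70%, matching the table.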
It is easy to note that this saving is higher for complex applications. Despite this important result in terms of time saving, our framework also introduces some drawbacks. In particular, to allow independence from the underlying hardware, we abstract from its characteristics and implement a software layer which is able to operate with any tabletop. This means that Tactive defines a set of functions common to all tabletop solutions, and does not consider features which are available only on a particular hardware configuration: this choice limits the expressiveness of Tactive, which does not allow using manufacturer-specific features in application development. However, further development of the HTML5 APIs will be considered in future releases of our software in order to mitigate this limitation.

5 CONCLUSION

In this paper we presented Tactive, a software layer for the fast development of portable applications for multi-touch interactive tabletops. The framework is based on modern web technologies and its core unit is developed using the C++ language. The novelty of our approach consists of three points:
- the development of a framework for the creation of applications for multi-touch surfaces which are independent of the hardware and software equipment;
- the possibility to use (and possibly re-use) web pages, which decreases the time spent developing multi-touch applications and does not require learning any new technology. Our experiments show that Tactive allows an important reduction in the time needed for development, between 60% and 91%;
- finally, no other software framework provides easy communication between different windows of the same application.

Moreover, our framework extends the qTUIO library to manage the recognition of gestures made with up to five fingers. Future work will be dedicated to the implementation of an API to manage Near Field Communication (NFC). NFC is a technology that provides short-range (up to a maximum of 10 cm) and bi-directional wireless connectivity. The idea is to save the state of the user, in terms of open documents and windows and of which window is currently active, and to re-create the entire workspace in the correct state every time that user approaches the system.
{"Source-Url": "http://www.math.unipd.it/~gaggi/doc/webist14touch.pdf", "len_cl100k_base": 6101, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24281, "total-output-tokens": 7714, "length": "2e12", "weborganizer": {"__label__adult": 0.000324249267578125, "__label__art_design": 0.0011167526245117188, "__label__crime_law": 0.0002491474151611328, "__label__education_jobs": 0.0004427433013916016, "__label__entertainment": 9.53078269958496e-05, "__label__fashion_beauty": 0.0001723766326904297, "__label__finance_business": 0.00014352798461914062, "__label__food_dining": 0.00031256675720214844, "__label__games": 0.000713348388671875, "__label__hardware": 0.00287628173828125, "__label__health": 0.0004122257232666016, "__label__history": 0.00024139881134033203, "__label__home_hobbies": 9.864568710327148e-05, "__label__industrial": 0.0003497600555419922, "__label__literature": 0.00017583370208740234, "__label__politics": 0.0001341104507446289, "__label__religion": 0.0003533363342285156, "__label__science_tech": 0.031982421875, "__label__social_life": 5.364418029785156e-05, "__label__software": 0.013671875, "__label__software_dev": 0.9453125, "__label__sports_fitness": 0.00023376941680908203, "__label__transportation": 0.0003581047058105469, "__label__travel": 0.00017845630645751953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34205, 0.02753]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34205, 0.51086]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34205, 0.9014]], "google_gemma-3-12b-it_contains_pii": [[0, 3991, false], [3991, 8482, null], [8482, 13223, null], [13223, 17305, null], [17305, 21676, null], [21676, 25548, null], [25548, 30232, null], [30232, 34205, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3991, true], [3991, 8482, null], [8482, 13223, null], [13223, 17305, null], [17305, 21676, null], [21676, 25548, null], [25548, 30232, null], [30232, 34205, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34205, null]], "pdf_page_numbers": [[0, 3991, 1], [3991, 8482, 2], [8482, 13223, 3], [13223, 17305, 4], [17305, 21676, 5], [21676, 25548, 6], [25548, 30232, 7], [30232, 34205, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34205, 0.06723]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
678cd69f7b8e83a21f6ec8e7a57c0c35a1c09a22
Mapping Pragmatic Class Models to Reference Ontologies

Heiko Paulheim #1, Roland Plendl #2, Florian Probst #3, Daniel Oberle *4
#SAP Research, Bleichstrasse 8, 64283 Darmstadt, Germany
1heiko.paulheim@sap.com 2roland.plendl@sap.com 3f.probst@sap.com
*SAP Research, Vincenz-Priessnitz-Strasse 1, 76131 Karlsruhe, Germany
4d.oberle@sap.com

Abstract—Capturing a domain of discourse in an object-oriented class model and in a reference ontology leads to different results. On the one hand, modeling decisions for class models are motivated by pragmatic and efficiency-related choices; on the other hand, modeling decisions for reference ontologies aim at capturing the semantics of domain terms as precisely as possible. However, information integration scenarios require a coexistence of reference ontologies and class models at runtime. When implementing an integration scenario, objects need to be transformed between both types of models, thus requiring expressive, executable, and bidirectional mappings. In this paper, we present a novel approach for a bidirectional mapping between a class model and an ontology. The approach is both non-intrusive and dynamic: it can be integrated into existing IT systems without having to change the class model, and it does not rely on static 1:1 mappings.

I. INTRODUCTION

The recent W3C Semantic Web recommendations, such as OWL, are increasingly used to specify reference ontologies. The idea of a reference ontology is to be a generic, commonly agreed-upon conceptual model of a domain. To this end, modeling decisions are taken in a way such that the semantics of domain terms are captured as precisely as possible. As an example, consider the ISO 15926 Oil and Gas ontology standard, whose purpose is to provide a lingua franca for Oil and Gas IT systems [1]. One goal of reference ontologies is to facilitate and enable information integration across IT systems. For example, the declared goal of the Norwegian Oil and Gas industry is to enable information integration based on ISO 15926 to support, e.g., the integration of different data sources and sensors, the validation of information delivered by different sources, and the exchange of information within and between companies via web services [2]. This requires that IT systems share information according to the reference ontology, and that they can interpret messages using that ontology. However, existing IT systems are based on database schemas or class models that historically deviate from the reference ontology. In addition, we argue that even for new IT systems, it is likely that their class models or database schemas will deviate from the reference ontology, as demonstrated by [3], [4]. The reason is that class models are task-specific, with a focus on an efficient implementation of an application; in contrast to reference ontologies, modeling decisions are geared towards a pragmatic and efficient model. Due to those differences, one often faces the situation where class models and reference ontologies are incompatible in the sense that a 1:1 mapping between them does not exist. Yet, it is required to use both the pragmatic class model and the reference ontology in parallel and to transform objects between class models and ontologies, as illustrated by the Oil and Gas integration scenario. Practically, due to the dominant paradigm of object-oriented programming, this means that programming models supporting arbitrary mappings between class models and ontologies are needed.
Despite the existence of sophisticated solutions for model mapping [5] and database integration [6], the existing approaches for mapping class models to ontologies are still limited with respect to supporting arbitrary mappings. In this paper, we contribute a novel approach for bidirectionally mapping between reference ontologies and class models. As required for integrating existing IT systems, the approach is non-intrusive, i.e., it can be implemented without having to change the class model. The approach is dynamic, i.e., it does not rely on static 1:1 mappings between the class model and the reference ontology. Instead of using static mappings between the class model and the ontology, our approach uses rules which are dynamically evaluated at run-time. We start by introducing a scenario of information integration in the area of emergency management in Section II. This scenario originates from building a prototype in the context of the German lighthouse project called SoKNOS. The section also highlights typical deviations between class models and reference ontologies which go beyond a 1:1 mapping. Section III discusses how mappings for both directions are specified in the form of rules, followed by a discussion of the architectural design in Section IV. A scalability and performance evaluation can be found in Section V. Finally, we discuss related work and give a conclusion in Sections VI and VII, respectively.

II. MOTIVATING SCENARIO

Emergency management is a domain where semantically unambiguous information exchange between organizations is crucial. Fighting large incidents like floods or earthquakes requires close cooperation between a large number of organizations. If such an incident occurs, an emergency headquarters is put into place. The command staff operating the headquarters has the task of planning countermeasures and coordinating the joint efforts of all involved organizations. Since more and more organizations use IT systems, the challenge of system interoperability arises. In the project SoKNOS, a prototype of an emergency management infrastructure has been developed, which integrates both newly developed and existing IT systems, ranging from geographical information providers to databases for managing resources. The SoKNOS system enables the command staff to quickly integrate new information services, such as map overlays or sensor information, but it also enables the integration of complete modules from other systems, such as modules displaying the currently available resources [7], [8]. For facilitating information exchange between the IT systems, we have developed a reference ontology of the emergency management domain [9], which defines the categories and relations of relevant terms in the emergency management domain, based on the formal top-level ontology DOLCE [10]. The SoKNOS prototype includes a number of system components developed in Java, which use a simplified, pragmatic class model for information processing. Several deviations occur between such a class model and a reference ontology. For example, measures in emergency management, such as extinguishing a particular fire, have to be planned and carried out. Therefore, both planned measures and measures that are actually performed have to be stored by an emergency management system.
Fig. 1 shows a multi-purpose class, a typical deviation between a pragmatic class model and a reference ontology: the class Measure contains a flag type indicating whether an instance is an actual or a planned measure. In the ontology, the two belong to different categories which do not even have a meaningful common super category. Information exchange in the SoKNOS prototype requires the IT systems to represent and share information according to the reference ontology, while the information itself resides in the corresponding objects. Therefore, a mapping from objects to the ontology has to be put in place, and vice versa. However, objects of a class cannot be mapped statically at design time. In our example, mapping a Measure object to either MEASURE or PLANNED MEASURE can only be performed at run-time, by inspecting the Measure object and dynamically assigning a mapping based on the value of the type flag. Besides multi-purpose classes, there are a number of other deviations that can occur due to the differences between class models and reference ontologies:
- **Multi purpose relations**, just like multi-purpose classes, are relations between objects that are used for expressing different relations in the ontology, distinguished by a flag or by implicit background knowledge, such as naming conventions of the related object. A typical example would be a String attribute contactData which, if it contains an @ symbol, is interpreted as an e-mail address, otherwise as a telephone number.
- **Artificial classes** are classes that do not have a corresponding category in the ontology, but hold some meaningful information, such as a class CustomerData containing both address and payment information.
- **Relation classes** are classes that do not correspond to a category, but to a relation in the ontology, e.g., a Marriage class corresponding to a MARRIED TO relation in the ontology.
- **Shortcuts** are direct relations between objects that only exist as indirect relations via intermediate categories in the ontology, e.g., a Customer object having a ZIP code attribute, while the connection between the CUSTOMER and the ZIP CODE category involves the intermediate category CITY in the ontology.
- **Conditional objects** are objects whose existence is determined by a flag. Most commonly, this occurs as a deleted flag indicating that the object has been deleted.
- **Object counting** occurs if a set of relations to other objects is represented by the number of those objects, e.g., using an attribute numberOfDamages instead of a set of damage objects.
- **Non atomic attributes** are attributes that contain information belonging to several categories in the ontology. A typical example is an attribute such as address, which contains both the street name and the street number; these are often mapped to two different categories in a reference ontology.

III. MAPPING SPECIFICATION

Simply mapping one element from the class model to a category in the ontology is insufficient, as we have learned in the previous section. Instead, arbitrary mappings between class models and ontologies are required, which in turn necessitate an expressive representation formalism to specify the mappings. Therefore, a more versatile solution is needed, which allows for declaratively expressing dynamic mappings and interpreting those mappings at run-time. We use rules for expressing dynamic mappings between class models and ontologies (and vice versa); the sketch below illustrates the run-time mapping decision on the Measure example.
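The following is a minimal Java sketch of the multi-purpose class of Fig. 1 and of the run-time mapping decision, using the Jena RDF API that the prototype also relies on (modern Jena package names are used here). It is illustrative only: the namespace, the category names, and the hard-coded mapper stand in for the declarative rules described next, and are not the actual SoKNOS rule syntax.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

/** The multi-purpose class of Fig. 1: one Java class, two ontology categories. */
class Measure {
    enum Type { PLANNED, PERFORMED }
    final String id;
    final Type type;
    Measure(String id, Type type) { this.id = id; this.type = type; }
}

/** Sketch of a dynamic (run-time) mapping decision based on the type flag. */
class MeasureMapper {
    // Hypothetical namespace; the real SoKNOS ontology URIs differ.
    static final String NS = "http://example.org/emergency#";

    static Resource toRdf(Measure m, Model model) {
        Resource r = model.createResource(NS + "measure/" + m.id);
        // The target category cannot be fixed at design time: it depends
        // on the value of the flag, so the rule body inspects the object.
        String category = (m.type == Measure.Type.PLANNED)
                ? "PlannedMeasure" : "PerformedMeasure";
        r.addProperty(RDF.type, model.createResource(NS + category));
        return r;
    }

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        toRdf(new Measure("42", Measure.Type.PLANNED), model);
        model.write(System.out, "TURTLE");
    }
}
```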
The rules can be evaluated at run-time on objects, to create the desired RDF graph describing the object, and on an RDF graph, to create the corresponding set of objects. Fig. 2 shows how the mapping rules are used to transform a Java object into an RDF graph and back, using the example depicted in Fig. 1. The next sections explain the two rule syntaxes in detail.

A. Mapping Class Models to Ontologies

Our mapping approach uses tests on the objects to be mapped as rule bodies, and a set of RDF triples to be produced as rule heads. For each class, one rule set is defined. In cases where objects have relations to other objects, the rule sets of the corresponding related classes are executed when processing a related object. Rule sets defined for super classes are inherited by sub classes; however, the developer may also override inherited rules explicitly. As the example depicted in Fig. 2 shows, the tests on the objects are defined using XPath expressions. The result of the expression can be used as a variable in the RDF triple to be generated (referred to with the % sign), as can the current object (referred to with the . sign). The uri function used in a triple generates a unique URI for an object. The triples may also contain blank nodes, which are needed, e.g., to cope with shortcuts. The XPath expressions may also contain regular expressions to deal with implicit background knowledge and non-atomic data types. These can be used for conditions (e.g., does the contactData attribute contain an @ symbol?) or for splitting data values, e.g., separating street names from numbers. A repeat function can be used to cover object counting deviations (e.g., produce a set of n blank nodes for an attribute value of n).

B. Mapping Ontologies to Class Models

The mapping rules from ontologies to class models are similar. Again, rule sets are defined per ontology category, are inherited by sub categories, and rules can be explicitly overridden by the developer. For rule bodies, SPARQL expressions are used. In the rule heads, objects are created by using createObject, and attribute values for created objects are set (by using getObject and an XPath expression identifying the attribute to be set). The execution order of the rules is defined such that all createObject statements are executed first, ensuring that all objects are created before attempting to set attribute values. For determining which categories an RDF instance belongs to, and for executing SPARQL statements, reasoning on the RDF graph and the domain ontology can be used. To cover non-atomic data types, the rules may use a concat function for concatenating different results of a SPARQL query. For coping with object counting, a count function can be used to produce the corresponding attribute values (once SPARQL version 1.1, which supports counting, becomes a standard, this will be obsolete). Although the rules for both directions look similar, there is one subtle difference: the XPath expressions used on the object model are executed with closed-world semantics, while the SPARQL expressions used on the RDF model are executed with open-world semantics. The rationale is that the set of objects in an information system is completely known (and therefore forms a closed world), whereas an RDF graph representing a set of objects will typically only represent a subset of the original information.
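Section IV mentions that the prototype evaluates such XPath rule bodies with Apache Commons JXPath. The following minimal sketch shows the mechanism on a hypothetical Measure bean; the condition shown is illustrative, not the prototype's actual rule syntax.

```java
import org.apache.commons.jxpath.JXPathContext;

/** Minimal sketch: evaluating an XPath-style rule condition on a plain Java object. */
public class RuleConditionDemo {
    public static class Measure {
        private String type = "PLANNED";
        public String getType() { return type; }   // Java Beans accessor
    }

    public static void main(String[] args) {
        JXPathContext ctx = JXPathContext.newContext(new Measure());
        // A test such as "type = 'PLANNED'" decides at run-time which
        // ontology category the object is mapped to.
        Object value = ctx.getValue("type");
        System.out.println("maps to: "
                + ("PLANNED".equals(value) ? "PLANNED MEASURE" : "PERFORMED MEASURE"));
    }
}
```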
IV. ARCHITECTURE

Fig. 3 shows the architecture of our implemented prototype, with two IT systems involved in an information exchange. As a first step, Java→RDF and RDF→Java mapping engines have to be put in place. As a second step, Java→RDF rules for mapping each class and RDF→Java rules for mapping each ontology category have to be specified, as discussed in the previous section. The rules have to be registered in the corresponding mapping engine. At runtime, the IT systems are handled as black boxes with an API, following the paradigm of non-intrusiveness. When an object from an IT system is retrieved via its API and needs to be exchanged, an annotation in RDF is generated which conforms to the reference ontology. The exchange of objects between IT systems is controlled by object exchange components. They retrieve objects from and hand over objects to the IT systems' APIs, invoke the corresponding mapping engines, and control the transfer to other object exchange components. Let us assume IT System 1 needs to exchange information with IT System 2. For generating the RDF representation, the rules are processed by a Rule Engine. For executing the rules, an Object Inspector evaluates conditions, and a URI Factory creates unique URIs for each object. An RDF Writer unifies the outcomes of all relevant rules and creates the RDF annotation. On the side of IT System 2, an object has to be created from the received RDF annotation. The RDF Reader first reads the received annotation, and another Rule Engine processes the mapping rules. A SPARQL Processor evaluates conditions, and an Object Factory eventually creates the objects. The objects can then be used for calling IT System 2's API. When exchanging objects, it may be desirable not to transfer all RDF statements that can be generated for a particular object, but only a subset. Since connected object graphs can be fairly large, the resulting RDF annotation can grow considerably in size, thus slowing down the transmission as well as the transformation back to objects. Therefore, an RDF Filter can be applied for reducing the set of RDF statements to be transmitted. The filter uses RDF templates, which consist of blank nodes and labeled relations, according to the reference ontology. When applying a template to an RDF graph, only those resources are kept which are contained in the template. Although the filtering could also be done by defining a subset of rules, we have chosen to keep the two issues separated: the rules define all possible annotations that can be generated for a particular object and are therefore universal, while the templates define the subset that is needed for a particular use case, e.g., transmission to another IT system. It is possible to use the same set of rules with different sets of templates for different use cases. For example, information exchange between systems owned by the same company might include more information, while the information exchanged with systems outside of the company is restricted. We have implemented the solution sketched above in a Java-based prototype, using JXPath² for evaluating the XPath expressions on Java objects and Jena [11] for RDF processing. The parsers for the rules were generated with JavaCC.³ By using Java's reflection API [12] and relying on the Java Beans specification [13] (a naming convention for object constructors and for methods accessing property values), the implementation is non-intrusive and does not require changes to the underlying class model.
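The reliance on reflection and the Java Beans convention is what makes the approach non-intrusive. The following is a minimal sketch of how an Object Factory can instantiate a class and set a property without any change to the class itself; the Measure bean and the helper are illustrative, not the prototype's actual code.

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

/** Sketch of non-intrusive object creation via reflection and Java Beans. */
public class ObjectFactoryDemo {
    public static class Measure {
        private String type;
        public Measure() { }                        // Beans convention: no-arg constructor
        public String getType() { return type; }
        public void setType(String t) { type = t; }
    }

    static Object create(Class<?> cls, String property, Object value) throws Exception {
        Object obj = cls.getDeclaredConstructor().newInstance();
        for (PropertyDescriptor pd : Introspector.getBeanInfo(cls).getPropertyDescriptors()) {
            if (pd.getName().equals(property)) {
                pd.getWriteMethod().invoke(obj, value);  // setter found by introspection
            }
        }
        return obj;
    }

    public static void main(String[] args) throws Exception {
        Measure m = (Measure) create(Measure.class, "type", "PLANNED");
        System.out.println(m.getType());            // prints PLANNED
    }
}
```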
V. EVALUATION

We have evaluated our mapping prototype with the class model used in the SoKNOS prototype and the reference ontology of emergency management described in [9]. The SoKNOS class model consists of 58 classes with 30 attributes and 25 relations. The ontology consists of 156 categories and 136 relations. The numbers of occurrences of the deviations in the mapping between both are depicted in Table I. Thus, about 16% of all model elements (classes, attributes, and relations) and 6% of all ontology elements (categories and relations) are involved in some of the deviations.

TABLE I
DEVIATIONS IN THE SOKNOS PROTOTYPE.
<table> <thead> <tr> <th>Deviation type</th> <th>No. of Occurrences</th> </tr> </thead> <tbody> <tr> <td>Multi purpose class</td> <td>3</td> </tr> <tr> <td>Multi purpose relation</td> <td>2</td> </tr> <tr> <td>Artificial class</td> <td>0</td> </tr> <tr> <td>Relation class</td> <td>1</td> </tr> <tr> <td>Shortcuts</td> <td>3</td> </tr> <tr> <td>Conditional objects</td> <td>2</td> </tr> <tr> <td>Object counting</td> <td>3</td> </tr> <tr> <td>Non atomic attributes</td> <td>4</td> </tr> </tbody> </table>

Our mapping prototype was able to handle most of the deviations. The only problematic cases are non-atomic data types. In cases where a convention exists that can be formalized, such as "a street name never contains numbers, and the street number always starts with a digit between 1 and 9," our approach can handle them by using regular expressions (see the sketch at the end of this section). Cases where such a formalization cannot be found, such as separating first and last names, cannot be handled automatically without human intervention. We processed different, artificially created sets of 10 to 10,000 connected objects in order to test the performance and scalability of our approach. The tests were run on a Windows PC with a Pentium 3.6GHz dual-core processor and 2GB of RAM. Fig. 4 shows the processing times for producing RDF annotations for Java objects. The processing times scale linearly; the average time for processing one object is below one millisecond. Fig. 5 shows the corresponding processing times for creating Java objects from RDF annotations. For processing SPARQL statements, different reasoning options can be used. We tested our approach with three built-in reasoners of Jena [11]: the transitive reasoner, which only uses rdfs:subClassOf and rdfs:subPropertyOf statements, the RDFS reasoner, and the OWL reasoner, which covers OWL Lite reasoning. Except for the latter, which creates some overhead for larger sets of objects, creating an object from RDF takes less than ten milliseconds, and the processing times scale linearly.

Fig. 4. Run time for creating the RDF annotation for a set of Java objects.
Fig. 5. Run time for creating Java objects from a set of annotations, using different types of reasoning.

²http://commons.apache.org/jxpath/
³https://javacc.dev.java.net/
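As a concrete illustration of the formalizable case quoted above, the following sketch splits a non-atomic address attribute with a regular expression. The pattern and class names are our own example, not the prototype's rule syntax.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Sketch: splitting a non-atomic address attribute under the convention
 * that the street name never contains numbers and the street number
 * always starts with a digit between 1 and 9.
 */
public class AddressSplitDemo {
    // Group 1: street name (no digits); group 2: number starting with 1-9.
    private static final Pattern ADDRESS =
            Pattern.compile("^([^0-9]+?)\\s*([1-9][0-9a-zA-Z]*)$");

    public static void main(String[] args) {
        Matcher m = ADDRESS.matcher("Bleichstrasse 8");
        if (m.matches()) {
            System.out.println("street: " + m.group(1));  // "Bleichstrasse"
            System.out.println("number: " + m.group(2));  // "8"
        }
    }
}
```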
VI. RELATED WORK

In this paper, we have discussed an approach for creating annotations for Java objects, and for re-constructing Java objects from such annotations. Such approaches can be roughly categorized into generative, intrusive, and non-intrusive approaches. The latter are the only approaches that can be applied when the developer cannot or must not change the class model, be it for technical or for legal reasons.

Generative approaches create a class model from an ontology. They can thus be seen as a special case of model-driven architecture (MDA) [14]. Examples of generative approaches are RDFReactor [15], OWL2Java [16], OntoJava [17], and agogo [18]. As a special case, the class model may also be generated on-the-fly at run-time when using dynamically typed scripting languages, as shown with ActiveRDF [19] for Ruby and Tramp6 for Python. For the developer, such approaches are more comfortable, since they do not require additional tools such as a code generator, but work directly out of the box by invoking an API. In general, generative approaches usually create one Java class per ontology category; therefore, a 1:1 mapping exists by definition. Generative approaches are a suitable solution for developing new IT systems, but not for integrating existing ones.

Intrusive approaches do not generate new Java classes from an ontology, but modify (i.e., intrude into) an existing class model by adding additional code fragments, such as Java annotations or additional methods and/or attributes. Examples of intrusive approaches are otm-j [20], Sommer,7 Texai KB [21], RDFBeans,8 Jenabean,9 and P-API [22]. All those approaches require a 1:1 mapping between Java classes and categories in the ontology.

Non-intrusive approaches can produce annotations without altering the class model. The only examples we found are ELMO10 and the EMF-RDF transformations introduced in [23]. Both approaches require that the "domain model should be rather close to the ontology" [23] and cannot cope with most of the deviations discussed above. While some of the approaches also cover both directions of conversion – from Java to RDF and from RDF to Java – and can cope with some of the deviations discussed above, to the best of our knowledge, there are no solutions that can cope with all the deviations and can therefore support arbitrary mappings between class models and reference ontologies.

6http://www.aaronsw.com/2002/tramp/
7http://code.google.com/p/jenabean/
8http://code.google.com/p/rdfbeans/
9http://code.google.com/p/jenabean/
10http://www.openrdf.org/doc/elmo/1.5/

VII. CONCLUSION AND OUTLOOK

We have discussed a number of typical deviations between class models and reference ontologies. In a use case from the emergency management domain, we have shown that the deviations do occur in actual IT systems and ontologies; we argue that other domains, e.g., the Oil and Gas domain, are affected by such deviations as well. We have introduced an approach for mapping class models to ontologies. Our approach uses rules both for creating RDF representations of objects and for re-creating objects from RDF, thus supporting information exchange as RDF between potentially heterogeneous class models. A prominent application area of ontologies is facilitating interoperability between heterogeneous information systems by allowing for semantically unambiguous information exchange. As this may involve IT systems that cannot or must not be altered for technical or legal reasons, our approach has been implemented in a non-intrusive way. It supports arbitrary mappings between class models and ontologies. We have further evaluated the performance and scalability of our approach, showing that the transformations are performed within milliseconds, growing linearly with the number of objects. The paper has focused on the use case of information integration and exchange. However, the mechanism of mapping different class models to one common ontology may also be used to make the data in different applications available as a unified linked data set, allowing for reasoning and for unified visualization [24]. Currently, the developer has to develop the mapping rules by hand.
In the future, techniques developed, e.g., in the field of ontology matching [25] may be employed to assist the user; however, the state of the art in ontology matching is most often focused on discovering 1:1 mappings.
{"Source-Url": "http://www.ke.tu-darmstadt.de/bibtex/attachments/single/267", "len_cl100k_base": 4919, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22029, "total-output-tokens": 7245, "length": "2e12", "weborganizer": {"__label__adult": 0.0003185272216796875, "__label__art_design": 0.0004906654357910156, "__label__crime_law": 0.0004467964172363281, "__label__education_jobs": 0.0008444786071777344, "__label__entertainment": 8.64863395690918e-05, "__label__fashion_beauty": 0.00015985965728759766, "__label__finance_business": 0.0002963542938232422, "__label__food_dining": 0.0003330707550048828, "__label__games": 0.0004780292510986328, "__label__hardware": 0.0006012916564941406, "__label__health": 0.0005626678466796875, "__label__history": 0.00031638145446777344, "__label__home_hobbies": 8.749961853027344e-05, "__label__industrial": 0.0004346370697021485, "__label__literature": 0.0004391670227050781, "__label__politics": 0.0003097057342529297, "__label__religion": 0.0004818439483642578, "__label__science_tech": 0.07000732421875, "__label__social_life": 0.0001195073127746582, "__label__software": 0.0178680419921875, "__label__software_dev": 0.904296875, "__label__sports_fitness": 0.0002529621124267578, "__label__transportation": 0.0004887580871582031, "__label__travel": 0.0002187490463256836}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30416, 0.02504]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30416, 0.61707]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30416, 0.88622]], "google_gemma-3-12b-it_contains_pii": [[0, 4824, false], [4824, 10268, null], [10268, 14560, null], [14560, 17733, null], [17733, 22647, null], [22647, 30416, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4824, true], [4824, 10268, null], [10268, 14560, null], [14560, 17733, null], [17733, 22647, null], [22647, 30416, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30416, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30416, null]], "pdf_page_numbers": [[0, 4824, 1], [4824, 10268, 2], [10268, 14560, 3], [14560, 17733, 4], [17733, 22647, 5], [22647, 30416, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30416, 0.08403]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
d78fba86c059195056a520d48135a6465ca214cf
Grouping and joining transformations in the data extraction process

Marcin Gorawski*, Paweł Marks
Institute of Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
*Corresponding author: e-mail address: Marcin.Gorawski@polsl.pl

Abstract

In this paper we present a method of describing ETL (Extraction, Transformation and Loading) processes using graphs. We focus on implementation aspects such as the division of the whole process into threads, communication and data exchange between threads, and deadlock prevention. Methods of processing large data sets with insufficient memory resources are also presented, using the joining and grouping nodes as examples. In a few tests, the efficiency of our solution is compared with that of OS-level virtual memory; the results are presented and discussed.

1. Introduction

Nowadays data warehouses gather tens of gigabytes of data. Before being loaded into the warehouse, the data is often read from many various sources. These sources can differ in terms of data format, so proper data transformations must be applied to make the data uniformly formatted. In consecutive steps the data set is filtered, grouped, joined, aggregated and finally loaded into a destination. The destination can be one or more warehouse tables. The whole process of reading, transforming and loading data is called the data extraction process (ETL). The transformations used in the ETL process differ in complexity. A few of them are simple (e.g. filtration, projection), whereas others are long-lasting and require a lot of operational memory (e.g. grouping, joining). However, a common feature of the transformations is that each one has at least one input or one output. This allows describing the extraction process using a graph whose nodes correspond to objects performing some operations on tuples, and whose edges define data flow paths.

Most commercial tools, like Oracle WB, do not consider the internal structure of transformations and the graph architecture of ETL processes. Exceptions are the works [1,2], where the authors describe the ARKTOS (ARKTOS II) ETL tool. It can (graphically) model and execute practical ETL scenarios, providing primitive expressions that bring control over the typical tasks using a declarative language. Work [3] presents advanced research on prototypes including the AJAX data cleaning tool. To optimize the ETL process, a dedicated extraction application is often designed, adjusted to the requirements of a particular data warehouse system. Based on the authors' experiences [4,5], a decision was made to build a developmental ETL environment using JavaBeans components. A similar approach was proposed in the meantime in work [6]: a J2EE architecture with the ETL and ETLLet container was presented there, providing efficient ways of executing, controlling and monitoring ETL process tasks for the continuous data propagation case. The need to further speed up the ETL process forced us to give up the JavaBeans platform. The ETL-DR environment [7] is a successor to ETL/JB and DR/JB [8]. It is a set of Java object classes used by a designer to build extraction applications, analogous to the JavaBeans components in the DR/JB environment. However, object properties are saved in an external configuration file, which is read by an environment manager object. This relieves us from recompiling the application each time the extraction parameters change.
Compared with ETL/JB and DR/JB, we significantly improved the processing efficiency of the most important transformations: grouping and joining. The possibility of storing data on a disk was added for the case when the data set requires much more memory than is available. In the following sections we present in detail a method of describing ETL processes using graphs, and we show how this description influences the implementation. The problems resulting from the graph usage are also discussed, and methods of processing data with insufficient memory resources are presented.

2. Extraction graph

Operations performed during the extraction process can be divided into three groups:
- reading source data,
- data transformations,
- writing data to a destination.

Fig. 1. One of the simplest extraction graphs. Node E is an extractor, node T is a transformation, and node I is an inserter.

The nodes belonging to the above operation groups are, respectively: extractors (E), transformations (T) and inserters (I). From the graph point of view, extractors have only outputs, transformations have both inputs and outputs, whereas inserters contain inputs only. By connecting inputs to outputs we create a connection net that defines the data flow paths (Fig. 1). The data flow inside a node is possible in one direction only, from the inputs to the outputs; flow in the opposite direction is forbidden. It is also assumed that the connection net does not contain closed loops, which means it is not possible to enter the same graph node twice while traversing along a selected path of the graph. Such a net of nodes and connections is called a directed acyclic graph (DAG).

3. ETL-DR data extraction environment

ETL-DR is our research environment designed in Java. It uses the extraction graph idea presented above to describe extraction processes. During processing, each graph node is associated with a thread that is an instance of a transformation, an extractor, or an inserter. The available components are (a conceptual sketch of how they are wired follows below):

1. Extractors
– FileExtractor (FE) – reads tuples from a source file,
– DBExtractor (DE) – reads tuples from a database,
2. Transformations
– AggregationTransformation (AgT) – aggregates a specified attribute,
– FilterTransformation (FiT) – filters the stream of tuples,
– FunctionTransformation (FuT) – user-definable tuple transformation,
– GeneratorTransformation (GeT) – generates an ID for each tuple,
– GroupTransformation (GrT) – grouping,
– JoinTransformation (JoT) – joining,
– MergeTransformation (MeT) – merges two streams of tuples,
– ProjectionTransformation (PrT) – projection,
– UnionTransformation (UnT) – union,
3. Inserters
– FileInserter (FI) – writes tuples to a destination file,
– DBInserter (DI) – writes tuples to a database table via the JDBC interface,
– OracleDBInserter (ODI) – writes tuples to a database using the Oracle-specific SQL*Loader,
4. Specials
– VMQueue (VMQ) – a FIFO queue which can store data on a disk.

Most of the components process data on-the-fly, which means each tuple just received is transformed or analyzed independently, and there is no need to gather the whole data set. The exceptions are the joining node JoT, the grouping node GrT and the VMQ queue.
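As a rough illustration of this node taxonomy and of the one-directional tuple flow along a simple E–T–I path (Fig. 1), here is a minimal Java sketch. The interfaces and the run method are our own simplification, not the actual ETL-DR classes, which additionally run every node in its own thread.

```java
import java.util.List;

/** Minimal sketch: typed graph nodes and a single E -> T -> I data flow path. */
interface Node { int id(); }
interface Extractor extends Node { boolean hasTuples(); Object getTuple(); }
interface Transformation extends Node { Object transform(Object tuple); }
interface Inserter extends Node { void insert(Object tuple); }

class ExtractionGraph {
    /** Tuples flow from inputs to outputs only, as in Fig. 1. */
    static void run(Extractor e, List<Transformation> ts, Inserter i) {
        while (e.hasTuples()) {
            Object tuple = e.getTuple();
            for (Transformation t : ts) tuple = t.transform(tuple);
            i.insert(tuple);
        }
    }
}
```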
3.1. Implementation of graph node interconnections

In order to facilitate the analysis of interconnections between the graph nodes, we have to describe the structure of inputs and outputs of the ETL-DR extraction graph nodes. Each node has a unique ID. Each node input contains the ID of a source node, assigned by the graph designer, and an automatically assigned number of an output channel of the source node. A node output is a multi-channel FIFO buffer with the number of channels equal to the number of inputs connected to the node (Fig. 2). When a node produces output tuples, it puts them into its output, where they are grouped into tuple packets. The upper limit of the packet size is defined by the designer. Packets are gathered in queues, separately for each output channel. The queue size is also limited, to avoid unnecessary memory consumption.

Fig. 2. Node interconnection at the implementation level. Data produced by node 123 are stored in a multi-channel output buffer. The source of node 124 is defined as the node with ID = 123 and logical channel number = 1.

3.2. Data exchange between nodes and the risk of deadlock

Let us analyze the processing performed by the part of the graph presented in Fig. 3a. The function node FuT(11) produces tuples with attributes (eID, date, transactionsPerDay), and the grouping node GrT(12) computes the average number of transactions for each employee. This is similar to the SQL query below:

SELECT eID, AVG(transactionsPerDay) AS avgTPD
FROM GrT_{FuT}
GROUP BY eID

The joining node JoT(13) performs an action defined by the following SQL query:

SELECT s1.eID, s1.date, s1.transactionsPerDay, s2.avgTPD
FROM JoT_{FuT} s1, JoT_{GrT} s2
WHERE s1.eID = s2.eID

Such simple operations as grouping and joining are dangerous because they can cause a deadlock. This is a result of the method of transferring data between node threads. The joining node works as follows: it receives tuples from the slave input and puts them into a temporary buffer, then it receives tuples from the primary input. Each tuple from the primary input is checked against the tuples in the temporary buffer according to the specified join condition. In the presented example, the slave input is the one connected to the grouping node, as it is quite likely that after grouping the size of the data set decreases and a smaller number of tuples has to be kept in memory. Tuples generated by the function node are simultaneously gathered in both output channels of the node, for the nodes JoT(13) and GrT(12). The grouping node aggregates data all the time, but the joining node waits for the grouped data first, and does not yet read anything from the function node. After exceeding the limit of the output queue size, the function node is halted until the queue size decreases below the specified level. This way a deadlock occurs:
- node FuT(11) waits until node JoT(13) starts reading data from it,
- node GrT(12) waits for the data from node FuT(11),
- node JoT(13) waits for the data from node GrT(12).

To eliminate the cause of the deadlock, we have to make sure that the data from the function node FuT(11) are fetched continuously, without exceeding the queue size limit. To do this we created a special VMQueue component. This is a FIFO queue with the ability to store data on a disk. It reads tuples from its input regardless of whether they can be handed on or not. If tuples are fetched from the VMQ node continuously, it does nothing more than transfer data from the input to the output. Otherwise, it writes tuples to the disk in order to avoid overfilling the output queue of its source node. Later, when the VMQueue destination continues processing, the tuples are read from the disk and sent to the queue output. Inserting a VMQueue node between FuT(11) and JoT(13) avoids the deadlock (Fig. 3b).
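The following is a minimal Java sketch of the disk-spilling idea behind VMQueue, under simplifying assumptions (string tuples, a single sequentially written spill file). It is illustrative only, not the ETL-DR implementation.

```java
import java.io.*;
import java.util.ArrayDeque;

/** Sketch of a VMQueue-like FIFO that never blocks its producer. */
class SpillingFifo {
    private final ArrayDeque<String> memory = new ArrayDeque<>();
    private final int memoryLimit;
    private final File spillFile;
    private final BufferedWriter writer;
    private BufferedReader reader;
    private long onDisk = 0;           // tuples currently spilled; preserves FIFO order

    SpillingFifo(int memoryLimit) throws IOException {
        this.memoryLimit = memoryLimit;
        this.spillFile = File.createTempFile("vmq", ".tmp");
        this.writer = new BufferedWriter(new FileWriter(spillFile));
    }

    /** Always accepts a tuple, so the upstream node is never halted. */
    void put(String tuple) throws IOException {
        if (onDisk == 0 && memory.size() < memoryLimit) {
            memory.addLast(tuple);
        } else {                       // sequential append: no random disk access
            writer.write(tuple);
            writer.newLine();
            onDisk++;
        }
    }

    /** Returns the next tuple in arrival order, or null if the queue is empty. */
    String take() throws IOException {
        if (memory.isEmpty() && onDisk > 0) {
            writer.flush();            // make spilled tuples visible to the reader
            if (reader == null) reader = new BufferedReader(new FileReader(spillFile));
            String line;
            while (memory.size() < memoryLimit && onDisk > 0
                    && (line = reader.readLine()) != null) {
                memory.addLast(line);  // sequential refill from the spill file
                onDisk--;
            }
        }
        return memory.pollFirst();
    }
}
```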
3.3. Formal definition of the deadlock-prone graph node subset

A deadlock may occur if two or more data flow paths that split in one node of the graph meet again in another node. In other words, a given node $X$ is connected with one of its direct or indirect source nodes by two or more paths. From this we conclude that node $X$ must have more than one input. Let us denote the set of source nodes of node $X$ by $SourceNodes(X)$, and the set of source nodes of the $i$-th input of $X$ by $InputSourceNodes(X,i)$. We can define:
- $InputSourceNodes(X,i) = SourceNodes(X.in[i].sourceID) \cup \{X.in[i].sourceID\}$
- $SourceNodes(X) = \emptyset$ if $X$ is an extractor,
- $SourceNodes(X) = \bigcup_{i=1}^{n} InputSourceNodes(X,i)$ if $X$ is a transformation or an inserter,
- $CommonNodes(X,i,j) = InputSourceNodes(X,i) \cap InputSourceNodes(X,j)$,
- $LastNode(N) = \{X \in N : SourceNodes(X) = N \setminus \{X\}\}$.

If for each node $X$ of an extraction graph which is not an extractor the following condition is satisfied:

$$\forall_{i \in [1,n]} \; \forall_{j \in [1,n]} \; i \neq j \Rightarrow CommonNodes(X,i,j) = \emptyset$$

then deadlock cannot occur. Otherwise deadlock is possible, and we should insert a VMQueue component into the graph to avoid the application hanging. Insertion of a VMQueue node makes sense only behind the nodes from the set $LastNode(CommonNodes(X,i,j))$, that is, the set of the last nodes of the common part of the two data flow paths. In the example presented in the previous section it was the FuT(11) node (Fig. 3b). A small sketch of this check is given below.
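The condition above can be checked mechanically. A minimal Java sketch follows, using an illustrative adjacency representation (node ID → list of source IDs) rather than the actual ETL-DR node classes.

```java
import java.util.*;

/** Sketch of the deadlock test: two inputs of a node sharing a common source. */
class DeadlockCheck {
    /** inputs.get(x) = list of source node IDs feeding node x. */
    static Set<Integer> sourceNodes(int x, Map<Integer, List<Integer>> inputs) {
        Set<Integer> result = new HashSet<>();
        for (int src : inputs.getOrDefault(x, List.of())) {
            result.add(src);                          // InputSourceNodes includes the source itself
            result.addAll(sourceNodes(src, inputs));  // ... plus its own sources (DAG: terminates)
        }
        return result;
    }

    static boolean deadlockPossible(int x, Map<Integer, List<Integer>> inputs) {
        List<Integer> in = inputs.getOrDefault(x, List.of());
        for (int i = 0; i < in.size(); i++) {
            for (int j = i + 1; j < in.size(); j++) {
                Set<Integer> a = sourceNodes(in.get(i), inputs);
                a.add(in.get(i));
                Set<Integer> b = sourceNodes(in.get(j), inputs);
                b.add(in.get(j));
                a.retainAll(b);                       // CommonNodes(X, i, j)
                if (!a.isEmpty()) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // The graph of Fig. 3a: FuT(11) feeds both GrT(12) and JoT(13); GrT(12) feeds JoT(13).
        Map<Integer, List<Integer>> inputs = Map.of(12, List.of(11), 13, List.of(11, 12));
        System.out.println(deadlockPossible(13, inputs));  // true -> a VMQueue is needed
    }
}
```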
3.4. Temporary data buffering on disk

During an extraction process a large number of tuples is processed. When they need to be buffered, there is the problem of selecting the right place for the buffer. Keeping them in memory is impossible because the size of the data set is usually much bigger than that of the available RAM. The only solution is storing the data on a disk. Two approaches are possible: virtual memory supported by the operating system, or storage implemented at the application level in the algorithms used in the transformation nodes. In our ETL-DR environment the nodes using application-level virtual memory are: VMQueue, GroupTransformation and JoinTransformation.

**VMQueue Component.** As presented in Sect. 3.2, the VMQueue component is a FIFO queue able to store the buffered data on a disk. Its task is to ensure that the data is read from its source as it comes, even if the node receiving data from the VMQueue does not work. In such a case tuples are stored in a disk file rather than put into the output buffer. Later, when possible, the tuples are read from the file and handed on. Because of the sequential access to the disk file, this solution is more efficient than OS-level virtual memory.

**GroupTransformation Component.** A grouping component can work in one of three modes:
1. input tuples are sorted according to the grouping attribute values,
2. tuples are not sorted, grouping in memory,
3. tuples are not sorted, external grouping.

```plaintext
procedure Group()
Begin
  List fileList;
  While Input.hasTuples() do
    Tuple T = Input.getTuple();
    If not HM.contains(Attributes(T)) then
      HM.put(Attributes(T), Aggregates(T));
    End if
    Aggregates AG = HM.get(Attributes(T));
    AG.doAggregate(T);
    If HM.size() > SIZELIMIT then
      fileList.add(WriteToFile(HM));
      HM.clear();
    End if
  End while
  AggrSource as = getSource(fileList, HM);
  Aggregates AG = null;
  While as.hasNext() do
    If AG == null then
      AG = as.next();
    Else
      Aggregates newAG = as.next();
      If newAG.attr == AG.attr then
        AG.aggregate(newAG);
      Else
        ProduceOutputTuple(AG);
        AG = newAG;
      End if
    End if
  End while
  ProduceOutputTuple(AG);
End
```

Fig. 4. External grouping algorithm

In case 1) aggregates are computed as the tuples come, and the memory usage level is very low. In case 2) each new combination of the grouping attributes is saved in a hash table together with the associated aggregates. If such a combination appears again during processing, it is located and the aggregates are updated. The number of entries in the hash table at the end of the processing equals the number of tuples produced. Both cases 1) and 2) use only RAM. Case 3) combines features of the processing used in cases 1) and 2). First, the data set is gathered in the hash table and aggregates are computed (Fig. 4). When the number of entries in the table exceeds the specified limit, the content of the table is written to an external file in sorted order according to the grouping attribute values. Next, the hash table is cleared and the processing continues. Such a cycle repeats until the input tuple stream ends. Then the data integration process is run: tuples are read from the previously created files and the final aggregate values are computed. This is very similar to the case 1) processing, with the exception of getting the data from external files instead of the node input.

**JoinTransformation Component.** A joining node works according to the algorithm presented in Fig. 5. The first step is collecting tuples from the slave input. They can be loaded into a temporary associative array or written to a temporary disk file. Before writing to the file, tuples are sorted according to the joining attributes using an external version of the standard merge-sort algorithm: tuples are gathered in memory; if the limit of tuples in memory is exceeded, they are sorted and written to a file. The next portions of the data set are treated in the same way. Finally, tuples from all the generated sorted files are merged into one big sorted file. Sorting lets us locate any tuple in the external file in $\log(n)$ time using the binary search algorithm.

```plaintext
procedure Join()
Begin
  While Input(2).hasTuples() do
    Tuple T = Input(2).getTuple();
    HM.put(Attributes(T), T);
  End while
  While Input(1).hasTuples() do
    Tuple T = Input(1).getTuple();
    Tuple[] TT = HM.get(Attributes(T));
    For each JT in TT do
      Tuple O = Join(T, JT);
      ProduceOutputTuple(O);
    End for
  End while
End
```

Fig. 5. General joining algorithm

An additional indexing structure located in memory also decreases the searching time by reducing the number of accesses to the file. The index holds the locations of the accessed tuples, which enables narrowing the search range when accessing consecutive tuples. The second phase is the same regardless of whether the temporary buffer is located in memory or on a disk; only the implementation of the HM (HashMap) object changes in the algorithm presented in Fig. 5.
Each tuple from the primary input is checked as to whether it can be joined with tuples in the temporary buffer according to the specified join condition.

4. External processing tests

For the tests we used data files that forced the Java Virtual Machine to use much more memory than was physically available. The tests were performed on a computer with an AMD Athlon 2000 processor running Windows XP Professional. During the tests we varied the size of the available RAM.

4.1. Grouping test

Grouping was tested using the extraction graph containing an extractor FE, a grouping node GrT and an inserter I (Fig. 6). The extractor reads a tuple stream with attributes (eID, date, value), in which, for each employee eID and for each day of his work, the transaction values are saved. The number of employee transactions per day varied from 1 to 20. The processing can be described by the SQL query:

SELECT eID, date, sum(value) as sumVal, count(*) as trCount
FROM GrT_{FE}
GROUP BY eID, date

Fig. 6. Grouping test extraction graph

The processing time was measured depending on the number of input tuples (10, 15, 20 and 25 million) and the type of processing. The result chart contains the total processing time (TT) and the moment of loading the first tuple to the destination, the so-called Critical Time (CT). During all the tests using external grouping (Ext), the JVM was assigned only 100MB of RAM. For grouping in memory, we examined two cases: the JVM memory was set with some margin (Normal) and with the minimal possible amount of RAM (Hard) that guaranteed successful completion of the task. The obtained results are shown in Fig. 7.

Fig. 7. Processing times measured during the grouping test. TT is the total processing time, whereas CT denotes the moment when the first output tuple is produced (Critical Time).

The test computer contained 384MB of physical RAM; for the JVM using virtual memory and for 10m and 15m tuples it was assigned respectively 450MB and 550MB during the Normal test, and 300MB and 425MB during the Hard test. As can be seen, the most efficient processing method is definitely the one using application-level data storing. Its processing time changes from 129 s to 322 s depending on the number of input tuples. The use of OS-level virtual memory makes the whole process take much more time. Only for 10 million input tuples and strongly limited JVM memory, which resulted in a very low usage of the virtual memory, did we obtain results slightly better than for the built-in data storing. However, for 15 million tuples the processing takes an extremely long time (the line going rapidly outside the chart). The main reasons for the low efficiency of virtual memory are the random memory accesses caused by updating aggregates in temporary buffers and by the Java garbage collector. Application-level storing accesses data files sequentially, and as a result this method is much more efficient. We did not finish the OS-level virtual memory tests for 20m and 25m tuples because they needed an extremely long time (several hours); our goal was only to show that application-level buffering can be much better than OS-level buffering.

4.2. Joining test

The joining test is based on the extraction graph shown in Fig. 8. The extractors read the same number of tuples: FE1 reads tuples with attributes (eID, date, depID), describing where each employee was working each day, whereas FE2 reads the set of tuples produced in the previous test (eID, date, sumVal, trCount).
The joining attributes are (eID, date), and processing times were measured for 10, 15 and 20 million input tuples from each extractor. During the test the computer was equipped with 256 MB of RAM; the JVM was assigned 100 MB of RAM when joining with on-disk data storing, and 400 MB and 600 MB respectively for 10 and 15 million tuples when using virtual memory. In this test we can still observe the benefits of application-level data storing, but the difference compared with OS virtual memory is not as big as in the grouping test, because this time the external file is accessed randomly, not sequentially. The obtained results are presented in Fig. 9.

4.3. Real extraction test

We also performed a real extraction test. The ETL process generates a star-schema data warehouse containing a fact table and two dimensions. In this test, both grouping and joining nodes appear in the extraction graph and run concurrently: when the grouping node GrT(2) produces output tuples, the joining node JoT(30) puts them into its internal buffer (memory or a disk file). This test lets us examine the behaviour of the buffering techniques when more than one node requires a lot of memory resources.

The size of the input data set was 300 MB. The JVM required 475 MB of RAM to complete the task using virtual memory, and only 100 MB when using application-level data storing. The computer had 256 MB of RAM. The ETL process using data storing took only 26 minutes, whereas when using virtual memory it needed 3 hours to complete only 10% of the whole task (the whole processing could have taken as much as 30 hours). Continuing the test made no sense, because we could already conclude that in this case the efficiency of the virtual memory was extremely low.

The main part of the extraction graph generating the star-schema data warehouse: path FE(1)-FI(32) generates the fact table, whereas path FE(1)-FI(5) is responsible for producing one of the dimension tables. Extractor FE(1) reads a 300 MB data file.

In our opinion the obtained results stem from random accesses to the VM swap file. When many nodes keep a lot of data in virtual memory and access it randomly (because each node runs as an independent thread), the swap file has to be read and written very often, at various locations. This does not take place during application-level buffering, where the external files are accessed sequentially whenever possible (depending on the algorithm that is used).

5. Conclusions

This paper presented a concept of describing extraction processes using graphs, together with the meaning of graph nodes and edges in the extraction process. We focused on a few implementation aspects, such as interconnections between nodes and the possibility of deadlock when particular graph structures are used. A method of avoiding deadlocks was also presented and described by mathematical formulas. Next, we introduced algorithms for external data queuing, grouping and joining.
Although not tested in this paper, the presented data queuing is an efficient method of avoiding the deadlocks that may occur in our ETL-DR extraction environment due to the data transferring method we use. The grouping transformation can process data sets of any size; the only limitation is the available temporary disk space. It makes use of additional tuple stream properties, such as sorted order according to the values of the grouping attributes. The joining transformation can also process an unlimited number of tuples. It can store its slave-input tuples in disk files in sorted order and then access any tuple in a file in log(n) time.

Our research shows that the virtual memory offered by operating systems is not always an efficient solution. Dedicated application-level algorithms that store data in external files are more efficient, because they eliminate random disk accesses, which are the weakest aspect of OS virtual memory. This weakness is especially pronounced in Java applications: a typical JVM prefers allocating new memory blocks to freeing unnecessary ones as soon as possible. This may be very efficient while only physical RAM is in use, but when the JVM enters the virtual memory area and the garbage collector tries to recover unused memory blocks from it, the efficiency of the whole application drops dramatically.
{"Source-Url": "http://journals.umcs.pl/ai/article/download/3050/2246", "len_cl100k_base": 5616, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 27307, "total-output-tokens": 6644, "length": "2e12", "weborganizer": {"__label__adult": 0.0002512931823730469, "__label__art_design": 0.0002484321594238281, "__label__crime_law": 0.0003483295440673828, "__label__education_jobs": 0.0007123947143554688, "__label__entertainment": 4.6253204345703125e-05, "__label__fashion_beauty": 0.00011813640594482422, "__label__finance_business": 0.0003647804260253906, "__label__food_dining": 0.00026297569274902344, "__label__games": 0.00033855438232421875, "__label__hardware": 0.0011138916015625, "__label__health": 0.00037932395935058594, "__label__history": 0.00019609928131103516, "__label__home_hobbies": 8.183717727661133e-05, "__label__industrial": 0.0005631446838378906, "__label__literature": 0.0001456737518310547, "__label__politics": 0.00019609928131103516, "__label__religion": 0.00028967857360839844, "__label__science_tech": 0.048675537109375, "__label__social_life": 7.867813110351562e-05, "__label__software": 0.01910400390625, "__label__software_dev": 0.92578125, "__label__sports_fitness": 0.00017881393432617188, "__label__transportation": 0.00036525726318359375, "__label__travel": 0.00015723705291748047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26760, 0.01643]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26760, 0.84946]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26760, 0.89639]], "google_gemma-3-12b-it_contains_pii": [[0, 2154, false], [2154, 4427, null], [4427, 6768, null], [6768, 8480, null], [8480, 10652, null], [10652, 13299, null], [13299, 14492, null], [14492, 16795, null], [16795, 18913, null], [18913, 20607, null], [20607, 21455, null], [21455, 24206, null], [24206, 26760, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2154, true], [2154, 4427, null], [4427, 6768, null], [6768, 8480, null], [8480, 10652, null], [10652, 13299, null], [13299, 14492, null], [14492, 16795, null], [16795, 18913, null], [18913, 20607, null], [20607, 21455, null], [21455, 24206, null], [24206, 26760, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26760, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26760, null]], "pdf_page_numbers": [[0, 2154, 1], [2154, 4427, 2], [4427, 6768, 3], [6768, 8480, 4], [8480, 10652, 5], [10652, 13299, 6], [13299, 14492, 7], [14492, 16795, 8], [16795, 18913, 9], [18913, 20607, 10], [20607, 21455, 11], [21455, 24206, 12], [24206, 26760, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26760, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
bb24b77f1e3d286c8f9d9fe7db03c1f9762e6363
### C++ Basics

- basic i/o: cout, cin
- comments
- variables: const, types, strong typing, casting
- operators: unary, binary
- Boolean expressions
- conditional: if, else, ?:
- loops: for, while
- arrays
- functions, call by value, prototypes, header files

---

### basic i/o: cout

We start by learning the essential, non-Object Oriented features of C++. No programming course is complete without the "Hello World" program:

```cpp
#include <iostream.h>   // HelloWorld.cc

int main()
{
    cout << "Hello World" << endl;
    return 0;
}
```

Anatomy of the program HelloWorld.cc:

1. `#include <iostream.h>`
   - (a) `#include` is a C pre-processor directive (more on those later) to "include" the file iostream.h
   - (b) the brackets `<...>` say that the file is in the "standard" include directory.
   - (c) iostream.h contains the standard C++ i/o class declarations and function prototypes (much more on that later).

2. `int main() {`
   - (a) all C++ programs **must** have a main
   - (b) main is actually a function (more on that later) of type int
   - (c) this main has no arguments; actually, main can have arguments – more on that later.
   - (d) the body of main (and every function) is contained within `{ }`.
   - (e) C++ is written free-form. *How we write the code is a matter of style.* **For space reasons on the slides, I will often break style guidelines.**

3. `cout << "Hello World" << endl;`
   - (a) `cout` is a pre-defined instance of the C++ output stream class – much more on that later. It is used for writing output to the tty.
   - (b) `<<` is the *insertion* operator: it inserts what follows into the stream `cout`.
   - (c) "Hello World" is the string we want to output.
   - (d) `endl` does 2 things:
     i. it writes a `<CR>`
     ii. it flushes the output buffer
   - (e) all statements in C++ must end with `;`

4. `return 0;`
   - (a) since `main` is of type `int`, it must return an `int` to the program that called it (the shell).
   - (b) a return value of `0` signifies successful completion (that's Unix, not C++)

5. `}` – *finally*, we mark the end of the `main` code block.

We compile and link HelloWorld.cc with the command:

```
% g++ -o HelloWorld HelloWorld.cc
```

and run it with the command:

```
% ./HelloWorld
```

Notes:

1. C++ files can have many extensions, e.g. .cc, .C, .cpp, .cxx and probably others. Some are part of the standard, some are expected by certain compilers. We will use .cc, which is both standard and works with g++.
2. The C++ version of gcc is invoked with "g++". Since g++ and gcc are really the same beast, look at the *zillions* of command line options:
   ```
   % man gcc
   ```
3. What is the name of the executable file if we omit "-o HelloWorld"?
4. From now on, we will *always* use the compiler switch "-Wall" (= warnings: all) to print all compiler warnings.
5. We are compiling and linking together; we could do the two steps separately.
6. Soon we will use "make" files, which are particularly useful for more complicated compilations.
7. For security, it is good to leave the current (working) directory out of the path; then we need to precede the executable name with "./".

### more i/o: cin

- C++ is symmetric between output, with cout, and input, with cin.
- Before using cin, we have to jump ahead to variables.
- All variables are strongly typed: a variable must be declared before it can be defined, or used.
- C++ supports several built-in types; the number of bits used for each variable is implementation dependent. Since we've already seen int with main, we'll stay with int.
On a 32-bit machine, int is usually a 32-bit signed integer.

- Let's write a program that reads in a number from the keyboard:

```cpp
#include <iostream.h>   // ReadNumber.cc

int main()
{
    cout << "Enter a number: " << ends;
    int i;
    cin >> i;
    cout << "You typed " << i << endl;
    return 0;
}
```

1. `cout << "Enter a number: " << ends;` here we don't want to write a <CR>, but we do need to flush the output buffer. This is done with ends.
2. `int i;` before we can use the integer i, we must declare it. The declaration can go anywhere in the same scope before i is used.
3. `cin >> i;`
   - (a) the predefined input stream object is cin
   - (b) the extraction operator is >>.
   - (c) we extract the integer from the stream cin into the integer i.
   - (d) when we type the <CR>, we automatically flush the stream.
4. `cout << "You typed " << i << endl;` we can use arbitrarily many << (or >>) on the same line.

### Comments

C++ supports 2 types of comment syntax:

1. A single-line comment with //, which can be anywhere on the line. E.g.:
   ```cpp
   // now we're going to type a message
   cout << "Enter a number: " << ends;
   int i;   // i is an integer
   ```
2. The C-style "block" comment, /* a comment */. This is useful for temporarily "commenting out" a block of code.

⚠️ Be careful using the 2 together, because // can comment out the /* or */.

Two comments about comments:

1. Use comments liberally to document your code. There are 2 reasons:
   - (a) so you can understand your own code 24 hours later
   - (b) so someone else (partner, TA, boss, successor) can understand it
2. Having said that, well-written C++ should be self-documenting.

### variables: types, strong typing, casting

The built-in types, and the minimum number of bits for a 32-bit architecture, are:

| Type | Bits | Description |
|--------|------|----------------|
| char | 8 | character |
| short | 16 | integer |
| int | 32 | integer |
| long | 64 | integer |
| float | 32 | floating point |
| double | 64 | floating point |

All but float and double can be modified by unsigned.

A variable must be declared before it is defined, but both can be done together. A variable can also be initialized with its declaration:

```cpp
int i;
int j = i;
int k = 2;
```

Variable names:
- case sensitive
- alphanumeric
- begin with a letter or _

---

### operators: unary, binary

C++ supports the usual binary operators: `+`, `-`, `*`, `/` (binary, because there are two operands).

```cpp
float a = 2.0;
float b = 5.0;
float c = 6.0;
float arg2 = b*b - 4.0*a*c;
```

Operator precedence follows the usual BODMAS rules; when in doubt, use parentheses. C++ requires strong typing.
---

Some operators are used so frequently that there is a convenient shorthand:

| Operator | Meaning |
|----------|-------------|
| a += b | a = a + b |
| a -= b | a = a - b |
| a *= b | a = a * b |
| a /= b | a = a / b |

Other miscellaneous binary operators:

| Operator | Meaning |
|----------|----------------------|
| a % b | modulus of a/b |
| a & b | bit-wise AND |
| a \| b | bit-wise OR |
| a ^ b | bit-wise XOR |
| a >> b | bit-wise right shift |
| a << b | bit-wise left shift |

C++ supports unary operators too; there is only one operand:

| Operator | Meaning | Comment |
|----------|----------------|---------|
| a++ | a = a + 1 | postfix |
| ++a | a = a + 1 | prefix |
| a-- | a = a - 1 | postfix |
| --a | a = a - 1 | prefix |
| ~a | 1's complement | |

### Casting

- Casting means converting one type to another. C++ does not enforce strong casting.
- An expression with mixed types will cast one type to another, sometimes with unexpected results.
- The unary operator (type) acting on a variable converts the variable to type type.
- When in doubt, use an explicit cast.

```cpp
#include <iostream.h>   // Unary.cc

int main()
{
    int i = 4;
    cout << "i = " << i << ", i++ = " << i++ << endl;
    cout << "i = " << i << ", ++i = " << ++i << endl;
    return 0;
}
```

```cpp
#include <iostream.h>   // Cast.cc

int main()
{
    int i = 4;
    int j = 5;
    cout << "i/j = " << i/j << endl;
    cout << "i/(float)j = " << i/(float)j << endl;
    return 0;
}
```

### const

- In C++, we use `const`, which creates a run time constant.
- A `const` cannot be altered once it is declared, so initialization (definition) must take place with declaration:

```cpp
#include <iostream.h>   // Const.cc

int main()
{
    const int i = 12;
    const float pi = 3.14159;   // also M_PI in math.h
    cout << "i = " << i << ", pi = " << pi << endl;
    return 0;
}
```

### Boolean expressions

- A Boolean expression evaluates to either "true" (if any bit is set), or "false" (if all bits are zero).
- C++ supports a Boolean type, with values `true` and `false`.
- Boolean expressions are formed with Boolean operators.
- Parentheses should be used to resolve ambiguities in operator precedence.

```cpp
#include <iostream.h>   // Boolean.cc

int main()
{
    int i = 12;
    int j = 137;
    cout << "i==j " << (i==j) << endl;
    cout << "(i>j) || 1 " << ( (i>j) || 1 ) << endl;
    return 0;
}
```

### conditional: if, else, ?:

Armed with the ability to form logical expressions, we can now do conditional execution, i.e. execution conditional upon the truth of a Boolean variable or expression.

Points to note:

1. #include <math.h> is used for the math functions. We've sneakily introduced functions – more on them later.
2. if {} is the conditional construction. The Boolean expression is evaluated; if it resolves to true, the statements inside {} are executed.
3. since C++ is written free form, statements can be written across several lines.
4. if only one statement follows the if, the {} are not necessary (though recommended).

Very often, we not only want to execute statements after the if, but to do something else if the Boolean is *not* true. This is done with else.

```cpp
#include <iostream.h>   // If.cc
#include <math.h>

int main()
{
    cout << "Enter 3 numbers: " << ends;
    float a, b, c;
    cin >> a >> b >> c;
    float arg2 = b*b - 4.0*a*c;
    if ( (arg2 > 0.0) && (a != 0.0) ) {
        float arg = sqrt(arg2);
        cout << "Roots are: " << (-b + arg)/(2.0*a)
             << " and " << (-b - arg)/(2.0*a) << endl;
    }
    return 0;
}
```

```cpp
#include <iostream.h>   // IfElse.cc
#include <math.h>

int main()
{
    cout << "Enter 3 numbers: " << ends;
    float a, b, c;
    cin >> a >> b >> c;
    float arg2 = b*b - 4.0*a*c;
    if ( (arg2 > 0.0) && (a != 0.0) ) {
        float arg = sqrt(arg2);
        cout << "Roots are: " << (-b + arg)/(2.0*a)
             << " and " << (-b - arg)/(2.0*a) << endl;
    }
    else
        cout << "I can't evaluate these roots" << endl;
    return 0;
}
```

Finally, the statement after an else can be another if:

```cpp
#include <iostream.h>   // IfElseIf.cc

int main()
{
    cout << "Enter a number: " << ends;
    int i;
    cin >> i;
    if (i < 0)
        cout << "number is < 0" << endl;
    else if (i == 0)
        cout << "number is == 0" << endl;
    else
        cout << "number is > 0" << endl;
    return 0;
}
```

The construction

```
if (condition)
    do something
else
    do something else
```

is used so often to evaluate an expression that there is a shorthand operator:

```cpp
a = (condition) ? b : c;
```

The condition is first evaluated.
- If it is true, a is set equal to the value of the expression b.
- If it is false, a is set equal to the value of the expression c.

### loops: for, while

To execute a block of code a number of times, or while some condition holds true, C++ provides the for and while loops.

```cpp
#include <iostream.h>   // For.cc

int main()
{
    cout << "Enter how many times to run loop: " << endl;
    int n;
    cin >> n;
    for (int i = 0; i < n; i++) {
        cout << "i, i*i = " << i << " " << i*i << endl;
    }
    return 0;
}
```

Points to note:

1. The for expression has 3 parts, separated by ';'s:
   - (a) setting an initial value (int i=0),
   - (b) a termination condition (i<n),
   - (c) an action at the end of each iteration (i++).
   Any or all of these parts may be omitted, but the ";" is still necessary.
2. The code block to be executed is contained within { }. If there is only 1 statement, the { } can be omitted, but shouldn't be.
3. The for loop parameter, i, is valid only within the scope of the loop, i.e. within { }.

Sometimes, we want to execute a block of code while a condition holds true. This is done with a while loop:

```cpp
#include <iostream.h>   // While.cc
#include <math.h>

int main()
{
    float a = 0.0;
    while ( a >= 0.0 ) {
        cout << "sqrt( " << a << " ) = " << sqrt(a) << endl;
        cout << "Enter a +ve number; -ve to end: " << ends;
        cin >> a;
    }
    return 0;
}
```

Sometimes we want a clean way of either skipping an iteration, or breaking out of a loop when some condition is met. This is done with the continue and break statements:

```cpp
#include <iostream.h>   // ForContinue.cc

int main()
{
    int n = 50;
    for (int i = 0; i < n; i++) {
        if ( !(i % 7) ) continue;   // skip multiples of 7
        cout << "i, i+i = " << i << ", " << i+i << endl;
    }
    return 0;
}
```

We could even have an endless loop (which is often useful). We might break out of the loop with ^C.

```cpp
#include <iostream.h>   // Endless.cc

int main()
{
    long i = 0;
    while (1)
        cout << "This is the " << i++ << "th iteration" << endl;
    return 0;
}
```

Note that since "1" has one bit set, it always evaluates to TRUE.
*Style:* this smells awfully like goto, so it should be avoided where possible.

```cpp
#include <iostream.h>   // EndlessBreak.cc
#include <time.h>

int main()
{
    long i = 0;
    while (1) {
        cout << "This is the " << i++ << "th iteration" << endl;
        if ( clock() > 10 ) break;
    }
    return 0;
}
```

In this last example, we keep going indefinitely, until the used CPU time exceeds a certain number of ticks. Clearly, there is no unique way to do what we want to do: the combination of for, while, continue, break is a matter of style.

### arrays

Now that we can use loops, we can also use arrays.

```cpp
#include <iostream.h>   // Array.cc
#include "stdlib.h"     // to fix SunOS

int main()
{
    const int kArraySize = 10;
    float a[kArraySize], b[kArraySize];
    for (int i = 0; i < kArraySize; i++) {
        a[i] = rand()/(float)RAND_MAX;
        b[i] = rand()/(float)RAND_MAX;
    }
    for (int i = 0; i < kArraySize; i++) {
        cout << "element " << i << ": a, b, a+b "
             << a[i] << ", " << b[i] << ", " << a[i]+b[i] << endl;
    }
    return 0;
}
```

Notes:

1. To generate random numbers:
   - (a) the header file stdlib.h is needed.
   - (b) rand() is the random number generator; it returns an int.
   - (c) to convert to a float in the range 0 <= x <= 1, divide by RAND_MAX, using a cast.
2. a const is used for the array size; its value is known at compile time.
3. the operator [] is used to declare and access elements of the array.
4. the array's first element is array[0].
5. the index i is only valid within the scope of the for loop, so it can be "recycled" in subsequent loops.

### multi-dimensional arrays

With the correct use of data structures, we actually use them much less than we'd think. But for some applications (e.g. matrices) they are still useful. A multi-dimensional array is really an array of arrays:

```cpp
float matrix[4][7];   // matrix[row][column]
```

i.e. matrix is an array of 4 rows, each an array of 7 columns; the rightmost subscript changes the fastest.

### functions, call by value, prototypes

We have sneakily used a few functions already.

- A function has a type:
  - void – no type
  - a built-in type: int, float, long etc.
  - a user-defined type (see later)
- A function returns a value, unless the function is of type void.
- A function can take zero, 1 or several arguments.
- The function arguments are passed by value from the calling program to the function. This means the function has its own copy of the parameters, and does not change the calling program's variables.
- The parameter names are only valid within the scope of the function (except for global parameters – avoid, but see later).
- Each function has a unique signature composed of:
  - the function's name
  - the function's class – see later
  - the function's argument types
  The function's return type is not part of the signature (see later for function overloading).
- A function must be:
  1. first declared (or prototyped)
  2. then defined (or implemented) – this can be done with the declaration
  3. then invoked in the body of the code
A first example:

```cpp
#include <iostream.h>   // Function1.cc

void printMe(float x)
{
    cout << "Number is: " << x << endl;
}

int main()
{
    float a;
    while (1) {
        cout << "Enter a number: " << ends;
        cin >> a;
        printMe(a);
    }
    return 0;
}
```

Points to note:

1. The function printMe must be declared before its use.
2. The function can be defined at declaration time (but it doesn't have to be).
3. The parameter type(s) must be specified in the declaration. Note that x is a dummy parameter: any name would do, since it is local to printMe.
4. The function printMe is of type void.
5. Since it is void, there is no return value.
6. The argument passed to the function is a.
7. Since printMe is void, it is not used in an assignment statement.

What happens if `printMe` is passed some variable that is not a float?

```cpp
#include <iostream.h>   // Function2.cc
#include <stdlib.h>

void printMe(float x)
{
    cout << "Number is: " << x << endl;
}

int main()
{
    while (1)
        printMe(rand());
    return 0;
}
```

The absence of strong casting is a double-edged sword: it allows this to work, but may not always give the result we intended. (See later for function overloading and template functions.)

---

We can see explicitly that the parameters really are passed by value:

```cpp
#include <iostream.h>   // Function3.cc

int incrementMe(int x)
{
    return x + 1;
}

int main()
{
    int i = 4;
    int j = incrementMe(i);
    cout << "i,j: " << i << ", " << j << endl;
    return 0;
}
```

---

### Function Overloading

Since functions with a different signature are considered different functions, we can use this to overload function names:

```cpp
#include <iostream.h>   // Function4.cc

float halveMe(float x) { return x/2.; }
int   halveMe(int x)   { return x/2; }

int main()
{
    cout << "float halveMe: " << halveMe(5.0f) << ", "
         << "int halveMe: " << halveMe(5) << endl;
    return 0;
}
```

**Notes:**

1. The functions `float halveMe(float)` and `int halveMe(int)` really are 2 different functions. The compiler decides which to use based on the signature.
2. `5.0` *without* an `f` is a double, so the compiler wouldn't know which function to use. We can either specify `5.0f` as a float, or cast 5 to a float with `(float)5`.
3. The return type is not part of the signature, so it cannot be used to resolve ambiguities.

Usually, we want to keep the function *definitions* in a separate file from their use. In this case, we must still *declare* each function by specifying its signature, or function prototype. This will usually be done in a header file. We will do this from now on.

```cpp
#include "util.hh"   // Function5.cc

int main()
{
    cout << "float halveMe: " << halveMe(5.0f) << ", "
         << "int halveMe: " << halveMe(5) << endl;
    return 0;
}
```

Notes:

1. We have put the function declarations in the file util.hh (we choose to use the .hh suffix to signify C++ header files).
2. Since util.hh is not a standard header file, it is enclosed in "..." not <...>.
3. We have chosen to put iostream.h inside util.hh, since we know we'll always need it.

Let's look at util.hh:

```cpp
#ifndef __UTIL__HH   // util.hh
#define __UTIL__HH

#include <iostream.h>

float halveMe(float);
int halveMe(int);

#endif   // __UTIL__HH
```

Points to note:

1. `#ifndef __UTIL__HH`: we only want to include the header file once, so we enclose it in an #ifndef ... #endif block. This is a C pre-processor directive.
2. `#define __UTIL__HH`: we then define a compile-time variable to prevent subsequent inclusions.
3. `float halveMe(float);`: the function prototype does not need to specify the actual parameter names, but it must specify the types. (That is the purpose.)

Since we didn't put the function definitions in the header file, we'll define them in util.cc.

### System Functions

C++ uses the standard C functions, as well as C++ ones. There are several families of functions, with their associated header files:

- "Standard" C functions. These are documented in e.g. K&R. The include files are usually in /usr/include or /usr/local/include. Note that many of these are made redundant or obsolete by C++.
They are usually documented in the Unix man pages.
- "Standard" C++ functions. These are documented in e.g. E&S. The include files are usually (for gcc) in /usr/include/g++ or /usr/local/include/g++. They will make more sense once we've covered more C++.
- "System" C functions. These are C functions specific to the Operating System. In the case of Unix, there will be a core set of "Posix-compliant" functions, plus additional OS-specific functions. The "Posix-compliant" functions will often be defined on non-Posix systems, but it is not guaranteed. The include files are usually in /usr/include or /usr/local/include. They are usually documented in the Unix man pages.
- Library functions. These are C or C++ functions provided as part of a library. They may be used for e.g. graphics, database applications, etc.

Notes on util.cc:

1. We must also include the same header file; even when it's not technically needed, it forces consistency between the declarations and the definitions.
2. Now we define the functions with the actual parameters.
3. If we change the function signature, we are forced to change both the header file and the implementation.
4. To build the executable, we can either compile both files together:
   ```
   % g++ -Wall -o Function5 Function5.cc util.cc
   ```
   or else first compile util.cc to make an object file, and then link:
   ```
   % g++ -Wall -c util.cc
   % g++ -Wall -o Function5 Function5.cc util.o
   ```

### Conclusion

You now know enough C++ to write pretty much any non-Object Oriented program.
{"Source-Url": "http://users.ece.utexas.edu/~adnan/C++/Lecture-1.4up.pdf", "len_cl100k_base": 6371, "olmocr-version": "0.1.53", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 100076, "total-output-tokens": 7124, "length": "2e12", "weborganizer": {"__label__adult": 0.00041365623474121094, "__label__art_design": 0.00029158592224121094, "__label__crime_law": 0.00019073486328125, "__label__education_jobs": 0.0005221366882324219, "__label__entertainment": 5.632638931274414e-05, "__label__fashion_beauty": 0.00012612342834472656, "__label__finance_business": 0.00011664628982543944, "__label__food_dining": 0.00042366981506347656, "__label__games": 0.0005784034729003906, "__label__hardware": 0.00083160400390625, "__label__health": 0.00021851062774658203, "__label__history": 0.00016450881958007812, "__label__home_hobbies": 8.481740951538086e-05, "__label__industrial": 0.00028586387634277344, "__label__literature": 0.00017154216766357422, "__label__politics": 0.00015723705291748047, "__label__religion": 0.0004122257232666016, "__label__science_tech": 0.0014934539794921875, "__label__social_life": 6.657838821411133e-05, "__label__software": 0.00278472900390625, "__label__software_dev": 0.98974609375, "__label__sports_fitness": 0.0003402233123779297, "__label__transportation": 0.0004477500915527344, "__label__travel": 0.000240325927734375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21820, 0.0156]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21820, 0.66179]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21820, 0.80228]], "google_gemma-3-12b-it_contains_pii": [[0, 1448, false], [1448, 3211, null], [3211, 5015, null], [5015, 5762, null], [5762, 6787, null], [6787, 7952, null], [7952, 8867, null], [8867, 10462, null], [10462, 11544, null], [11544, 13185, null], [13185, 15113, null], [15113, 16917, null], [16917, 18803, null], [18803, 19963, null], [19963, 21731, null], [21731, 21820, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1448, true], [1448, 3211, null], [3211, 5015, null], [5015, 5762, null], [5762, 6787, null], [6787, 7952, null], [7952, 8867, null], [8867, 10462, null], [10462, 11544, null], [11544, 13185, null], [13185, 15113, null], [15113, 16917, null], [16917, 18803, null], [18803, 19963, null], [19963, 21731, null], [21731, 21820, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21820, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21820, null]], "pdf_page_numbers": [[0, 1448, 1], [1448, 3211, 2], [3211, 5015, 3], [5015, 5762, 4], [5762, 6787, 5], [6787, 7952, 6], [7952, 8867, 7], [8867, 10462, 8], [10462, 11544, 9], [11544, 13185, 10], [13185, 
15113, 11], [15113, 16917, 12], [16917, 18803, 13], [18803, 19963, 14], [19963, 21731, 15], [21731, 21820, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21820, 0.0538]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
a2a50bd9f0f39e704ae26f0f66ed5e479ce19bba
Solving Sudoku Puzzles using Backtracking Algorithms

Jonathan Christopher / 13515001
Program Studi Teknik Informatika
Institut Teknologi Bandung
Bandung, Indonesia
13515001@students.itb.ac.id

Abstract—Sudoku is a popular puzzle consisting of a 9x9 grid of squares which must be filled using the digits 1-9 according to several constraints. This paper investigates two backtracking approaches for finding solutions to Sudoku puzzles, naïve backtracking and backtracking with constraint propagation, as well as other techniques that may be used to optimize solving algorithms.

Keywords—Sudoku; backtracking; constraint satisfaction; constraint propagation

I. INTRODUCTION

A Sudoku puzzle is a type of puzzle in the form of a square grid. The grid consists of 81 square cells, arranged in 9 rows and 9 columns. The grid itself is further divided into nine non-overlapping 3x3 subgrids. Initially, only some of the cells in the grid are filled. To solve the puzzle, one must fill in the remaining cells with the digits 1 to 9 such that each digit is unique in each row, column and subgrid.

Sudoku puzzles were likely invented in the 1970s by Howard Garns. The puzzle was popularized in Japan by a magazine in 1984; it was during this time that it got its name, which in Japanese roughly means 'single numbers'. Sudoku started to become popular in other countries when a man named Wayne Gould created a computer program that could generate Sudoku puzzles, and proposed for them to be published by daily newspapers [1].

Solving a Sudoku puzzle is a constraint satisfaction problem, as one seeks a solution whose digits fulfill the row, column and subgrid uniqueness constraints. Many types of algorithms can be used to solve this kind of problem, ranging from simply brute-forcing the digits up to non-deterministic algorithms like genetic programming. However, the algorithms differ in their time and space complexities, so the choice of algorithm and optimizations is important in creating a Sudoku solver.

Although a 9x9 grid might not seem big, the number of permutations of digits required to fill it grows exponentially. A simplistic brute-force solver that generates all possible solutions would not be able to solve Sudoku puzzles in a reasonable length of time, as there are approximately 6,670,903,752,021,072,936,960 possible valid Sudoku grids [1]. Smarter algorithms that need less time and memory are required.

Moreover, solving Sudoku puzzles has been shown to be an NP-complete problem by Takayuki Yato and Takahiro Seta from the University of Tokyo [1]. This means that there is currently no known polynomial-time deterministic algorithm for solving Sudoku puzzles. It also means that if a polynomial-time solution for Sudoku is discovered, that algorithm could also be applied to solve other NP-complete problems, many of which have more important uses in the real world.

This paper focuses on using backtracking algorithms to solve Sudoku puzzles. Backtracking is chosen because, with the right optimizations, it can produce solutions in reasonable time and memory space. Two approaches are considered: naïve backtracking, which fills in each cell one by one, and backtracking with constraint propagation, which executes non-branching operations in between node visits to fill in cells whose values are already determined by other cells.
Fig. 1. A Sudoku puzzle (hard-difficulty sample puzzle used in the author's experiment, from the New York Times 18 May 2017 daily puzzle: https://www.nytimes.com/crosswords/game/sudoku/hard).

II. BACKTRACKING ALGORITHMS

Backtracking as a general search technique was described by R.J. Walker, Golomb, and Baumert. In backtracking, a solution is constructed in a depth-first pattern from a sequence of decisions. If a decision leads to a dead end, from which no valid solution can be constructed, the current sequence of decisions is partially unwound, up to a point from which a different decision can be tried [2].

The algorithm starts from a root node, which is typically an empty solution. This node is then expanded to reveal other nodes (partial solutions) which are only a step or decision away from the current node. A newly expanded node is then selected as the next current node. The new current node is checked to determine whether the partial solution it represents can be part of a valid full solution. If not, the current node is pruned or cut off. Otherwise, it is recursively expanded, just like the previous current node.

Using backtracking for search is an improvement over brute-force techniques, such as exhaustive search. Brute-force approaches need to consider all solutions. The number of possible solutions is usually very large, as most solutions are permutations or combinations of some form, whose count increases exponentially. In backtracking, a large fraction of these possible solutions can be pruned at every step, as soon as it becomes known that a partial solution cannot be part of a valid full solution. Pruning helps to quickly reduce the number of solutions which must be generated and checked.

There are several general properties of backtracking approaches [2]:

1. Problem solution. A problem solution is represented by an n-tuple of per-step decisions \( X = (x_1, x_2, \ldots, x_n) \), \( x_i \in S_i \). Each \( x_i \) represents a decision taken when constructing the solution, drawn from the set of possible decisions \( S_i \). The sets of possible decisions might differ from step to step, but they could also be the same.

2. Generating function. A generating function \( T(k) \) generates all possible decisions that can be taken at step \( k \). One of the generated values is then assigned to \( x_k \), the decision made to extend the current partial solution.

3. Bounding function. The bounding function \( B(x_1, x_2, \ldots, x_k) \) checks whether a partial solution can lead to a valid full solution. If yes, it returns true and the algorithm proceeds to generate further decisions from this step; otherwise, the partial solution is pruned and discarded.

The set of all complete solutions for a problem is called the solution space. Each complete solution is represented as an n-tuple containing exactly the number of decisions required to reach a full solution. The solution space can also be organized as a state space tree. A tree is a graph which does not contain cycles. Every vertex (node) of the tree represents a state or step in constructing the solution to the problem, while each edge represents a decision made at a step, which transforms that state into a different state in the next step. A path starting from the root of the tree down to a leaf vertex represents a full solution; the set of all such paths is the solution space. A valid solution to the problem is sought by finding a path through the state space tree, starting from the root and ending at a valid leaf vertex. A solution is constructed by traversing the tree in depth-first order.
In traversing the tree, the vertices currently being visited are called live nodes, and a live node whose children are currently being generated is called the expand node. Every time a vertex is expanded through an edge, the path from the root vertex to it becomes longer. A vertex is only expanded if, when checked by the bounding function, it is still found to lead to a valid full solution. Otherwise, it is 'killed' or pruned and becomes a dead node, which will not be expanded again.

When a vertex has been fully expanded and cannot be expanded any more, but a full solution has not been found, the search backtracks and returns to the parent of the vertex. The search ends when either all vertices have been visited, or a valid full solution has been found. A valid full solution is called a goal node in the state space tree [2].

A backtracking algorithm is commonly implemented recursively, as tree traversal is inherently recursive in nature. Its base case is when the search reaches a leaf node, at which point it checks whether the full solution that has been constructed is valid or not. Its recurrence enumerates all decisions (tree edges) that can be taken from the current state (tree vertex), then visits each of them unless the bounding function eliminates it.

Fig. 2. Depth-first traversal/search of a state space tree. (source: http://www.w3.org/2011/Talks/01-steven-phenotype)

As backtracking is usually recursive in nature, its worst-case time and space complexity can be calculated using complexity theorems related to recursive functions. Its time complexity is exponential or factorial, and depends on the time taken to compute each vertex and the average number of edges leading away from each vertex. Its worst-case time is thus not very different from that of brute-force algorithms. However, a good bounding function will greatly reduce the average-case time and space complexity by increasing the number of vertices pruned.

Backtracking algorithms are commonly used to solve NP-hard problems, which currently have no known polynomial-time solution. Backtracking algorithms are not polynomial-time solutions, since their worst-case complexity is exponential; but in most cases, with a good choice of bounding function, backtracking will provide an exact solution to most instances of those problems in reasonable time.

III. SOLVING SUDOKU PUZZLES USING NAIVE BACKTRACKING

A Sudoku puzzle is an ideal candidate for backtracking for several reasons. First, it is too hard to simply brute-force, as the size of its solution space is prohibitively large. Second, its solution can be constructed from a sequence of decisions: which digit to pick for each remaining empty cell.

To solve Sudoku puzzles using naive backtracking, the solution structure, generating function, and bounding function must first be defined.

The solution structure for solving Sudoku puzzles is an n-tuple \( X = (x_{11}, x_{12}, \ldots, x_{99}) \), \( x_{ij} \in \{0, 1, 2, \ldots, 9\} \). Each cell value \( x_{ij} \) represents the digit in the cell on the \( i \)-th row and \( j \)-th column of the grid. At each state, each cell can contain the digits 1 to 9 or the value 0, which represents an empty cell. Initially, all cells are marked empty, except the cells whose digits are known from the problem.

Partial solutions are constructed as the algorithm runs. A partial solution still contains empty cells; a full or completed solution has no empty cells remaining.
The generating function \( T(X) \) for a Sudoku puzzle receives a partially filled Sudoku grid and outputs all possible digits that can be filled in for each empty cell. It is implemented by first creating an n-tuple \( A = (a_{11}, a_{12}, \ldots, a_{99}) \), \( 1 \leq i \leq 9, 1 \leq j \leq 9 \), where each \( a_{ij} \) is a set initially containing the digits 1 to 9 for empty cells, or the empty set for filled cells. Each set \( a_{ij} \) represents the digits which may be placed in the corresponding cell if it is not yet filled. Then, for each filled cell \( x_{ij} \) in \( X \), the digit contained in that cell is deleted from every set \( a \) in row \( i \), in column \( j \), and in the subgrid containing the cell. After this process ends, only the digits that can be filled in legally according to the Sudoku constraints remain in the set for each empty cell.

For solving Sudoku puzzles, the bounding function used is simply a function checking the constraints that must hold for a valid Sudoku grid. It receives a partially filled Sudoku grid and returns true or false depending on whether the given grid fulfills all of the Sudoku constraints. These constraints alone prune a lot of invalid solutions, without having to resort to other heuristics. The function is implemented by looking through each filled cell and checking whether its value occurs in any of the other cells in the same row, column or subgrid. If a duplicate digit is found, the function halts and returns false; otherwise, it continues to check the other cells. If it successfully checks all filled cells, it returns true.

A recursive implementation of a naive backtracking solver typically consists of the implementations of the generating and bounding functions, and a recursive solve(grid) function. The solver function is called with the problem grid as a parameter, given as a two-dimensional array. It first checks whether the given Sudoku grid is valid, using the bounding function. If it is, it checks whether the grid has any empty cells left; if not, the Sudoku puzzle is solved.

If the Sudoku grid is not completely solved yet, the solver function proceeds to generate the sets of digits available for every empty cell in the grid, using the generating function. It then picks an empty cell and expands the state of the current grid by copying it and filling in the corresponding cell of the copy with a digit from that cell's set of available digits. The copied grid is then passed recursively to the solver function. As the partially filled grids are passed deeper into the recursion tree, a state must eventually be reached in which the grid is invalid or fully filled, as there are only 81 cells in the grid and at each recursion step a cell is filled.

If a valid solution is found, the solver function returns the fully solved Sudoku grid, which in turn cascades up through the recursion tree. If an invalid grid or partial solution is found, the solver function returns false, prompting the solver to backtrack as the current instance of the function is popped from the top of the call stack. The solver then attempts another digit from the available digits set. The solver also backtracks when the available digits set is exhausted, since every cell must eventually receive a digit. If there is no solution for the given Sudoku grid input, the solver will continuously backtrack until the topmost instance of the solver function (the one called by the main program) also returns false.
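The description above maps directly onto a short recursive routine. The paper does not list the author's source code, and the implementation language is not stated, so the following is a minimal, self-contained Java sketch of the naive solver. For brevity it fills the grid in place and checks each candidate placement directly, rather than copying the grid and re-running a full-grid bounding check at every call, as in the paper's description.

```java
// Minimal Java sketch of the naive backtracking solver described above.
// isValid() plays the role of the bounding function specialized to one
// candidate placement; 0 marks an empty cell. Names are illustrative.
public class NaiveSudoku {

    static boolean solve(int[][] g) {
        for (int r = 0; r < 9; r++)
            for (int c = 0; c < 9; c++)
                if (g[r][c] == 0) {
                    for (int d = 1; d <= 9; d++)
                        if (isValid(g, r, c, d)) {   // prune digits that break a constraint
                            g[r][c] = d;
                            if (solve(g)) return true;
                            g[r][c] = 0;             // backtrack: undo and try next digit
                        }
                    return false;                    // available digits exhausted
                }
        return true;                                 // no empty cells: goal node reached
    }

    // Digit d may go at (r, c) only if it does not already occur
    // in the same row, column, or 3x3 subgrid.
    static boolean isValid(int[][] g, int r, int c, int d) {
        for (int k = 0; k < 9; k++)
            if (g[r][k] == d || g[k][c] == d) return false;
        int br = 3 * (r / 3), bc = 3 * (c / 3);
        for (int i = br; i < br + 3; i++)
            for (int j = bc; j < bc + 3; j++)
                if (g[i][j] == d) return false;
        return true;
    }

    public static void main(String[] args) {
        int[][] g = new int[9][9];   // all zeros: the empty puzzle
        System.out.println("solved: " + solve(g));
        for (int[] row : g) System.out.println(java.util.Arrays.toString(row));
    }
}
```

Calling solve on the all-empty grid demonstrates the search: the solver fills in some valid completed Sudoku grid and returns true.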
IV. SOLVING SUDOKU PUZZLES USING BACKTRACKING WITH CONSTRAINT PROPAGATION

The naive backtracking approach is often sufficient for solving most Sudoku puzzles. However, there are several harder problem instances which may require more time and memory when solved this way. To be able to solve even more puzzles in reasonable time, one can use a better approach to backtracking.

The approach of backtracking with constraint propagation is inspired by the way humans solve intermediate to hard Sudoku puzzles. A human doing a Sudoku puzzle will not naively 'guess' digits the way the naive backtracking solver does. Humans have very limited short-term memory relative to modern computers, and thus try to avoid having to backtrack as much as possible, since backtracking is cumbersome to do on paper.

Typically, humans first try to fill in 'fixed' or non-ambiguous cells. 'Fixed' cells are empty cells whose value can be directly inferred from the values of filled cells in the same row, column, or subgrid. They have exactly one valid digit in their available digits set, so filling them in requires no branching or backtracking. As filling in a cell can provide more clues to the values of neighboring cells, filling in one cell can lead to a cascade of cells being filled.

The action of filling 'fixed' cells without branching is called constraint propagation. Constraint propagation causes cell values which are directly determined by the current values of neighboring cells to be updated all at once. It can be integrated with backtracking before generating expansions from the current state. This ensures that 'fixed' cells in the current state have been filled, reducing the number of empty cells and increasing the number of cells filled at each step of the recursion. Unlike the expansion done when constructing solutions, constraint propagation does not need much memory. It also reduces the number of nodes/states visited by the backtracking solver, since every step of the algorithm can now fill in more than one cell of the grid. The effect is to reduce the average memory used, and also the time taken to find a valid solution.

Constraint propagation is implemented in a procedure named reduce(grid). This procedure takes a Sudoku grid and applies constraint propagation to it. First, it finds the set of available digits for each empty cell, just as the generating function does. Next, it checks whether any of the cells has only one possible digit; if so, the cell is filled in with that digit. After that, the sets of available digits are recalculated for all empty cells. This process is repeated until there are no more empty cells with only one possible digit. At that point there are no more 'fixed' cells, meaning that to fill in more empty cells we must guess between at least two values and branch.
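A compact sketch of reduce(grid) in the same illustrative Java style, again not the author's code; the candidates helper plays the role of the generating function restricted to a single cell.

```java
import java.util.*;

// Sketch of constraint propagation (reduce): repeatedly fill every cell
// that has exactly one legal candidate, until no 'fixed' cell remains.
class Reduce {

    static void reduce(int[][] g) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int r = 0; r < 9; r++)
                for (int c = 0; c < 9; c++)
                    if (g[r][c] == 0) {
                        List<Integer> cand = candidates(g, r, c);
                        if (cand.size() == 1) {      // 'fixed' cell: fill without branching
                            g[r][c] = cand.get(0);
                            changed = true;          // may unlock further fixed cells
                        }
                    }
        }
    }

    // All digits not yet used in the row, column, or subgrid of (r, c).
    static List<Integer> candidates(int[][] g, int r, int c) {
        boolean[] used = new boolean[10];            // index 0 (empty) is ignored
        for (int k = 0; k < 9; k++) { used[g[r][k]] = true; used[g[k][c]] = true; }
        int br = 3 * (r / 3), bc = 3 * (c / 3);
        for (int i = br; i < br + 3; i++)
            for (int j = bc; j < bc + 3; j++) used[g[i][j]] = true;
        List<Integer> out = new ArrayList<>();
        for (int d = 1; d <= 9; d++) if (!used[d]) out.add(d);
        return out;
    }
}
```

Calling reduce(grid) before each expansion step gives the constraint-propagating variant of the solver: the backtracking search only branches once no singleton cells remain.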
V. COMPARING AND OPTIMIZING BACKTRACKING APPROACHES

To compare the naive and constraint-propagating backtracking approaches, the author ran a small experiment with three sample Sudoku puzzles of varying difficulty, taken from the New York Times' 16 May 2017 daily puzzles¹. Each puzzle was run using both approaches. For each execution, the execution time and the number of nodes visited by the backtracking algorithm were recorded.

| Sudoku Puzzle / Approach | Execution Time (s) | Nodes visited |
|---|---|---|
| Easy (naïve backtracking) | 0.0310001373291 | 45 |
| Easy (with constraint propagation) | 0.00099997270275 | 1 |
| Medium (naïve backtracking) | 0.150000095367 | 299 |
| Medium (with constraint propagation) | 0.029000043869 | 59 |
| Hard (naïve backtracking) | 0.878999948502 | 1702 |
| Hard (with constraint propagation) | 0.229000091553 | 430 |

Fig. 4. Execution times and visited-node counts of the author's Sudoku solver implementation.

The experiment results show that backtracking with constraint propagation consistently executes faster than naive backtracking, especially for easy puzzles. Even for hard puzzles, constraint propagation improves the execution time, making the solver around 4 times as fast as the naive backtracking algorithm on the corresponding puzzles.

Besides the reduction in execution time, the number of nodes visited during the search is also reduced when using constraint propagation. This is as predicted, since constraint propagation reduces the number of empty cells which must be searched. As recursive search and backtracking impose a greater overhead than a simple loop, reducing the amount of recursion branching in the search algorithm can significantly improve its performance.

Another heuristic investigated by the author is minimum remaining values variable ordering. Using this heuristic, when selecting the empty cell to fill after generating the available digits, a cell with the smallest number of possible digits is selected [2]. The purpose of this heuristic is to maximize the probability of picking the correct digit for the chosen cell. For example, when picking a cell with five possible digits, each digit has a probability of only 0.2 of being the correct digit for that cell; but when picking a cell with only two possible digits, each digit has a probability of 0.5 of being correct.

| Sudoku Puzzle / Approach | Execution Time (s) | Nodes visited |
|---|---|---|
| Easy (naïve backtracking) | 0.0299999713898 | 44 |
| Easy (with constraint propagation) | 0.000999927520752 | 1 |
| Medium (naïve backtracking) | 0.204999923706 | 412 |
| Medium (with constraint propagation) | 0.0639998912811 | 126 |
| Hard (naïve backtracking) | 0.411000013351 | 772 |
| Hard (with constraint propagation) | 0.108000040054 | 200 |

Fig. 5. Execution times and visited-node counts of the author's Sudoku solver implementation, with minimum-available-digits-first variable ordering enabled.

The minimum remaining values variable ordering heuristic does sometimes help in reducing the number of nodes visited and thus the execution time. However, in some cases, such as the medium-difficulty sample puzzle, it has the opposite effect of slightly increasing the number of nodes visited and the execution time. More experiments on a representative set of sample problems are needed before the usefulness of this particular heuristic can be established.
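In code, the heuristic only changes how the next cell to fill is chosen. A hypothetical helper, written as an addition to the Reduce sketch above so that its candidates method is in scope:

```java
// Addition to the Reduce sketch above: minimum-remaining-values (MRV)
// cell selection. Among all empty cells, return one with the fewest legal
// candidates as {row, col}, or null if the grid has no empty cell.
static int[] pickMrvCell(int[][] g) {
    int[] best = null;
    int bestCount = 10;                 // larger than any candidate set can be
    for (int r = 0; r < 9; r++)
        for (int c = 0; c < 9; c++)
            if (g[r][c] == 0) {
                int n = candidates(g, r, c).size();
                if (n < bestCount) { bestCount = n; best = new int[]{r, c}; }
            }
    return best;
}
```

The solver then branches on the returned cell instead of the first empty cell it encounters, which raises the chance that each guessed digit is correct.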
VI. CONCLUSION

Backtracking is a suitable approach for solving Sudoku puzzles, due to the relatively large size of the puzzle's state space. Backtracking can solve most Sudoku puzzles in reasonable time by pruning, as it searches, large parts of the state space tree that do not contain a solution.

From the author's experiment, it can be concluded that constraint propagation improves the performance of backtracking algorithms for solving Sudoku puzzles when compared to naive backtracking. The minimum remaining values variable ordering heuristic can help speed up execution in some cases; however, a more conclusive experiment is needed to establish that the heuristic is really beneficial for solving Sudoku puzzles.

ACKNOWLEDGMENT

The author thanks Dr. Masayu Leyla Khodra, S.T, M.T. and Dr. Ir. Rinaldi Munir, M.T., the lecturers of the author's Algorithm Strategy class, and also the author's classmates, for their guidance in preparing this paper.

REFERENCES
{"Source-Url": "http://informatika.stei.itb.ac.id/~rinaldi.munir/Stmik/2016-2017/Makalah2017/Makalah-IF2211-2017-034.pdf", "len_cl100k_base": 4842, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 19600, "total-output-tokens": 5397, "length": "2e12", "weborganizer": {"__label__adult": 0.0006656646728515625, "__label__art_design": 0.0005788803100585938, "__label__crime_law": 0.0010890960693359375, "__label__education_jobs": 0.0022220611572265625, "__label__entertainment": 0.00023376941680908203, "__label__fashion_beauty": 0.00033283233642578125, "__label__finance_business": 0.000400543212890625, "__label__food_dining": 0.00113677978515625, "__label__games": 0.01910400390625, "__label__hardware": 0.002643585205078125, "__label__health": 0.0009984970092773438, "__label__history": 0.0006256103515625, "__label__home_hobbies": 0.0002510547637939453, "__label__industrial": 0.0010194778442382812, "__label__literature": 0.0005879402160644531, "__label__politics": 0.0004394054412841797, "__label__religion": 0.0008769035339355469, "__label__science_tech": 0.1082763671875, "__label__social_life": 0.00015437602996826172, "__label__software": 0.01103973388671875, "__label__software_dev": 0.8447265625, "__label__sports_fitness": 0.0012941360473632812, "__label__transportation": 0.0008683204650878906, "__label__travel": 0.00038552284240722656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22587, 0.02815]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22587, 0.68871]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22587, 0.91528]], "google_gemma-3-12b-it_contains_pii": [[0, 3567, false], [3567, 8644, null], [8644, 13112, null], [13112, 19303, null], [19303, 21876, null], [21876, 22587, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3567, true], [3567, 8644, null], [8644, 13112, null], [13112, 19303, null], [19303, 21876, null], [21876, 22587, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22587, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22587, null]], "pdf_page_numbers": [[0, 3567, 1], [3567, 8644, 2], [8644, 13112, 3], [13112, 19303, 4], [19303, 21876, 5], [21876, 22587, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22587, 0.10323]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
359040d77b1c085497235e828cccd0db35309c45
Mobile Applications for Development: Project Implementation Plan

Contents

1. Introduction: Creating Sustainable Businesses for the Knowledge Economy
2. Why mobile applications?
3. What is a mobile applications lab and what services and functions will it provide?
3.1. Objectives
3.2. Management and Governance Structure
3.3. Services and functions
3.4. Selection process
3.5. Partners
4. Mobile Social Networking
4.1. The Role of Social Networking in Innovation
4.2. Scope of Work
4.3. City-based social networking hubs
4.4. Mentorship opportunities
4.5. Online social networking space
5. Project team

1. **Introduction: Creating Sustainable Businesses for the Knowledge Economy**

In conventional models of technological diffusion, innovations start in the developed world and filter down to the developing world only once mass adoption begins to take place. Thus, highly skilled R&D jobs tend to be clustered around universities and golf courses in advanced economies, while the developing world tends to get low-skilled, low-wage manufacturing jobs for products that are already mature. The software industry boom, spurred by the development of personal computing in the 1980s and the internet in the 1990s, began to turn that model on its head. It created exceptional opportunities for innovation and growth in countries like India, which now has more than 2.5 million software developers and an industry worth around US$60 billion in software exports and business process outsourcing.

Today, following an unprecedented increase in access to mobile communications, with subscriptions worldwide soon to surpass five billion, a similar opportunity presents itself in the sphere of **mobile applications development**. Entrepreneurs in emerging markets are well positioned to benefit from the boom: mobile devices rather than computers are likely to become the primary way of accessing the Internet in these regions, and factors like large market size and strong pent-up user demand for mobile applications make developing countries a fertile ground for innovation in this space. In fact, it is difficult to speak about mobile innovation to date without mentioning the highly successful mobile-banking platforms Wizzit and M-Pesa, the crisis-mapping tool Ushahidi, or the agricultural intelligence tool Reuters Market Light, all developed in the Global South (see Box 1).

**Box 1: Mobile applications that began in the South**

Reversing conventional wisdom, many new mobile innovations actually begin in developing countries and spread later to the developed world.
A few examples include:

- **WIZZIT and M-Pesa** are mobile applications that target users without bank accounts, allowing millions of rural dwellers and migrants to access banking services without opening a bank account. Both applications are compatible with a wide range of mobile devices, including the early-generation cell phones popular in low-income communities, and work with pay-as-you-go plans. Both were first developed in Africa.
- The **"Ushahidi Engine"** is a platform that allows anyone to gather distributed data via SMS, email or web and visualize it on a map or timeline. The organization's goal is to create the simplest way of aggregating information from the public for use in crisis response. It was initially developed to map reports of violence in Kenya during the post-election fallout in early 2008 and has since been used successfully in the aftermath of the Haiti and Chile earthquakes of 2010, to name but a few recent examples. It is a free and open-source project led collaboratively by developers from Kenya, Ghana, South Africa, Malawi, the Netherlands and the USA.
- **Reuters Market Light** (RML) brings commodity prices, crop and weather data to Indian farmers via mobile phones. In a country where some 650 million people depend on agriculture for a living, applications that reduce vulnerability to shifts in prices and weather conditions are helping farmers manage their crops.

2. Why mobile applications?

There are a number of features which make mobile applications an exciting opportunity for promoting economic and social development:

- Mobiles already represent the largest platform for the delivery of development applications. In fact, new telephone subscriptions in low- and lower-middle-income countries have outnumbered those in upper-middle- and high-income countries since 1998, and virtually all new mobile customers in the coming years will be in developing nations.¹
- For many functions, like making credit-based payments or web browsing, there are no adequate substitutes for a mobile phone in many developing countries, and where substitutes do exist, for instance for making cross-border remittances, mobile phones often offer a more efficient and lower-cost solution.
- The barriers to entry in mobile applications development are low and becoming lower, as standard tools that can be downloaded for free become more widely accessible and easier to use.
- The mobile applications market is highly segmented – by sector, by mobile operating system, by language – which means there are endless opportunities for specialization, for localization and for taking a successful application from one market and applying it in another.

The mobile applications development industry has not yet experienced its "Google moment", when a single company begins to dominate, and that may never happen while there are still more than ten major operating systems and while no single network operator can genuinely claim a global presence. Thus, there are opportunities for small and medium-sized enterprises (SMEs) to excel, and start-ups can still make a big splash. The challenge is to expand this significant opportunity throughout emerging markets and, more importantly, to find ways to exploit it in a way that supports sustainable economic, social and environmental development.
In an effort to do this, infoDev, a donor-funded ICT-for-development agency hosted by the World Bank, has formed a public/private partnership with the Ministry of Foreign Affairs of the Government of Finland and Nokia to undertake a joint program on Creating Sustainable Businesses for the Knowledge Economy that will run from 2010 to 2012. Mobile applications form a significant component of the program, which foresees establishing regional mobile applications laboratories and using mobile social networking as a tool to promote the development of applications. This track on mobile applications will be supported by a suite of activities on business incubation and technology entrepreneurship, and by an international conference that will bring together entrepreneurs, investors and market players at a Global Forum in Helsinki, 30 May - 3 June 2011. These will be accompanied by a supporting track of analytical work in the field of ICTs and innovation systems in agriculture, along with an analytical narrative describing the role of social networking in innovation in the context of mobile applications development. The program will be implemented at the country level in Finland's development partner countries; at the regional level in Africa, Asia, and Europe, Caucasus and Central Asia (ECA); as well as at the global level. The program is outlined in the attached document.

About this note

This project concept note concerns Tracks 1A and 1B of the program: the establishment of regional mobile applications labs, and extending mobile applications through social networking. The aim is to launch a regional mobile applications lab in Africa before the end of 2010, and labs in the other two regions shortly afterwards. Beyond 2012, when this project is due for completion, it is anticipated that the labs should be able to become self-sustaining from revenues raised through their own operations. Each lab will work with the city-wide social networking hubs to be established in association with Mobile Monday under Track 1B.

Under Track 2, not described in this note, a series of activities in the field of technology entrepreneurship and business incubation are being launched that will, inter alia, see the creation of new business incubators and support for existing ones, and the launch of a global program of co-incubation (soft landings for SME internationalization). Together with the regional mobile applications labs, these business incubators will be networked into a platform to provide virtual support for hosted start-ups.

3. What is a mobile applications lab and what services and functions will it provide?

A mobile applications lab is an open space where technology entrepreneurs can interact, work, gain access to tools and expertise, deploy their solutions, and start and grow their businesses. Run and managed by experts together with local developers, a lab provides the infrastructure necessary for the deployment and scaling of mobile applications. To access a lab, local programmers, web designers or mobile application developers can register as members, at no charge or for a nominal fee, depending on the particular lab's business model. Each lab will provide an environment conducive to the development of solutions that have the potential to scale commercially, by providing state-of-the-art equipment used to develop, test and scale software, technical training, and workshops on business skills.
Further, the labs will act as gateways to local, regional and international markets and will connect entrepreneurs with seed, venture and angel investors.

3.1 Objectives

The objectives of the Mobile Applications Labs are: 1) to increase the competitiveness of innovative enterprises in the mobile industry, especially in the area of socially sustainable applications and services for base-of-the-pyramid (BOP) communities; and 2) to ensure that locally relevant applications are created to meet growing developing-country user demand. The labs will provide services both *locally*, serving the local entrepreneurial market, and *regionally*, providing resources to mobile applications developers elsewhere in each region. To do this, each lab will provide some services in the physical location of the organization (e.g., training, testing, mentoring) while other services will be provided virtually (e.g., developing a website of resources for mobile apps developers throughout the region).

This project will benefit from the experience of the program partners, notably:

- *infoDev's* experience in incubation of ICT enterprises, the regional Incubation Networks, the global ICT business incubation working group, and the global mobile flagship report. *infoDev* helps to animate a network of more than 300 business incubators in more than 80 economies around the globe, and is a leading agency in the field of information and communication technologies for development (ICT4D).
- the Ministry of Foreign Affairs of the Government of Finland, which is a thought-leader in the global development community, bringing specialist skills in the fields of agricultural and rural development and forestry, as well as in the application of mobile phone technology.
- *Nokia*, which is the leading mobile communications equipment and solutions vendor and supplier worldwide, and brings to the program its immense experience in the development of mobile content and applications.
- Mobile Monday, a volunteer-run innovation network which has established social networking hubs ("chapters") for the mobile industry in around 100 cities worldwide, including (with *infoDev* support) Kampala (launched on 8 March 2010) and Nairobi (launched on 11 March 2010).

One measure of success is that each lab should aim to generate between 8 and 10 mobile applications by 2012. It should also result in:

- An increased commercialization rate of innovative m-application ideas that have potential for significant development impact;
- Increased scale and competitiveness of innovative m-applications enterprises, leading to greater reach to disadvantaged populations.

### 3.2 Management and Governance Structure

In addition to a manager, each lab will benefit from a steering committee with representation from local developers and the wider technical community, to ensure a sense of local ownership and responsibility for the initiative. In addition, representatives from academia will offer knowledge on training and certification, while the inclusion of venture capital firms and individual investors will bring business strategy expertise to the committee. Finally, the committee will include industry partners who will advise on the scalability of solutions as well as on training activities and revenue generation.

### 3.3 Services and functions

The services and functions of the lab will evolve over time, but it is expected that they will include some or all of the following:

1. **Training and accreditation** for mobile applications developers.
The Labs could offer short and longer courses for potential programmers and others in how to develop mobile applications, and in associated business skills. There are thousands of ICT graduates from developing-country universities each year, but they often lack the skills to be employed in the mobile sector. The Labs could offer courses, with appropriate accreditation, to help students gain employment or develop their own applications. A parallel model would be the Cisco Networking Academies, which offer training in networking and IP skills. In the longer term, the Labs could work with universities to offer formal post-graduate qualifications.

2. **Certification.** Because there are so many different platforms for mobile operating systems (e.g. *Symbian*, *MeeGo* (the newly announced Nokia/Intel open-systems platform), Apple's *iPhone*, Samsung's *bada*, Microsoft's *Windows Phone 7*, Google's *Android*, etc.), any application that is to gain scale needs to be able to demonstrate interoperability. In addition, local-language versions of popular operating systems will need to be tested and verified. The Labs could offer a certification service for interoperable applications and provide facilities for network operators, service providers and applications developers to test their applications under operational conditions.

3. **Competition for ideas.** The Labs could run competitions with prizes to attract submissions from small and medium-sized enterprises (SMEs) and budding entrepreneurs for applications development, including, for instance, competitions for ideas, for business plans, for brand names, etc. The competition for ideas would be regional and could run in association with the Mobile Monday social networking hubs that are being established in different cities under the *Creating Sustainable Businesses* program. It should be emphasized that the innovation philosophy of the Labs is that applications should belong to the applications developers and entrepreneurs themselves, not to the Labs.

4. **Business mentoring.** Similar to an incubator, the Labs could assist applications developers with bringing their ideas to market. In this sense, the Labs could serve as specialized business incubators as the entrepreneurs they serve develop their businesses over time. This may require additional space, and this function may evolve only after the first year or so of operation. Each lab should also work with other incubators in the infoDev network to bring start-ups to scale and help with product launches. Business mentoring would provide a more specialized form of training, for a targeted market of entrepreneurs.

5. **Replication of successful applications.** Mobile applications are often specific to individual countries, operating systems, languages, etc. There is therefore a requirement to assist applications developers in replicating an application that has been successful in one market in other markets. This service would be particularly appropriate for smaller markets or more localized languages that might be late to receive beneficial applications under normal market processes. Development agencies or corporate social responsibility (CSR) programs may also find it useful to use the labs for support in replicating solutions across regions. The focus on replication would be important for applications that have a social development value (e.g. in education, health, and especially agriculture, which is one of the focus areas for the program as a whole).
The replication service could also be offered to operators on a commercial basis. The intellectual property rights for the applications would belong to the developers, not the lab.

6. **Repository of knowledge in ICT4D.** There is a need in the ICT4D community to create a better basis for learning from past successes and failures. The mobile applications labs could establish an open knowledge base of ICT4D projects in the mobile space and document what has worked and what lessons can be learned. Content for this repository could come, for instance, from the ICT for Agriculture Sourcebook to be developed under this program. The repository could also serve as a knowledge base of open-source code for developers, similar to SourceForge (sourceforge.net).

7. **Consumer behaviour research.** While the behaviour of mobile users is well understood in the developed world, there is a lack of understanding of developing-country markets, where cultural, linguistic and historical issues may affect take-up. The success of the M-Pesa mobile payments system in Kenya, or of MXit as a social networking platform in South Africa, illustrates the fact that some m-applications are likely to do better in developing countries than in the developed world, because there may be no good substitutes or alternative solutions available. The Labs could work with other partners to conduct user-behaviour research, especially among base-of-the-pyramid (BOP) communities, for instance on a single-client or multi-client basis.

8. **Access to finance, access to markets.** The Labs should act as a forum where entrepreneurs and applications developers can meet potential partners that will enable them to commercialize their ideas and expand their businesses. These partners should include mobile network operators, equipment manufacturers, app store developers, investors, venture capitalists, etc. The value of the Labs is that they will provide a neutral forum where matchmaking of partnerships can take place. They will provide sufficient scale to attract serious partners and, at the same time, a neutral environment where entrepreneurs and applications developers can discuss their ideas with larger organizations. Other components of the program will include activities on access to finance, SME internationalization and business co-incubation.

In addition to these eight potential services and functions, infoDev welcomes other suggestions for how the Labs should perform, both from potential host organizations and from consulting firms bidding for this contract.

### 3.4 Selection process

The selection of the labs will be an important step towards assuring their success. It is proposed that existing organizations be selected to host the labs and that the selection be carried out through a competitive tendering process. In each region, a series of scoping missions will first be carried out to meet potential host organizations and other stakeholders, and focus group discussions will be held to raise awareness and gather inputs. This will be followed by an open call for expressions of interest (EOI), followed by a request for proposals and site visits to shortlisted host organizations. For Africa, the scoping missions and four focus groups were carried out between February and April 2010, and the selection process is now underway. This will be used as a template for the activities to be launched in the other regions.
In addition, an overall task manager for the mobile applications labs has been selected, who will help in getting them launched.

### 3.5 Partners

Each lab will work with partners in all aspects of its work. Examples of potential partners include:

- External venture capitalists, who will complement the existing funding sources of the lab and increase local ownership of the initiative
- Local and international universities, who will provide training and certification
- Technical colleges and business schools, who will provide technical certification and business skills training, respectively
- Government agencies, who will advise on ICT sector structure and strategy
- Business incubator operators, who will provide essential incubation services to lab members
- Industry leaders, including operators, device manufacturers, content providers and others, who will help provide application testing and software verification

**Box 2: As a major mobile labs partner, Nokia will provide:**

- Support and training on Nokia platforms through Forum Nokia
- Devices for testing and verification
- Access to SDKs, APIs, support documentation and emulators
- A mobile programming curriculum through EPROM
- Testing, signing and support for publishing onto the Ovi store
- Monetization models for developers building for the Ovi store
- Competitions (e.g. Nokia's "Calling All Innovators" contest)
- On-the-ground support through Forum Nokia Egypt, Forum Nokia South Africa and the Nokia Research Centre Africa

### 3.6 Possible Revenue Streams

Each lab will need to generate revenue in order to be sustainable. While it will be up to each lab to decide which business model is most appropriate, some options include:

- Charging a nominal membership fee for developers to use the facilities
- Charging certification and training fees
- Charging fees for verifying software for various platforms
- Profit sharing with venture capital firms that find a successful match within the lab
- Profit sharing with incubated businesses for a limited period (e.g. 2 years)
- Charging outside industry firms for services and software development carried out by lab members

*infoDev* is also in the process of commissioning a more detailed business plan for the labs from a consultancy, to be appointed through a competitive selection process. The consultant will, at a later stage, provide mentoring and technical assistance to the labs in their formative stage.

4 Mobile Social Networking

The innovation potential of the laboratories will be supported by a range of activities in the field of mobile social networking (Track 1B), in all three regions. These will build upon an existing project, supported by the Korea Trust Fund on ICT for Development, for extending the reach of mobile applications in Africa through social networking. The existing project uses the Mobile Monday model of social gatherings, networking, competitions, etc. to provide a mentorship scheme for applications developers. It is hoped that these online and offline activities will increase the ability of programmers to successfully develop new applications. The aim is to help build communities of interest within the mobile communications sector and, in particular, to facilitate social networking between small and large companies, to help developing-country firms participate in international initiatives, to foster entrepreneurship, to identify new applications, and to contribute towards awareness-raising and education among the general public.
In terms of the innovation value chain, the mobile social networking initiative is intended to address the very early stage, but at the same time it should generate ideas and incipient companies that will benefit from the activities of the labs, the business incubators and the access-to-technology showcase that are planned as part of the broader Creating Sustainable Businesses program.

4.1. The Role of Social Networking in Innovation

Since the 1990s, the number of online and offline social networks has grown exponentially. This is especially the case in the technology industries of developed countries, which have also benefited from a high level of innovation and entrepreneurship. From an analytical perspective, there are three angles through which to consider the link between social networks and innovation:

- social networks are crucial for the functioning of any open innovation model, and this link is especially relevant in the early stages of the innovation process;
- social networks that connect a wide range of industry players outside of the constraints of fixed regulatory or competitive positions – and which provide room for navigating through ambiguity – can serve a very useful function in a rapidly changing environment such as the mobile communications industry in developing countries;
- social networks are a primary source of third-party connectors who can bridge seemingly unrelated or disconnected spaces or actors, and so facilitate the application of proven ideas in new contexts.

One particularly successful social networking model in developed countries, initially employed by First Tuesday and since inherited by several other initiatives, involves using informal social gatherings as a way of nurturing trust among local communities of entrepreneurs, investors, researchers and partners, who then share information, knowledge and ideas more readily. To date, however, emerging markets have not benefited to the same extent from social networking opportunities; this is true of the mobile industry as well as other technology areas. One exception is the developing-country chapters of Mobile Monday, a volunteer-driven initiative similar in structure to First Tuesday, but focused on cooperation and cross-border business development within the mobile communication sector through a mix of virtual and live networking events to share ideas, best practices and trends. To date, Mobile Monday, or MoMo, chapters in the global South include those in Bangalore, Bogota, Buenos Aires, Caracas, Chennai, Hyderabad, Islamabad, Jakarta, Johannesburg, Mumbai, New Delhi and Palestine. In March 2010, chapters in Kampala and Nairobi were launched as part of the Creating Sustainable Businesses project. Beyond regular networking meetings, online collaboration spaces and social networking via email, SMS and web-based services are frequently used to connect entrepreneurs with peers, investors, advisors and competitors in the context of open innovation models.

4.2. Scope of Work

The mobile social networking component of this project will:

- Support the creation of social networking hubs, or sustain existing ones, in between two and four cities in each region, to engage applications developers with the wider community of colleagues, researchers, investors, operators, content providers, device manufacturers and other organizations, through regular meetings in each city and through continued online interaction.
- Establish mentorship opportunities for developers, by linking them with mobile industry professionals in their own regions as well as internationally.
- Encourage the creation of an online social networking space that will allow developers to stay in close touch with one another and with the wider mobile community, in an informal and convenient way. This should also provide opportunities for south-south learning.
- Create a competition for ideas to encourage entrepreneurship, for instance by using a mixture of incentives such as peer recognition, trips to international conferences, and access to mentoring as ways of rewarding good ideas.
- Document and examine the above activities and produce a chapter for an analytical report (the "Mobile Flagship report") with the aim of informing future work in social networking within GICT and the Bank Group more broadly.

4.3 City-based social networking hubs

It is proposed that city-based hubs be created in a number of Finland's partner countries and other areas of focus for the project as a whole. This could include Kampala (Uganda), Nairobi (Kenya), Maputo (Mozambique), Dar es Salaam (Tanzania), Kiev (Ukraine), Tbilisi (Georgia), Tashkent (Uzbekistan), Ho Chi Minh City (Vietnam) and Phnom Penh (Cambodia).² However, the final selection may be modified based on experience gained from scoping missions as the project progresses, and on discussions with partners.

² Mobile Monday (MoMo), originally founded in Helsinki in 2000, has established around 100 city-based chapters around the world that host events, typically on the first Monday of each month, bringing together entrepreneurs and established companies working in the field of mobile communications. The local chapters are responsible for organizing local events, but receive assistance in branding, advertising, competitions and support both from other chapters that are partners in the MoMo network and through MoMo Global Oy Ltd. This has enabled MoMo to grow into the world's leading mobile community. MoMo's aims are to foster cooperation and cross-border business development through virtual and live networking events to share ideas, best practices and trends from global markets. MoMo is seeking help to get its African chapters up and running so as to extend the model to global coverage.

- The African city chapters will be supported both by the Korea Trust Fund on ICT for Development and by the *Creating Sustainable Businesses* program. The recipient-executed portion of the grant will be used, in part, for pilot programmes in four cities. The remaining funds will be used for the development of a regional support mechanism and a competition to identify promising applications that can be brought to scale through the African regional mobile applications lab. As the work is proceeding first in Africa, some of this funding will also be used for overall project development (including the recruitment of a consultant under a cross-support arrangement), which will benefit the later phases.
- In the ECA region, as elsewhere, the creation of MoMo chapters will be preceded by a series of scoping missions, which will consider the levels of local demand and the specific needs of the mobile community, specifically in Ukraine, Georgia and Uzbekistan. Anecdotal experience indicates that the social networking model may need to be adjusted to account for cultural differences in trust-building, communication and cooperation within the sector in this region.
The overall ECA component of the project was launched at the *Knowledge Economy Forum*, held in Berlin, 5-7 May 2010.

- Following the creation of social networking hubs in Africa and ECA, two further MoMo chapters will also be established in Asia, based on the countries where project activities are being undertaken, probably in Vietnam (Ho Chi Minh City) and Cambodia (Phnom Penh).

The key deliverables of this component will include:

- The hosting of events (around 40 in seven cities within the next 24 months) to promote innovation in the mobile sphere;
- Targeting around 1,000 participants from some 400 companies, NGOs, government agencies and other organizations that would participate in the online and offline activities;
- Promoting the visibility of the regional chapters at a global level, for instance through a dedicated webspace, features in the *infoDev* newsletter, and competitions whose winners gain entry to international events such as the GSM World Conference or *infoDev*'s Global Forum for Business Incubation in Helsinki, 30 May - 3 June 2011.

### 4.4 Mentorship opportunities

Mobile applications developers will have the opportunity to connect with experienced peers and colleagues locally, regionally and internationally. Mentors will be encouraged to provide advice to entrepreneurs on professional development activities, offer new ideas and suggestions related to strategic and business planning, and further reach out to their own networks to connect entrepreneurs with investors, partners or peers. Mentors will be sought primarily in the mobile sector, but every effort will be made to include specialists from other fields, if desired. Since informal interaction, trust and comfort are essential to effective mentorship, mentors will be invited to participate in social networking meetings and other events organized within the m-apps labs community, along with being included in an online mentorship program. In later stages of product development, mentors will be closely involved in another component of the *Creating Sustainable Businesses* program: the provision of global co-incubation activities to assist SMEs in identifying export opportunities in developed markets and to provide a "soft landing" for entrepreneurs when they internationalize their operations.

4.5 Online social networking space

An online social networking space will be developed in close consultation with the mobile applications developers and others in the m-apps labs community. A combination of a web platform, a mobile platform and social networking services will be employed, with the overall goal of encouraging informal interaction that will build trust and comfort between developers, and knowledge and idea sharing within and across teams.

The planned analytical report will contribute to the larger "mobile flagship" project that was approved for funding from the Korea Trust Fund on ICT for Development. The objectives of the mobile flagship project are to:

a. Summarize trends and usage in mobile services and applications for development, including an analysis of specific sectors (payment systems, education, entrepreneurship, health, etc.);
b. Provide practical operational cases/examples and analyses of how mobile can be used, by sector, to improve development outcomes;
c. Analyze the mobile "ecosystem" in developing countries and how this might be optimized to develop viable sectoral m-applications; and
d.
Identify innovative new mobile services and applications, including candidates for scaling up and replication, for instance in the proposed mobile applications laboratories.

5 Project team

This project is being undertaken as part of the broader program on Creating Sustainable Businesses in the Knowledge Economy, a joint program of work between infoDev, the Government of Finland and Nokia, which will run from 2010 to 2012. Specifically, it forms Tracks 1A and 1B and will follow the management and governance structure of the broader program. For the activities listed above, the following management team is proposed:

<table>
<thead>
<tr>
<th>Team Member</th>
<th>Role</th>
</tr>
</thead>
<tbody>
<tr><td>Tim Kelly (infoDev)</td><td>Task team leader</td></tr>
<tr><td>Toni Eliasz (ETC, infoDev)</td><td>Task manager, Mobile Applications Labs</td></tr>
<tr><td>Maja Andjelkovic (cross-support to infoDev and Oxford Internet Institute)</td><td>Research and Project Officer, Mobile Social Networking</td></tr>
<tr><td>Ellen Olafsen (infoDev)</td><td>Operations Officer</td></tr>
<tr><td>Kingori Gitahi (Nokia Research Centre, Nairobi)</td><td>Research and Project Officer, Mobile Applications Labs (for Africa)</td></tr>
<tr><td>TBD (Nokia)</td><td>Regional focal points in Asia and ECA</td></tr>
<tr><td>Jussi Hinkkanen (Nokia)</td><td>Peer Reviewer</td></tr>
<tr><td>Andi Dervishi (CIT)</td><td>Peer Reviewer</td></tr>
<tr><td>Katrin Verclas (MobileActive.org)</td><td>Peer Reviewer</td></tr>
</tbody>
</table>

Please note that other members of the infoDev team will assist with specific activities (e.g. Communications, Procurement, Administration) and other local team members will be added as host organizations are selected, grants and contracts are awarded, and the project is launched in more regions.
{"Source-Url": "http://www.infodev.org/infodev-files/resource/InfodevDocuments_909.pdf", "len_cl100k_base": 6825, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 34030, "total-output-tokens": 7386, "length": "2e12", "weborganizer": {"__label__adult": 0.0007524490356445312, "__label__art_design": 0.0010862350463867188, "__label__crime_law": 0.0006165504455566406, "__label__education_jobs": 0.0245819091796875, "__label__entertainment": 0.0003788471221923828, "__label__fashion_beauty": 0.0004940032958984375, "__label__finance_business": 0.0830078125, "__label__food_dining": 0.0006055831909179688, "__label__games": 0.0018167495727539065, "__label__hardware": 0.006893157958984375, "__label__health": 0.0010213851928710938, "__label__history": 0.000988006591796875, "__label__home_hobbies": 0.00033736228942871094, "__label__industrial": 0.0013561248779296875, "__label__literature": 0.0006022453308105469, "__label__politics": 0.001468658447265625, "__label__religion": 0.0007047653198242188, "__label__science_tech": 0.07171630859375, "__label__social_life": 0.0004456043243408203, "__label__software": 0.05584716796875, "__label__software_dev": 0.74267578125, "__label__sports_fitness": 0.0004897117614746094, "__label__transportation": 0.0015401840209960938, "__label__travel": 0.0005173683166503906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37291, 0.01346]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37291, 0.12343]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37291, 0.92903]], "google_gemma-3-12b-it_contains_pii": [[0, 65, false], [65, 1835, null], [1835, 4998, null], [4998, 7979, null], [7979, 11191, null], [11191, 14212, null], [14212, 18199, null], [18199, 21513, null], [21513, 23733, null], [23733, 27253, null], [27253, 30431, null], [30431, 33890, null], [33890, 36991, null], [36991, 37291, null]], "google_gemma-3-12b-it_is_public_document": [[0, 65, true], [65, 1835, null], [1835, 4998, null], [4998, 7979, null], [7979, 11191, null], [11191, 14212, null], [14212, 18199, null], [18199, 21513, null], [21513, 23733, null], [23733, 27253, null], [27253, 30431, null], [30431, 33890, null], [33890, 36991, null], [36991, 37291, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37291, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37291, null]], "pdf_page_numbers": [[0, 65, 1], [65, 1835, 2], [1835, 4998, 3], [4998, 7979, 4], [7979, 11191, 5], [11191, 14212, 6], [14212, 18199, 7], [18199, 21513, 8], [21513, 23733, 9], [23733, 27253, 10], [27253, 30431, 11], [30431, 33890, 12], [33890, 36991, 13], [36991, 37291, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37291, 0.07534]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
e0893b7596c3478d3f727051115ac26936723b13
Mitigating JIT Compilation Latency in Virtual Execution Environments

Martin Kristien (University of Edinburgh, Edinburgh, UK) m.kristien@sms.ed.ac.uk
Tom Spink (University of Edinburgh, Edinburgh, UK) tspink@inf.ed.ac.uk
Harry Wagstaff (University of Edinburgh, Edinburgh, UK) hwagstaff@inf.ed.ac.uk
Björn Franke (University of Edinburgh, Edinburgh, UK) bfranke@inf.ed.ac.uk
Igor Böhm (Synopsys Inc., Austria) igor.boehm@synopsys.com
Nigel Topham (University of Edinburgh, Edinburgh, UK) npt@inf.ed.ac.uk

Published in: Proceedings of the 15th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (peer-reviewed version).

Abstract

Many Virtual Execution Environments (VEEs) rely on Just-in-Time (JIT) compilation technology for code generation at runtime, e.g. in Dynamic Binary Translation (DBT) systems or language Virtual Machines (VMs). While JIT compilation improves native execution performance compared to, e.g., interpretive execution, the JIT compilation process itself introduces latency. In fact, for highly optimizing JIT compilers, or compilers not specifically designed for JIT compilation such as LLVM, this latency can cause a substantial overhead. While existing work has introduced asynchronously decoupled JIT compilation task farms to hide this JIT compilation latency, we show that this on its own is not sufficient to mitigate the impact of JIT compilation latency on overall performance. In this paper, we introduce a novel JIT compilation scheduling policy, which performs continuous low-cost profiling of code regions already dispatched for JIT compilation, right up to the point where compilation commences. We have integrated our novel JIT compilation scheduling approach into a commercial LLVM-based DBT system and demonstrate speedups of 1.32× on average, and up to 2.31×, over its state-of-the-art concurrent task-farm based JIT compilation scheme across the SPEC CPU2006 and BioPerf benchmark suites.

CCS Concepts: • Hardware → Simulation and emulation; • Software and its engineering → Simulator / interpreter; Just-in-time compilers.

Keywords: JIT compilation, performance, compilation latency, scheduling

1 Introduction

Many VEEs, ranging from DBT systems [1, 3, 12] through language VMs [4, 10, 25] to specialized accelerator runtime environments [16], rely on JIT compilation as a key performance enabler. While some of these VEEs use JIT compilers specifically designed for this purpose, the LLVM [19] compiler framework finds increasing use as a JIT compiler in both academic projects and commercial products.
The list of examples is long: from Julia [2] and R [22] to Facebook's HHVM [21], Azul's Falcon Java JIT [23], Apple's Nitro JavaScript engine, and virtually every OpenCL compiler (e.g. Intel, Apple, Arm), LLVM is used as a JIT compiler. However, while actively supported and equipped with strong optimizations, the LLVM compiler was not originally designed as a fast JIT compiler. In fact, it is well known that the use of LLVM as a JIT compiler can introduce a large compilation overhead [5]. This is because some traditional algorithms used in static compilers are too slow to be used in JIT compilers [6].

As a way of hiding this JIT compilation latency, and to increase the throughput of the JIT compiler, parallel and concurrent JIT compilation task farms have been developed [3, 18] and later adopted by industry, e.g. to drive data center software infrastructure as in HHVM [21], to improve the start-up times of managed applications as in Microsoft's .NET Multicore JIT [7], or to power browser-based applications as in Google's Chrome V8 JavaScript engine. While effective, we show how this decoupled approach to JIT compilation introduces a new problem: maintaining the right balance between the selection of code regions for JIT compilation and the scheduling of regions for compilation is non-trivial. We show how state-of-the-art scheduling approaches [3, 18] can arrive at non-optimal decisions, resulting in poor application performance as hot code regions are compiled too late.

In this paper we propose a novel scheme for scheduling JIT compilation units in a VEE. The key idea is to separate the dispatch of a compilation unit into a compilation queue from the scheduling of the next unit for compilation. When the compilation threshold is met, compilation units are dispatched to a compilation queue, but their heat is continually updated as execution progresses. When the JIT compiler fetches the next compilation unit from the queue, we select the hottest unit to be compiled. This scheme enables us to make scheduling decisions based on accurate and up-to-date heat information, leading to better runtime performance.

We have extended a commercial multi-threaded and LLVM-based DBT system (Synopsys DesignWare ARC nSIM [14]) with our new JIT scheduling technique and demonstrate its viability. In our evaluation against the SPEC CPU2006 and BioPerf benchmark suites we demonstrate overall average speedups of $1.32\times$ and $1.2\times$, respectively, and up to $2.31\times$ over the default scheme.

1.1 Contributions

In this paper, we make the following contributions:

- We show that existing FIFO and heat-and-recency based JIT scheduling policies lead to sub-optimal compilation schedules;
- We introduce a novel JIT compilation queue scheduling policy, Dynamic Heat;
- We perform a detailed analysis of this new scheduling policy, and compare it to existing policies across a range of industry-standard benchmarks.

2 Background and Motivating Example

Hybrid interpreter/DBT systems [17] offload the expensive JIT compilation of work units to threads [3], whilst still making forward progress in the interpreter, thus hiding the latency of JIT compilation. Such set-ups (described as Asynchronous Mixed-mode Translation by [24]) have a greater scope for implementation, and raise questions such as what, when, and how guest code should be translated. Figure 1 contrasts a typical configuration for a hybrid interpreter/DBT-based VEE against our novel scheme.

In this example, the main execution loop starts by checking whether the code to be executed (i.e. the code residing at the current program counter (PC)) has already been translated. If a translation exists, it is used for execution; otherwise, the guest code is executed by the interpreter. Following this, in typical asynchronous implementations, if the code has been marked for translation (i.e. it has been dispatched), then the main execution loop continues as normal in the interpreter, until eventually the translation is available. If the code has not been dispatched, its profile is updated and its heat (the demand for this code to be executed) is measured. Code that passes a heat threshold is dispatched, and thus scheduled for compilation. Typically, the code to be translated (the work unit) will be added to a queue, and a compilation worker thread will remove and process the work unit, usually in FIFO order. We refer to a particular ordering as the compilation schedule, and the policy dictates how this schedule is formed.
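This control flow can be sketched as follows. This is a minimal illustration only: the helper names (`region_at`, `interpret_block`, `enqueue_for_compilation`), the `Region` type and the threshold value are all invented for exposition, and are not nSIM's actual code or API.

```cpp
#include <atomic>
#include <cstdint>
#include <unordered_map>

// One translation unit of guest code (e.g. a region of basic blocks).
struct Region {
    std::atomic<uint64_t> heat{0}; // profiling counter ("demand" for this code)
    bool dispatched = false;       // already handed to the JIT queue?
};

using NativeFn = uint64_t (*)(); // compiled code returns the next guest PC

// Hypothetical helpers standing in for the real VEE machinery.
// JIT worker threads populate translation_cache as units are compiled.
std::unordered_map<uint64_t, NativeFn> translation_cache; // PC -> native code
Region*  region_at(uint64_t pc);           // look up / create the region for a PC
uint64_t interpret_block(uint64_t pc);     // interpret one block, return next PC
void     enqueue_for_compilation(Region*); // dispatch to the JIT worker queue

constexpr uint64_t kHeatThreshold = 100;   // illustrative dispatch threshold

void execute(uint64_t pc) {
    for (;;) {
        // 1. Prefer already-translated native code.
        if (auto it = translation_cache.find(pc); it != translation_cache.end()) {
            pc = it->second();
            continue;
        }
        Region* r = region_at(pc);
        if (!r->dispatched) {
            // 2. Not yet dispatched: profile, and dispatch once hot enough.
            if (++r->heat >= kHeatThreshold) {
                r->dispatched = true;
                enqueue_for_compilation(r);
            }
        } else {
            // 3. Dispatched but not yet compiled. The original scheme stops
            //    profiling here; under Dynamic Heat the counter keeps rising,
            //    so the queue can reprioritise this region later.
            ++r->heat;
        }
        // 4. Meanwhile, make forward progress in the interpreter.
        pc = interpret_block(pc);
    }
}
```

The only difference between the two schemes contrasted in Figure 1 is step 3: whether profiling of a region continues after it has been dispatched.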
2.1 Existing Scheduling Policies

In this paper, we refer to the Default scheduling policy as the one supplied with the Synopsys DesignWare ARC nSIM product. This policy relies on both heat and recency for determining the compilation schedule [3]. The most prevalent policy found in the literature is FIFO [9, 15, 21], which, although it does not prioritize compilation units in any way, maintains a strong sense of fairness and prevents a compilation unit from being stuck in the queue indefinitely.

2.2 Motivating Example

To motivate the research on compilation scheduling, we demonstrate the effect of two different scheduling policies in Figure 2. This figure depicts the execution of perlbench from the SPEC CPU2006 [11] suite as heatmaps showing execution in different regions of the application's address space over time. The horizontal axis represents the number of executed instructions, as this is the time seen by the executed application. Note that instruction time is not skewed by the different execution speeds resulting from different scheduling policies. A red color represents execution in interpretive mode; a blue color represents execution in native mode. The intensity of the color indicates the amount of execution in the corresponding space-time region. A good policy should produce heatmaps that are more blue overall, by turning high-intensity red regions into blue quickly.

The Default compilation scheduling policy (Figure 2a) produces a heatmap that contains long horizontal interpretation lines (red). These indicate that the policy has failed to recognize the "importance" of the corresponding code regions. On the other hand, the Dynamic Heat policy (Figure 2b) turns high-intensity interpretation into native execution more quickly. This indicates that the policy selects important code regions for compilation with a relatively short delay. The two heatmaps in Figure 2 demonstrate the effect of the compilation queue scheduling policy on the performance of the whole system. In the case of perlbench, Dynamic Heat results in a speedup of $1.99\times$ relative to Default.

Figure 1. Operation of an asynchronous DBT system, with the original profiling scheme, and our proposed dynamic scheme.

Figure 2. Execution of the SPEC CPU2006 perlbench benchmark with different scheduling policies. Red indicates interpretive and blue indicates native execution; the intensity indicates the amount of execution in the corresponding space-time region.

3 Methodology

The motivating example has shown a clear sub-optimality of the Default policy for a particular workload. The visualization (Figure 2a) shows code regions being interpreted for a long time, without the policy recognizing the regions' importance to the application.
Although the same code regions were dispatched for compilation, the compilation order differed, as dictated by the respective scheduling policy.

To tackle the issue of suboptimal compilation scheduling, we introduce a novel scheduling policy focused on the changing demands of applications. The policy relies on continuous profiling of already-dispatched code regions, resulting in dynamic updates to the heat of compilation units already present in the queue. The compilation units are then prioritized based on the values of this dynamically updated heat. Note that dynamic here means after-dispatch, rather than the conventional meaning of at-runtime. We distinguish this from a typical heat policy, which uses static heat, i.e. heat at dispatch.

The Dynamic Heat policy targets the long red interpretation lines by allowing all compilation units in the queue that are being interpreted to increase in priority. Such dynamic priority updates can fast-track previously moderately hot units to the front of the compilation queue, preventing any code region from being interpreted for a long time.

3.1 Implementation

We implement our novel policy in a state-of-the-art commercial DBT system, Synopsys DesignWare ARC nSIM. This DBT system implements the asynchronous compilation technique introduced previously. We make several changes to the JIT compilation system to implement our new policy, mainly in the profiling subsystem (to perform dynamic heat updates) and in the compilation queue organization (to take dynamic heat updates into consideration).

3.1.1 Profiler

In the original DBT system, profiling of an application's basic blocks stopped after dispatch of the corresponding compilation units. During dispatch, a handle to contain the produced native code was registered with each basic block corresponding to a particular compilation unit. To allow dynamic updates to the heat of the compilation units, a handle containing the compilation unit was also registered with each dispatched basic block. Now, when the profiler reaches a basic block that has already been dispatched but not yet compiled, the corresponding compilation unit's heat is incremented through the handle on the basic block. No synchronization is involved in this update, as the heat is only an approximate metric and no error can arise from data races in accessing it. Continued profiling of dispatched but not yet compiled basic blocks resulted in no observable performance penalty: although more computation is performed, it is done only for blocks which are being interpreted, and the actual interpretation is far more costly than the counter increment.

3.1.2 Compilation Queue

All previous compilation queue implementations ordered units during dispatch, as the priority metrics were fixed. This allowed an efficient implementation using standard C++ libraries, e.g. `std::queue` or `std::priority_queue`. However, no standard data structure allows for ordering elements when the ordering metric is not fixed at insertion. For simplicity of implementation, the ordering point for compilation units was moved from dispatch to selection (i.e. `pop` time). This allowed the use of an unordered data structure, in particular a `std::vector`. At the point of selection, a linear scan through this data structure is performed, finding the compilation unit with the maximal current (dynamic) heat. Although this increases the complexity of compilation unit selection from $O(1)$ (for `std::queue`) or $O(\log n)$ (for `std::priority_queue`) to $O(n)$ in the size of the compilation queue, no performance penalty was observed. Since the ordering has been moved from the application thread to the compilation thread, the application thread can dispatch faster and continue executing the application. The compilation threads, in turn, do not suffer from the increased complexity, as the linear scan is negligible compared to the computational cost of the compilation itself and the synchronization overheads already present.
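Taken together, the two changes (relaxed heat increments from the profiler, and max-heat selection at pop time) can be sketched as follows. This is a simplified illustration with invented names, not the production implementation:

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <mutex>
#include <vector>

struct CompilationUnit {
    std::atomic<uint64_t> heat{0};
    // ... basic blocks, guest code, etc.
};

// Profiler side: called from the interpreter for a block that has been
// dispatched but not yet compiled. No lock is taken; heat is only an
// approximate priority metric, so a relaxed increment is sufficient.
void profile_dispatched_block(CompilationUnit* unit) {
    unit->heat.fetch_add(1, std::memory_order_relaxed);
}

class DynamicHeatQueue {
    std::vector<CompilationUnit*> units_; // unordered: no sorting at dispatch
    std::mutex mtx_;                      // protects the vector itself

public:
    // Application thread: O(1) dispatch, so the interpreter resumes quickly.
    void push(CompilationUnit* u) {
        std::lock_guard<std::mutex> lk(mtx_);
        units_.push_back(u);
    }

    // Compilation worker: O(n) linear scan for the currently hottest unit.
    // The scan is negligible next to the cost of the compilation itself.
    CompilationUnit* pop_hottest() {
        std::lock_guard<std::mutex> lk(mtx_);
        if (units_.empty()) return nullptr;
        auto it = std::max_element(units_.begin(), units_.end(),
            [](const CompilationUnit* a, const CompilationUnit* b) {
                return a->heat.load(std::memory_order_relaxed)
                     < b->heat.load(std::memory_order_relaxed);
            });
        CompilationUnit* hottest = *it;
        *it = units_.back(); // swap-and-pop: O(1) removal from a vector
        units_.pop_back();
        return hottest;
    }
};
```

Because ordering is deferred to `pop_hottest`, any heat accumulated between dispatch and selection is taken into account, which is precisely what distinguishes Dynamic Heat from a static-heat policy.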
Table 1. Host machine and DBT configuration.

<table>
<thead>
<tr><th>Parameter</th><th>Value</th></tr>
</thead>
<tbody>
<tr><td>System</td><td>Supermicro</td></tr>
<tr><td>Model</td><td>Intel Xeon</td></tr>
<tr><td>Architecture</td><td>x86-64</td></tr>
<tr><td>Sockets/Cores</td><td>2/10</td></tr>
<tr><td>Frequency</td><td>2.4 GHz</td></tr>
<tr><td>L1 Cache</td><td>32 kB I-cache / 32 kB D-cache</td></tr>
<tr><td>L2 Cache</td><td>256 kB</td></tr>
<tr><td>JIT Compiler</td><td>LLVM</td></tr>
<tr><td>Translation unit</td><td>1,024 blocks</td></tr>
</tbody>
</table>

4 Evaluation

Our novel compilation scheduling policy was evaluated using the SPEC CPU2006 and BioPerf benchmark suites, on the host machine and DBT configuration described in Table 1. For SPEC CPU2006, due to compilation issues and long runtimes, we only use the integer benchmarks with the `test` input set. Benchmarks from the BioPerf suite are run with the class-A workloads. The benchmarks have been compiled with gcc 4.2.1 at the `-O3` optimization level. The arithmetic mean and standard deviation of 15 runs of each experiment are depicted.

4.1 Key Results

In our results, the baseline policy used for comparison is the Default policy, as implemented in the reference DBT system and described in subsection 2.1. The results show speedups for both SPEC CPU2006 and BioPerf in Figure 3. Some benchmarks achieve up to a $2.31\times$ speedup compared to the baseline policy. On average, the speedups for SPEC and BioPerf are $1.32\times$ and $1.2\times$, respectively.

Previous research [3] (on an older version of the reference DBT system) suggests the Default policy improves over FIFO with speedups of $1.04\times$ and $1.13\times$ for SPEC and BioPerf, respectively. Our Dynamic Heat policy achieves further improvements, with speedups over FIFO of $1.29\times$ and $1.54\times$ for SPEC and BioPerf, respectively. Furthermore, while both the Default and the FIFO policies can be outperformed by the Random policy on some benchmarks, Dynamic Heat is never outperformed by the Random policy.

4.2 Comparison to Parallel JIT

The scheduling policy itself only aims at reducing the amount of interpretation by selecting the most important code regions. Another way to reduce the amount of interpretation is to increase compilation throughput by using multiple parallel JIT workers.
Figure 4 compares the speedups achieved from parallelism to the speedup achieved by using `Dynamic Heat`, for the SPEC and Bioperf benchmark suites. All speedups are relative to the `Default` policy with one JIT worker. As expected, introducing multiple workers improves performance on average. Interestingly, for most benchmarks, our novel scheduling policy yields a larger performance improvement than one additional JIT worker. The graphs clearly indicate the benchmarks that benefit the most from concurrent compilation, and our policy follows this trend.

#### 4.3 Reduction of Interpretation

Figure 5 shows the relative reduction in the number of interpreted instructions when using the `Dynamic Heat` policy, compared to the `Default` policy. A smaller number of interpreted instructions indicates a larger proportion of native instructions, which is the desirable outcome. Using the `Dynamic Heat` policy reduces the number of interpreted instructions by more than 8% relative to the `Default` policy, on average. However, due to their long runtime, execution of some benchmarks (e.g., `bzip2`, `hmmer`) is dominated by native execution under all policies (see Figure 6). In these cases, further reductions in the proportion of interpreted instructions do not translate into noticeable speedups.

Figure 3. Speedups over the baseline policy for both the SPEC CPU2006 and Bioperf benchmark suites. Higher is better.
Figure 4. Speedups over the baseline policy, when used with different numbers of JIT worker threads. Higher is better.
Figure 5. Proportion of interpreted instructions in each benchmark suite, relative to the Default policy. Lower is better.
Figure 6. Proportion of interpreted instructions in each benchmark suite, relative to total instructions, for each policy. Shaded areas are interpreted, solid areas are native execution. Larger solid area is better.
Figure 7. The percentage of static code JIT-compiled during execution. Lower is better.

#### 4.4 Quantity of Translated Code

Figure 7 shows the total quantity of translated code for each benchmark suite. For several benchmarks, the Dynamic Heat policy results in a significant reduction in the amount of translated code (e.g., gcc, gobmk, grappa) compared to the Default policy. Although less code is compiled with Dynamic Heat, more native execution is observed, indicating that Dynamic Heat is better at selecting "important" code.

#### 4.5 Compilation Queue Length

The choice of scheduling policy also affects the length of the compilation queue. Counter-intuitively, speedups are associated with longer compilation queues. Since the queue consumption rate is fixed by the compilation throughput, different scheduling policies can only affect the queue production rate. New compilation units are added to the compilation queue when newly discovered code regions become hot. Therefore, fast native execution results in less time elapsing before new code is discovered, increasing the effective queue production rate. In other words, a good policy speeds up application execution, leaving less time for the JIT workers to consume the compilation queue. We observe this effect in Figure 8 for gcc, where the compilation queue is significantly longer for Dynamic Heat (nearly 1200 entries) than for Default (peaking at approximately 600).

### 5 Related Work

QEMU [1] is a widely-supported, portable binary translation tool, which uses its own block-based JIT compiler (TCG). QEMU performs translation synchronously with execution, and so does not need to perform any compilation scheduling.
On the other hand, it is unable to extensively optimize the translated code, since it only considers a small code region at a time. MAMBO-X86 [8] uses a tracing and translation scheme which maps guest call/return instructions to equivalent host instruction sequences in order to take advantage of the host system's return address prediction mechanisms.

Asynchronous DBT systems have been presented in a variety of contexts. These include HQEMU [12], an extension of QEMU which introduces an asynchronous trace-based optimizer based on LLVM. HQEMU profiles and translates traces, rather than the large code regions which nSIM uses. [3] presents another asynchronous DBT system, in which a parallel task farm is used to translate large code regions using LLVM. They use an interpreter for the profiling/tracing step, unlike HQEMU, which uses tiered compilation; and they trace continuously, rather than only when a potentially hot region is encountered. [13] extends HQEMU with Intel Processor Trace to reduce the overheads involved in trace formation. Here, modern tracing and profiling hardware built into the CPU is used to reduce the cost of hot region detection and selection for further optimization. However, the paper does not discuss the compilation scheduling policy.

Ha et al. [10] present an asynchronous, trace-based JavaScript JIT compiler. It uses a simple FIFO queue to order the traces to be compiled. They also suggest that a lock-free queue could be used, although they admit that this is unlikely to have a significant effect on performance. In [20], Namjoshi et al. present a method for online predictive profiling of Java applications. This method attempts to detect the iteration count of each loop, and to prioritize the compilation of loops that are predicted to become hot, as well as any methods called from such loops.

### 6 Summary & Conclusion

In this paper we have developed a novel JIT compilation scheduling policy mitigating the negative impact of compilation latency in an asynchronous JIT system. Our Dynamic Heat policy represents a significant improvement over a state-of-the-art combined heat/recency policy used in a commercial LLVM-based DBT system. We demonstrated that our novel policy consistently performs as well as or better than the default policy and integrates well with the multi-threaded JIT compilation task farm of the DBT system. In addition to average speedups of 1.32× and 1.2× for the SPEC CPU2006 and Bioperf benchmark suites respectively, and up to 2.31×, we found that our novel scheduling policy provides a performance boost roughly equivalent to that of adding one further JIT compilation thread. For systems with small numbers of host machine cores this is of particular significance, as our Dynamic Heat policy is able to outperform the Default policy running two workers on every benchmark, and running three workers in many cases. Larger systems still benefit from lower machine utilization while delivering the same performance with fewer compilation threads. Future work will consider JIT compilation scheduling policies for multi-core guest systems with both private and shared JIT compilation threads.

References
{"Source-Url": "https://www.research.ed.ac.uk/portal/files/81056782/Mitigating_JIT_Compilation_Latency_KRISTIEN_DoA_040219_AFV.pdf", "len_cl100k_base": 4977, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 27357, "total-output-tokens": 7462, "length": "2e12", "weborganizer": {"__label__adult": 0.00041747093200683594, "__label__art_design": 0.00026988983154296875, "__label__crime_law": 0.0003490447998046875, "__label__education_jobs": 0.00040793418884277344, "__label__entertainment": 7.37905502319336e-05, "__label__fashion_beauty": 0.00016891956329345703, "__label__finance_business": 0.0002644062042236328, "__label__food_dining": 0.0003662109375, "__label__games": 0.0006022453308105469, "__label__hardware": 0.0022602081298828125, "__label__health": 0.000576019287109375, "__label__history": 0.0003070831298828125, "__label__home_hobbies": 9.65595245361328e-05, "__label__industrial": 0.0005984306335449219, "__label__literature": 0.00021505355834960935, "__label__politics": 0.000301361083984375, "__label__religion": 0.0005755424499511719, "__label__science_tech": 0.052154541015625, "__label__social_life": 7.408857345581055e-05, "__label__software": 0.006893157958984375, "__label__software_dev": 0.931640625, "__label__sports_fitness": 0.00033164024353027344, "__label__transportation": 0.0007281303405761719, "__label__travel": 0.00024962425231933594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30507, 0.03188]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30507, 0.18243]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30507, 0.87473]], "google_gemma-3-12b-it_contains_pii": [[0, 1360, false], [1360, 5494, null], [5494, 11060, null], [11060, 13655, null], [13655, 19543, null], [19543, 20211, null], [20211, 24979, null], [24979, 30507, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1360, true], [1360, 5494, null], [5494, 11060, null], [11060, 13655, null], [13655, 19543, null], [19543, 20211, null], [20211, 24979, null], [24979, 30507, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30507, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30507, null]], "pdf_page_numbers": [[0, 1360, 1], [1360, 5494, 2], [5494, 11060, 3], [11060, 13655, 4], [13655, 19543, 5], [19543, 20211, 6], [20211, 24979, 7], [24979, 30507, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30507, 0.08219]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
18b36fa5ab16301b963fed545d810477e87d734a
Host: Nicole Huesman, Intel
Guests: James Brodman, Intel; Tom Deakin, University of Bristol

Nicole Huesman: Welcome to Code Together, an interview series exploring the possibilities of cross-architecture development with those who live it. I'm your host, Nicole Huesman. On our path to exascale, the next generation of supercomputers all have at least one thing in common: they will all contain a variety of hardware architectures. This is not only true in the world of supercomputers. It is increasingly true across the computing landscape. The ability to efficiently program for these environments is key, and the need for performance portability is a hot, and as we'll see, tricky topic. Today's guests live at this forefront. Dr. Tom Deakin is a senior research associate and lecturer in the High-Performance Computing Research Group at the University of Bristol, and a contributor to the Khronos SYCL Working Group. He's well-recognized on the topic of performance portability, and we're thrilled to have him join us.

Tom Deakin: Yeah, it's great to be here. Thanks for having me.

Nicole Huesman: And Dr. James Brodman is a software engineer at Intel, and also a contributor to the Khronos SYCL Working Group. He focuses on languages and compilers for parallel programming. James, so great to have you on today's program.

James Brodman: Thank you for having me.

Nicole Huesman: The term 'performance portability' seems to mean different things to different people. And there's often this tension between performance and portability. Let's start there. I'll let the two of you dive in.

James Brodman: So, what are your thoughts, Tom?

Tom Deakin: Well, I think performance portability is often a very contentious subject. And part of this comes from, I think, what it really means for something to be performance portable, and from what developers are willing to accept as performance portable. Part of this, of course, is because portability is a requirement for performance portability. You have to write code that is portable. It has to at least run on all the platforms that you might care about. And then you can start thinking about what the performance is like on each of those platforms; the degree of performance you're willing to accept for some degree of portability, I think, can make this definition different for different people.

James Brodman: Yeah, I absolutely agree. It seems like the Holy Grail, of course, would be to be able to write one code that would run everywhere and get the absolute best performance, but we're not quite there yet, I think. So, there's this trade-off everyone has to make between how much common code they want to have, and how much code they're willing to have that's specific to whatever device or system they're targeting.

Tom Deakin: Yeah, exactly. And hopefully the specialism you have to do is only to get the last few percent that you care about. That's the goal. But you know, there is some pragmatism there. You have to allow for some specialization to go that last few yards, if you need to.

James Brodman: For sure. And that's what keeps everyone involved in HPC gainfully employed!

Nicole Huesman: So, the research group that you're a part of at University of Bristol has done some fantastic work around how you define performance portability. Can you talk a little bit about that?
Tom Deakin: So, for us in Bristol in the HPC Research Group, we've been publishing on performance portability for a number of years now, and one definition that we use (although there are lots of sort of hand-wavy concepts in it) is to say a code is performance portable where it achieves a similar level of performance efficiency across all the platforms that you care about. Now we're in HPC, so that's high performance, so we hope that that efficiency is high. We're hoping to get, say, 80% of the peak performance that is possible on those architectures, but it's this idea of high performance and high, consistent performance that we'd look for when trying to measure performance portability.

Nicole Huesman: So, James, you mentioned there's this Holy Grail. What role do open standards play in that?

James Brodman: So I think open standards are very important here, because when you're talking about defining portable code, you need everyone to agree on what they'll support on various systems. And if everyone has their preferred dialect of the way to program their own system, then your portability is going to drop, because one dialect may not be supported on another platform. But when you have an open standard and everyone agrees to implement that standard on their system, then that gives you a common language and set of tools to use when writing your programs.

Tom Deakin: Yeah, I think that's really important. You need to remember that this is performance portability, and portability has to come first. So if you write your program in a language that is only supported by one vendor, then you're going to have a really hard job convincing another vendor to make that run on their platform too. So with something like an open standard, it provides this, you know, fair playground for all the vendors to come together and help make something good for everybody. And if you write your code in that, then you can ask for it when you buy systems, and you can be sure that there's going to be a healthy ecosystem around that with lots of choices of compilers and tools and people to help you out. And in that way, you can have a code that will be portable, and hopefully it's a flexible model that allows you to write something that's performant everywhere as well.

James Brodman: Exactly. And open standards have come in many flavors over the years; the C language or the C++ languages are open standards, and you can get a C or C++ compiler for pretty much any system these days. But those languages alone aren't necessarily going to get you all the performance that you need on your systems. So you also need standards that have these higher-level abstractions to enable use of all the different parallel components in today's systems. Things like SIMD units, multiple cores, accelerators, and things like that. So that's where something like the SYCL standard can come in and provide these higher-level abstractions that still give you the portability, but also start giving you hooks into all the performance that modern computing systems have today.

Tom Deakin: Yeah, that's right. These open standards like SYCL and OpenCL, which SYCL was inspired by, give you this abstraction over what an HPC device, what a computing thing, looks like. It gives you these building blocks that are common across different architectures. So even down at the low level of abstraction, you have this portability, but then something like SYCL allows you to target those because they appear in the same language in the same way between vendors.
You can write your code, targeting those particular features, and know that they'll be there and map to the sensible thing on the underlying hardware.

James Brodman: Yes. And it's this expression of these higher-level abstractions and common parallel programming patterns that have kind of guided the evolution of these standards over the years. In particular, the latest version of SYCL, the 2020 provisional specification, added even more of these higher-level parallel building blocks, things like support for reductions or other kinds of higher-level collective operations, which enables implementations to provide efficient versions of them across different targets.

Tom Deakin: Yeah, these parallel patterns are an important concept. For those of you who haven't read Tim Mattson's book on that subject, it's kind of a standard text that you should check out. But there's this idea that you want to parallelize a loop and all the iterations of that loop are independent. You just want a programming model that can represent that level of abstraction. That's a portable concept between the different architectures. So something like SYCL with those high-level abstractions allows you to express that. But if you need to kind of break it down and start worrying about the things under the hood, then there is this hierarchy of abstraction that SYCL will give you that enables you to go deeper if you need to.

James Brodman: Yes, definitely. Providing support for all these higher-level patterns is very, very useful. But since we're realists living in the real world, if you have a large machine and you do want to be able to get the best out of it, you have to find this trade-off between using these higher-level abstractions and having the ability to dive really deep down when you need to, at the cost of potentially some portability. So I think the biggest thing that the SYCL Working Group is focused on these days is finishing the next version of the SYCL spec. We released the 2020 provisional specification. In some sense, it's a beta spec that provides a public guideline of how we're thinking and where we're looking to go. And so we've been addressing a lot of the feedback and issues, and trying to fine-tune everything to make sure that the next version of the final specification is going to be a really great release with a lot of important new features. A lot of us come from the kind of implementation point of view, where we're working on building implementations of SYCL, but it's very important in doing all of this to get the perspective of the people who are actually going to be using it. And Tom represents the users here, and has provided a lot of valuable insight into what matters for the people using the specification, this language, at the end of the day.

Tom Deakin: And I think you're right there, in that the main focus of the SYCL Working Group right now is to make big strides from SYCL 1.2.1 into SYCL 2020. There are a lot of new features. All of them are really helpful in helping us write code quickly as developers, but also in a way that allows us to write performance-portable things. So like the reductions, for example. There are many ways we might want to implement a reduction, and that will be different on the different architectures that you're targeting. But as developers, we just want to say: do a reduction for us. So this is the kind of interaction that happens within a working group.
There's a feature that we'd like, and implementers and users can all start kind of discussing around it and figuring out the best way to write it down. Another positive of the SYCL Working Group is this mixture of users and implementers and vendors as well. So there's people that make hardware, people that make compilers and runtimes, there's people that write libraries that sit on top of programming models, such as SYCL, and then there's users like me, that will go and write SYCL directly.

James Brodman: Yeah, like you mentioned, in the working group, there's this great mix of people. And in particular, SYCL has really been developing over the last year, in that there are several different implementations now that target pretty much all the major hardware vendors out there. You can write a SYCL program today and run that on Intel hardware, on NVIDIA hardware, AMD hardware, Xilinx hardware, ARM hardware. And it's really kind of exciting because you actually can now write a single program that can run across a great many different types of systems. So we're actually starting to see the portability promise delivered in practice.

Tom Deakin: Yeah, a lot of my work has been involved in testing out that ecosystem, in taking a code that I've written in SYCL and trying all the different implementations that are there, and seeing what performance is possible on all the different architectures that are supported. And with this mixture of implementations, there's also more than one path from a SYCL application to a particular architecture. So if you're going to be running on a particular GPU, say from one vendor, there may be two or three of those implementations that support that particular device. So that means that we can then start trying these things and seeing which one's going to work best for us. This is something we're very much used to in even just regular programming: we have a choice of compiler. There's always more than one compiler that we can use to build our C++ program. So having this choice of implementation as well really kind of shows this strong and healthy and growing ecosystem around SYCL.

James Brodman: So how have you been evaluating this? Do you have a set of applications that you've been using to kind of test on all the different platforms?

Tom Deakin: Yeah, so there's a number of mini apps that we use. And these mini apps are sort of a distillation of much bigger codes, and they really capture the computational and memory access patterns of these larger codes, but they're in a much more agile state that allows you to try them out on different systems. So we'll take these applications and we'll maybe port them to SYCL, and then we will get them running on as many different systems as we can. So I have a paper coming out, for a workshop at SC [Supercomputing 2020], the P3HPC Workshop. This paper takes a couple of applications. We look at 15 different architectures and we take a number of programming models, SYCL included, and we try to get those codes running in each programming model on all of those 15 different architectures and measure the performance efficiency that we can achieve. And then we can take those efficiencies and start trying to understand the performance portability of that data set that we've just collected. How consistent are we? How close to peak performance do we get across our set of architectures?

James Brodman: Oh, that sounds very interesting. Can you give us a teaser as to what you've seen, or should we wait for the workshop?
Tom Deakin: Well, obviously you should sign up for the workshop, but as a teaser, what we're finding is that the open standard programming models are doing really well. They help with the portability side of things. This helps us get the coverage across the 15 platforms. SYCL is definitely growing and there's some room for improvement on some of the architectures, but some of the models that have had this ecosystem growing for a little bit longer, like OpenMP, are showing really nice results as well.

James Brodman: Are these codes that anyone can just go check out, try on their system, if they want?

Tom Deakin: Yeah. So the codes are all open source, as are the scripts that we use to download, build and run those codes on all the different platforms. It's no mean feat taking a code and getting it to run on 15 different architectures. So we capture our workflow of how we've done this, and we make those available as well.

James Brodman: Very cool. It's definitely useful to have tools and infrastructure like that available for people to try it out themselves.

Nicole Huesman: To that point, if I haven't programmed using SYCL yet, how easy or difficult is it for me to get started?

James Brodman: I think it's pretty easy, although I might be a little biased. In particular, if you get started with the latest version of SYCL, SYCL 2020, the working group made a lot of improvements to really make SYCL approachable to people that haven't done it. SYCL is based on modern C++, so if you're not familiar with C++, there might be a slight learning curve there, but hopefully not one that's insurmountable. Over the last year, there's been a lot of effort on the parts of multiple people in the community to start putting together really high-quality training materials and introductory materials for SYCL. In particular, Codeplay, another implementer of SYCL, has been very active in this field. They have this website, SYCL.tech, that has lots of tutorials, presentations, links to talks available for people. Intel, as well, has put together a lot of training material. A lot of it is available as part of the Intel® DevCloud. If you sign up for that, which I believe is free and has lots of different hardware you can try out, there are a set of training modules based on Jupyter notebooks. So it's an interactive session available in your browser that will kind of guide you through learning all the different components of a SYCL application, and the things you need to know to really get started. Tom, what's been your experience getting up to speed and writing applications with SYCL?

Tom Deakin: Well, the first thing to remember is that writing parallel programs is hard. Finding parallelism in programs is a tricky thing to do. And actually this is true no matter what language you might write in. So, taking a vanilla serial code that doesn't have any parallelism in, getting it to run in parallel, and updating it so that it's correct when running in parallel, those are challenges that we all face, no matter what we write it in. The next stage is then: how do we express that parallelism? And we've found something like SYCL is pretty straightforward to actually express the parallelism in once you've found it. You have these very high levels of abstraction. You can just say parallel for, and you give it the iteration space that you'd like to run your loop over in parallel, and that's it. You're pretty much away at that point. There's much more flexibility built into SYCL, if you need it.
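(For reference, a minimal SYCL 2020 sketch of the pattern Tom describes: hand parallel for an iteration space and a kernel. The vector-addition example, buffer names, and sizes are illustrative, not from any specific vendor's samples.)

```cpp
#include <sycl/sycl.hpp>   // SYCL 2020 header; older implementations use <CL/sycl.hpp>
#include <vector>

int main() {
    const size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // default device selection: could be CPU, GPU, ...

    {
        sycl::buffer<float> A(a.data(), sycl::range<1>(N));
        sycl::buffer<float> B(b.data(), sycl::range<1>(N));
        sycl::buffer<float> C(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only);
            // "Just say parallel_for and give it the iteration space":
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];
            });
        });
    }   // buffer destruction waits for the kernel and copies results back to c

    return (c[0] == 3.0f) ? 0 : 1;
}
```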
You can express that parallelism on your hardware in a kind of hierarchical manner, if you like. But also, you can then start using the tasking model that's built into SYCL as well, to express more complicated dependencies between those parallel loops. So, there's a lot of flexibility to delve into once you need it. But actually writing a parallel program in SYCL to start with, we've found pretty straightforward. So for those mini apps in our studies, we find it typically takes someone a couple of weeks to port from one programming model to another, SYCL included.

Nicole Huesman: Let's shift the conversation a bit and talk about what's next. What are you both looking forward to?

Tom Deakin: Well, for me, the thing I'm looking forward to is watching the ecosystem around SYCL continue to grow. Our work, where we've been exploring it as much as we can, is showing that the support across different vendors is certainly growing. And the compilers and tools are becoming much more mature and able to give us that portability across platforms that is important as a first step. The results do show that there is some more work to do. And on some platforms, we do see that there is room for improvement in terms of the performance. So that's what I'm looking forward to next. The other big important thing for me is the release of the next specification, SYCL 2020. This is a big step in what SYCL is going to look like going forward. And the working group is working hard to make sure that it looks right and it looks good and is going to do the right job for what we need.

James Brodman: Yeah, I absolutely agree. I'm really looking forward to SYCL 2020 being finished and released into the world, because I think it's really going to improve a lot of things for a lot of users. I'm also looking forward to the growing ecosystem. We have multiple implementations of SYCL now, and each of them has been getting more and more mature. Probably the most recent up-and-comer is hipSYCL out of Heidelberg. And recently Intel and Heidelberg launched this joint Center of Excellence, which I find rather exciting, because hipSYCL has been rapidly iterating and is now adding support for a lot of the new features defined in SYCL 2020. So it's going to be nice to see those features available on many different platforms.

Nicole Huesman: It has been so wonderful to have both of you here today, and so exciting to see how the SYCL community is coming together to advance portability in parallel programming. I know, Tom, you mentioned the P3HPC Workshop, which I'm certainly looking forward to. I know, James, you're intimately involved in a workshop at Supercomputing [Supercomputing 2020] as well. There are so many exciting things happening at Supercomputing this year. So with that, where can listeners go to learn more about all of the exciting developments?

James Brodman: So you've already mentioned the P3HPC Workshop. I think that's going to be very, very interesting this year. There are several SYCL-focused activities at SC [Supercomputing 2020] this year as well. There are two tutorials that will cover various aspects of SYCL. There is also going to be a Birds of a Feather (BoF) session about SYCL and heterogeneous C++. Intel's implementation of SYCL, the Data Parallel C++ compiler, has been developed as an open source project, and that's available on GitHub for anyone who wants to check it out, or even send a pull request if you want to add support for a new system or implement a new feature. oneapi.com is always a great resource.
SYCL.tech is also a very good resource for learning more about SYCL. Nicole Huesman: And Tom, where can listeners go to learn more about what you're doing at University of Bristol? Tom Deakin: Yeah, so all of my work you can find on my website, hpc.tomdeakin.com. And you can find links to the research group on GitHub there—that's uob-hpc.github.io—that has links to all the source code and our benchmark repository, where you can see how to run all of our applications. Nicole Huesman: Tom, thanks so much for being here today to share your insights with us. Tom Deakin: Thanks for having me. Nicole Huesman: And James, so great to have you on today's program. James Brodman: Thanks Nicole, and thanks Tom. Nicole Huesman: For all of you listening, thanks so much for joining us. Let's continue the conversation at oneapi.com. Until next time!
{"Source-Url": "https://media20.connectedsocialmedia.com/intel/11/18937/In_Pursuit_Holy_Grail_Portable_Performant_Programming.pdf", "len_cl100k_base": 4728, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21741, "total-output-tokens": 5023, "length": "2e12", "weborganizer": {"__label__adult": 0.0004088878631591797, "__label__art_design": 0.0003368854522705078, "__label__crime_law": 0.00035071372985839844, "__label__education_jobs": 0.0005769729614257812, "__label__entertainment": 0.00010889768600463869, "__label__fashion_beauty": 0.00017440319061279297, "__label__finance_business": 0.0002315044403076172, "__label__food_dining": 0.0003919601440429687, "__label__games": 0.0008325576782226562, "__label__hardware": 0.005096435546875, "__label__health": 0.0005192756652832031, "__label__history": 0.0002219676971435547, "__label__home_hobbies": 0.00013744831085205078, "__label__industrial": 0.0007238388061523438, "__label__literature": 0.00016641616821289062, "__label__politics": 0.0002465248107910156, "__label__religion": 0.0006241798400878906, "__label__science_tech": 0.05426025390625, "__label__social_life": 9.071826934814452e-05, "__label__software": 0.0081787109375, "__label__software_dev": 0.9248046875, "__label__sports_fitness": 0.0003859996795654297, "__label__transportation": 0.000720977783203125, "__label__travel": 0.0002312660217285156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21970, 0.00218]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21970, 0.04674]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21970, 0.96741]], "google_gemma-3-12b-it_contains_pii": [[0, 3245, false], [3245, 7340, null], [7340, 11466, null], [11466, 15208, null], [15208, 19352, null], [19352, 21970, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3245, true], [3245, 7340, null], [7340, 11466, null], [11466, 15208, null], [15208, 19352, null], [19352, 21970, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21970, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21970, null]], "pdf_page_numbers": [[0, 3245, 1], [3245, 7340, 2], [7340, 11466, 3], [11466, 15208, 4], [15208, 19352, 5], [19352, 21970, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21970, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
0fa22661c843f2c9d48a27030e837602b3df72fd
The Loop-of-Stencil-Reduce paradigm

M. Aldinucci*, M. Danelutto†, M. Drocco*, P. Kilpatrick‡, G. Peretti Pezzi§ and M. Torquati†
*Computer Science Department, University of Turin, Italy. †Computer Science Department, University of Pisa, Italy. ‡Computer Science Department, Queen's University Belfast, UK. §Swiss National Supercomputing Centre, Switzerland.

Abstract—In this paper we advocate the Loop-of-Stencil-Reduce pattern as a way to simplify the parallel programming of heterogeneous platforms (multicore+GPUs). Loop-of-Stencil-Reduce is general enough to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop. It transparently targets (by using OpenCL) combinations of CPU cores and GPUs, and it makes it possible to simplify the deployment of a single stencil computation kernel on different GPUs. The paper discusses the implementation of Loop-of-Stencil-Reduce within the FastFlow parallel framework, considering a simple iterative data-parallel application (Game of Life) as a running example, and a highly effective parallel filter for visual data restoration to assess performance. Thanks to the high-level design of the Loop-of-Stencil-Reduce, it was possible to run the filter seamlessly on a multicore machine, on multiple GPUs, and on both.

Keywords—skeletons, FastFlow, parallel patterns, multi-core, OpenCL, GPUs, heterogeneous platforms

I. INTRODUCTION

Since their appearance in the High-Performance Computing arena, GPUs have been widely perceived as data-parallel computing machines. This perception stems from their execution model, which prohibits any assumption about work-item/thread execution order (or interleaving) in a kernel execution. This in turn requires the avoidance of true data dependencies among different parallel activities. It quickly became clear that the best approach to programming GPUs is to "think data-parallel" by way of "data-parallel building blocks" [1], i.e. data-parallel skeletons [2]. For this reason, GPU kernels are typically designed to employ the map-reduce parallel paradigm, where the reduce is realised as a sequence of partial (workgroup-level) GPU-side reduces, followed by a global host-side reduce. Thanks to GPUs' globally shared memory, a similar pattern can be used to map computation over stencils (i.e. data overlays with non-empty intersection), provided they are accessed in read-only fashion to enforce deterministic behaviour. Often, this kind of kernel is invoked from host code inside a loop body (e.g. up to a convergence criterion). In this work we introduce the Loop-of-Stencil-Reduce pattern as an abstraction of this general parallelism exploitation pattern on heterogeneous platforms. Specifically, Loop-of-Stencil-Reduce is designed as a FastFlow [3] pattern, which can be nested in other stream-parallel patterns, such as farm and pipeline, and is implemented in C++ and OpenCL.
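As context for the two-level reduce just described, the following OpenCL C sketch (an illustration of ours, not FastFlow's generated code) shows a workgroup-level partial sum; the per-workgroup partials are then combined by a global host-side reduce:

```c
// Workgroup-level partial sum: each workgroup reduces its slice in local
// memory and writes a single partial result; the host (or a second kernel)
// then combines the per-workgroup partials. Assumes a power-of-two
// workgroup size. Illustrative sketch only.
__kernel void partial_sum(__global const float* in,
                          __global float* partials,
                          __local float* scratch,
                          const unsigned int n) {
    const size_t gid = get_global_id(0);
    const size_t lid = get_local_id(0);

    scratch[lid] = (gid < n) ? in[gid] : 0.0f;   // pad the tail with the identity
    barrier(CLK_LOCAL_MEM_FENCE);

    // Tree reduction in local memory.
    for (size_t s = get_local_size(0) / 2; s > 0; s >>= 1) {
        if (lid < s)
            scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0)
        partials[get_group_id(0)] = scratch[0];  // one value per workgroup
}
```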
We advocate Loop-of-Stencil-Reduce as a comprehensive meta-pattern for the programming of GPUs because it is sufficiently general to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop, i.e. it implements the previously mentioned "data-parallel building blocks". Also, as discussed in Sec. III, it is more expressive than the previously mentioned patterns. Moreover, it simplifies GPU exploitation. In particular, it takes care of device detection, device memory allocation, host-to-device (H2D) and device-to-host (D2H) memory copies and synchronisation, the reduce algorithm implementation, the management of persistent global memory on the device across successive iterations, and it enforces data-race avoidance for stencil data accesses in iterative computations. It can transparently exploit multiple CPUs or GPUs (sharing host memory) or a mix of them. Also, the same host code can exploit both a CUDA and an OpenCL implementation (whereas the kernel functions should match the selected language).

While this paper builds on previous results [4], it advances them in several directions:

1) The Loop-of-Stencil-Reduce pattern is an evolution of the stencil-reduce pattern [4]. Specifically, Loop-of-Stencil-Reduce has been refined to explicitly include the iterative behaviour and the optimisations that knowledge of this iterative behaviour enables, which relate to persistent GPU global memory usage and to stencil/reduce pipelining.

2) The Loop-of-Stencil-Reduce pattern has been uniformly implemented in OpenCL and CUDA, whereas stencil-reduce was implemented only in CUDA, using CUDA-specific features not supported in OpenCL, such as Unified Memory. Its implementation in OpenCL is particularly important with a view to using the pattern on heterogeneous platforms including different hardware accelerators, such as FPGAs and DSPs.

3) Support for the exploitation of iterative, locally synchronous computations (by way of halo swap) across multiple GPUs has been introduced, whereas in previous works the usage of multiple GPUs was possible only for independent kernel instances.

The structure of the paper is as follows: in the next section related work is presented, together with a recap of the FastFlow programming framework. Section III introduces the Loop-of-Stencil-Reduce design principles, its API, and its implementation within the FastFlow framework. Experimental results are discussed in Sec. IV: the performance of different deployments of an effective but computationally demanding video restoration application [5] is presented. Section V presents concluding remarks.

II. RELATED WORK

Algorithmic skeletons have been around since the '90s as an effective means of parallel application development. An algorithmic skeleton is a general-purpose, parametric parallelism-exploitation pattern [6]. Most skeletal frameworks (or, indeed, high-level parallel programming libraries) eventually exploit low-level tools such as NVidia CUDA or OpenCL to target hardware accelerators. CUDA is known to be more compliant with C++ and often more efficient than OpenCL. On the other hand, OpenCL is implemented by different hardware vendors, such as Intel, AMD, and NVIDIA, making it highly portable and allowing code written in OpenCL to run on different graphics accelerators. OpenMP is a popular thread-based framework for multi-core architectures, mostly targeting data-parallel programming.
OpenMP supports, by way of language pragmas, the low-effort parallelisation of sequential programs; however, these pragmas are mainly designed to exploit loop-level data parallelism (e.g. do independent). OpenMP does not natively support either farm or Divide&Conquer patterns, even though they can be implemented using its tasking features. Intel Threading Building Blocks (TBB) [7] is a C++ template library which supports the easy development of concurrent programs by exposing (simple) skeletons and parallel data structures used to define computational tasks.

Several programming frameworks based on algorithmic skeletons have also recently been extended to target heterogeneous architectures. In Muesli [8] the programmer must explicitly indicate whether GPUs are to be used for data-parallel skeletons. StarPU [9] is focused on handling accelerators such as GPUs. Tasks of a task graph are scheduled by its run-time support on both the CPU and the various accelerators, provided the programmer has supplied a task implementation for each architecture. Among related works, the SkePU programming framework is the most similar to the present work [2]. It provides programmers with GPU implementations of several data-parallel skeletons (e.g. Map, Reduce, MapOverlap, MapArray) and relies on StarPU for the execution of stream-parallel skeletons (pipe and farm). The FastFlow stencil operation we introduce in this paper behaves similarly to the SkePU overlay skeleton (in some ways it was inspired by it). The main difference is that the SkePU overlay skeleton relies on a SkePU-specific data type and, to the best of our knowledge, is not specifically optimised for use inside a sequential loop. Another similar work in terms of programming multi-GPU systems is SkelCL, a high-level skeleton library built on top of OpenCL which uses container data types to automatically optimise data movement across GPUs [10]. The FastFlow parallel programming environment itself has recently been extended to support GPUs via CUDA [4] and OpenCL (as described in the present work). FastFlow CPU implementations of patterns are realised via non-blocking graphs of threads connected by lock-free channels [11], while the GPU implementation is realised by way of the OpenCL bindings and offloading techniques. Different patterns can be mapped onto different sets of cores or accelerators and so, in principle, can use the full available power of the heterogeneous platform.

III. THE LOOP-OF-STENCIL-REDUCE META-PATTERN IN FASTFLOW

In the following, the semantics and the FastFlow implementation of Loop-of-stencil-reduce are introduced. The well-known Conway's Game of Life is used as a simple but paradigmatic example of locally synchronous data-parallel applications (running on multiple devices).

A. Semantics of the Loop-of-stencil-reduce meta-pattern

Let map $f\,[a_0, a_1, \ldots, a_{n-1}] = [f(a_0), f(a_1), \ldots, f(a_{n-1})]$ and reduce $\oplus\,[a_0, a_1, \ldots, a_{n-1}] = a_0 \oplus a_1 \oplus \cdots \oplus a_{n-1}$, where $f : T \rightarrow T$ is the elemental function, $\oplus : T \times T \rightarrow T$ the combinator (i.e. a binary associative operator) and $a = [a_0, a_1, \ldots, a_{n-1}] \in T^n$ an array of atomic elements. Let stencil $g\,k\,a' = [g(S_0), g(S_1), \ldots, g(S_{n-1})]$, where $S_i = [a'_{i-k}, \ldots, a'_{i+k}]$ is the $i$-th neighbourhood of radius $k$, and $a'$ is the infinite extension of $a$ (i.e. $\perp$ where $a$ is not defined).
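The following self-contained C++ sketch (an illustration of ours, not FastFlow's API) gives a sequential reading of these definitions, with a caller-supplied `bottom` value standing in for $\perp$ at the borders:

```cpp
#include <vector>

// map f [a0 .. a(n-1)] = [f(a0) .. f(a(n-1))]
template <typename T, typename F>
std::vector<T> map(F f, const std::vector<T>& a) {
    std::vector<T> out;
    out.reserve(a.size());
    for (const T& x : a) out.push_back(f(x));
    return out;
}

// reduce (+) [a0 .. a(n-1)] = a0 + a1 + ... + a(n-1)
template <typename T, typename Op>
T reduce(Op op, const std::vector<T>& a, T identity) {
    T acc = identity;
    for (const T& x : a) acc = op(acc, x);
    return acc;
}

// stencil g k a: g receives the radius-k neighbourhood S_i of each element;
// out-of-range positions (the "infinite extension") are filled with `bottom`.
template <typename T, typename G>
std::vector<T> stencil(G g, int k, const std::vector<T>& a, T bottom) {
    const int n = static_cast<int>(a.size());
    std::vector<T> out(n);
    for (int i = 0; i < n; ++i) {
        std::vector<T> nbh;                    // S_i = [a'(i-k) .. a'(i+k)]
        for (int j = i - k; j <= i + k; ++j)
            nbh.push_back((j >= 0 && j < n) ? a[j] : bottom);
        out[i] = g(nbh);
    }
    return out;
}
```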
In this work we consider a more general formulation of the stencil pattern, namely: stencil $g\,k\,a' = [g(a', 0), g(a', 1), \ldots, g(a', n-1)]$, which allows the function $g$ to access an arbitrary neighbourhood of elements of the input array. Notice that in both formulations some care must be taken to deal with undefined values $a_i' = \perp$. We remark that, from a functional perspective, the map and stencil patterns are very similar, the only difference being that the stencil elemental function takes as input a set of atomic elements rather than a single atomic element. Nevertheless, from a computational perspective the difference is substantial, since the semantics of the map admits an in-place implementation, which is in general impossible for the stencil. These parallel paradigms have been proposed as patterns for multicore and distributed platforms, GPUs, and heterogeneous platforms [12], [2]. They are well-known examples of data-parallel patterns, since the elemental function of a map/stencil can be applied to each input element independently of the others; likewise, applications of the combinator to different pairs in the reduction tree of a reduce can be performed independently, thus naturally inducing a parallel implementation.

The basic building block of Loop-of-stencil-reduce is the stencil-reduce pattern [4], which applies a reduce pattern to the result of a stencil application (i.e. functional composition). The stencil-reduce computation is iteratively applied, using the output of the stencil at the $i$-th iteration as the input of the $(i+1)$-th stencil-reduce iteration. Moreover, it uses the output of the reduce computation at the $i$-th iteration, together with the iteration number, as input of the iteration condition, which decides whether to proceed to iteration $i+1$ or stop the computation. We remark that, from a purely functional perspective, the Loop-of-stencil-reduce can simply be regarded as a chain of functional compositions. A 2-D formulation follows directly by replacing arrays with matrices. Since the stencil pattern is a generalisation of map, it follows that any combination of the aforementioned patterns (e.g. map-reduce, Loop-of-map-reduce, etc.) is subsumed by Loop-of-stencil-reduce.

B. The Game of Life example

We use Conway's Game of Life cellular automaton [13] as a running example in order to show the expressiveness of Loop-of-stencil-reduce. The building blocks of the Loop-of-stencil-reduce meta-pattern can be easily extracted from its pseudo-code (Fig. 1) in order to build a Loop-of-stencil-reduce formulation of Game of Life, which is illustrated in Fig. 2:

- stencil elemental function (lines 1–10);
- reduce combinator (lines 7–10);
- iteration condition (lines 15–16).

C. The FastFlow Loop-of-stencil-reduce API

In FastFlow, the Loop-of-stencil-reduce is aimed at supporting CPU-only and CPU+GPU platforms by using OpenCL (or CUDA). The FastFlow framework provides the user with constructors for building Loop-of-stencil-reduce instances, i.e. a combination of parametrisable building blocks:

- the OpenCL code of the elemental function of the stencil;
- the C++ and OpenCL codes of the combinator function;
- the C++ code of the iteration condition.

The language for the kernel codes implementing the elemental function and the combinator, which constitute the business code of the application, can be device-specific or a suitably specified C++ subset (e.g. the REPARA C++ open specification [14]).
Functions are provided that take as input the business code of a kernel function (elemental function or combinator) and translate it into a fully defined OpenCL kernel, which will be offloaded to target accelerator devices by the FastFlow runtime. Note that, from our definition of the elemental function (Sec. III-A), it follows that the Loop-of-stencil-reduce programming model is data-oriented rather than thread-oriented, since indexes refer to the input elements rather than to the work-item (i.e. thread) space, which is in turn the native programming model of OpenCL.

In order to build a Loop-of-stencil-reduce instance, the user also has to specify two additional parameters controlling parallelism: 1) the number of accelerator devices to be used (e.g. the number of GPUs in a multi-GPU platform) and 2) the maximum size of the neighbourhood accessed by the elemental function when called on each element of the input. Note that the second parameter could be determined by a static analysis of the kernel code in most cases of interest, i.e. those exhibiting a static stencil (e.g. Game of Life) or a dynamic stencil with reasonable static bounds (e.g. the Adaptive Median Filter [5]).

Once built, a Loop-of-stencil-reduce instance can process tasks by applying the iterative computation described in Sec. III-A to the input of each task, by way of the user-defined building blocks. An instance can run either in one-shot (i.e. single-task) or streaming (i.e. multi-task) mode. In streaming mode, independent tasks can be offloaded to different GPUs, thus exploiting inter-task parallelism. Moreover, intra-task parallelism can be employed by offloading a single task to a Loop-of-stencil-reduce instance deployed onto different GPUs. Although this poses some challenges at the FastFlow implementation level (see Sec. III-E), at the API level it requires almost negligible refactoring of user code. That is, when defining the OpenCL code of the elemental function, the user is provided with local indexes over the index space of the device-local sub-input (to be used when accessing the input) along with global indexes over the index space of the whole input (to be used, e.g., to check the absolute position with respect to the input size).

Fig. 1 presents the pseudocode of a sequential algorithm for Game of Life. The universe of the game is a matrix of cells (for simplicity, we consider a finite non-toroidal world), where each cell can be in one of two states: alive or dead. The initial $N \times N$ binary matrix is randomly initialised (line 1); then a do-while loop iterates over generations. The first generation is created by applying a set of transition rules to every cell in the initial matrix, and the process is iterated, generating generation $i+1$ from generation $i$, until either every cell is dead or an upper bound $G$ on the number of iterations has been reached. During a transition from one generation to the next, the events on each cell occur simultaneously in an atomic step of time (tick), and so each generation is a pure function of the preceding one. At each tick, each cell interacts with its eight neighbours and might turn into a live or dead cell depending on the number of live cells in its neighbourhood.
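A minimal sequential C++ rendering of this algorithm, our own sketch in the spirit of the Fig. 1 pseudocode, makes the stencil, reduce, and iteration-condition building blocks visible:

```cpp
#include <cstdlib>
#include <vector>

using Grid = std::vector<std::vector<unsigned char>>;

int main() {
    const int N = 512;   // world size
    const int G = 100;   // upper bound on the number of generations

    // Random initialisation of the N x N binary matrix.
    Grid world(N, std::vector<unsigned char>(N));
    for (auto& row : world)
        for (auto& cell : row) cell = std::rand() % 2;

    int gen = 0;
    long alive = 0;
    do {
        Grid next(N, std::vector<unsigned char>(N, 0));
        alive = 0;
        for (int i = 0; i < N; ++i) {
            for (int j = 0; j < N; ++j) {
                // Stencil: count live neighbours; out-of-range cells count
                // as dead (finite, non-toroidal world).
                int n_alive = 0;
                for (int di = -1; di <= 1; ++di)
                    for (int dj = -1; dj <= 1; ++dj) {
                        if (di == 0 && dj == 0) continue;
                        const int ii = i + di, jj = j + dj;
                        if (ii >= 0 && ii < N && jj >= 0 && jj < N)
                            n_alive += world[ii][jj];
                    }
                // Transition rules: survival with 2 or 3 live neighbours,
                // birth with exactly 3.
                next[i][j] = world[i][j] ? (n_alive == 2 || n_alive == 3)
                                         : (n_alive == 3);
                alive += next[i][j];       // the reduce part (live-cell count)
            }
        }
        world.swap(next);                  // each generation is a pure function
                                           // of the preceding one
    } while (alive > 0 && ++gen < G);      // the iteration condition
    return 0;
}
```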
D. Loop-of-stencil-reduce expressiveness

In the shared-memory model, Loop-of-stencil-reduce exhibits an expressiveness similar to that of the well-known map-reduce paradigm, where the map is an apply-to-all over a set of elements (list, array) and the computations on different elements are independent. In fact, there exists a quite straightforward way to express one in terms of the other: the Loop-of-stencil-reduce pattern can be trivially configured to behave as map-reduce (i.e. the set of neighbours of each element is the element itself).

Figure 4. Loop-of-stencil-reduce pattern general schema.

We advocate Loop-of-stencil-reduce adoption because it explicitly exposes data dependencies at the pattern declaration level (see Fig. 3, line 20). This naturally describes a wide class of data-parallel applications. Also, making the stencil explicit at the API level enables the kernel developer to reason about optimisations related to local memory, memory alignment, and the static optimisation of halo buffers in the distributed memory space of multiple GPUs.

E. The FastFlow implementation

The iterative nature of the Loop-of-stencil-reduce computation presents challenges for the management of the GPU's global memory across multiple iterations, i.e. across different kernel invocations. The general schema of the Loop-of-stencil-reduce pattern is described in Fig. 4. Its runtime is tailored to efficient loop-fashion execution. When a task is submitted for execution on the devices onto which the pattern is deployed, the runtime takes care of allocating on-device global memory buffers and filling them with input data via H2D copies. The naive approach to supporting iterative computations on a hardware accelerator equipped with global memory (e.g. a GPU) would consist in placing a global synchronisation barrier after each iteration of the stencil, reading the result of the stencil back from the device buffer (full-size D2H copy), copying the output back to the device input buffer (full-size H2D copy), and proceeding to the next iteration. FastFlow instead employs device memory persistence on the GPU across multiple kernel invocations, by simply swapping on-device buffers. In the case of multi-device intra-task parallelism (Sec. III-C), small device-to-device copies are required after each iteration, in order to keep halo borders aligned, since no direct copy mechanism between distinct devices is available (as of the OpenCL 2.0 specification).

The following excerpt of Fig. 3 shows how the OpenCL elemental function for Game of Life is defined by way of a source-to-source helper:

```
std::string golKernel = stencilKernel2D_OCL(
    "unsigned char", "in",   // element type and input
    "N", "M",                // rows and columns
    "i", "j",                // row and column (global and local) indexes
    /* begin OpenCL code */
    "unsigned char n_alive = 0;                          \n"
    "n_alive += (i > 0 && j > 0) ? in[i-1][j-1] : 0;     \n"
    /* ... the remaining seven neighbours are accumulated likewise ... */
    "return (n_alive == 2);"
    /* end OpenCL code */
);
```

The per-iteration runtime schema (Fig. 4) can be summarised as:

```
while cond
  before(...)                                  // On host, iteration initialisation, possibly in parallel on CPU cores
  prepare(...)                                 // On device, swap I/O buffers, set kernel args, d2d-sync overlays
  stencil<SUM_kernel, MF_kernel>(input, env)   // On GPU, stencil and partial reduce
  reduce op data                               // On host, final reduction
  after(...)                                   // On host, iteration finalisation, possibly in parallel on CPU cores
read(output)                                   // d2h-copy output
```

FastFlow does not provide any automatic facility to convert C++ code into OpenCL code. It does, however, facilitate this task via a number of features, including:

- Integration of the same pattern-based parallel programming model for both CPUs and GPUs. Parallel activities running on CPUs can be coded either in C++ or in OpenCL.
- Setup of the OpenCL environment.
- Simplified data feeding to both software accelerators and hardware accelerators (with asynchronous H2D and D2H data movements).
- Orchestration of parallel activities and synchronisations within kernel code (e.g. the reduce tree), synchronisations among kernels (e.g. stencil and reduce in a loop), and management of data copies (e.g. halo-swap buffer management).
- Transparent usage of multiple GPUs on the same box (sharing the host memory).

Fig. 3 illustrates a Game of Life implementation on top of the Loop-of-stencil-reduce API in FastFlow. Source-to-source functions are used to generate OpenCL kernels for both the stencil elemental function (lines 1–12) and the reduce combinator (lines 14–15). The source codes, OpenCL versions of the pseudocode in Fig. 2, are wrapped into fully defined, efficient OpenCL kernels. The user, in order to enable the exploitation of intra-task parallelism, has to use the local indexes i and j to access elements of the input matrix. The C++ codes for the iteration condition and the reduce combinator are not reported, as they are trivial single-line C++ lambdas. The constructor (lines 17–19) builds a Loop-of-stencil-reduce instance by taking the user-parameterised building blocks as input, plus the identity element for the reduce combinator (0 for the sum) and the parameters controlling intra-task parallel behaviour, namely the number of devices to be used for a single task (NACC) and the 2D maximum sizes of the neighbourhood accessed by the elemental function (Game of Life is based on 3-by-3 neighbourhoods). Finally, the constructor is parametrised with a template type goTask, which serves as an interface for basic input-output between the application code and the Loop-of-stencil-reduce instance.

Global memory persistence is quite common in iterative applications because it drastically reduces the need for H2D and D2H copies, which can otherwise severely limit the speedup.
The need for global memory persistence also motivates the explicit inclusion of the iterative behaviour in the Loop-of-stencil-reduce pattern design, which is one of the differences with respect to solutions adopted in other frameworks, such as SkePU [2]. As a further optimisation, FastFlow exploits OpenCL events to keep the Loop-of-stencil-reduce computation as asynchronous as possible. No dependencies exist between stencil and reduce computations at different iterations; put another way, stencil and reduce computations can be pipelined (i.e. the stencil at iteration $i+1$ can run in parallel with the reduce at iteration $i$). Moreover, in the case of multi-GPU intra-task parallelism, sub-tasks running on different GPUs at the same iteration are independent of each other, and so can run in parallel. By exploiting the OpenCL events API, an almost arbitrary graph of task dependencies can be implemented, thus fully exploiting all the available parallelism among the operations composing a Loop-of-stencil-reduce computation. We remark that providing the user with the low-level, platform-specific optimisations mentioned above is one of the key features of the skeleton-based parallel programming approach.

IV. EXPERIMENTAL EVALUATION

Here we present a preliminary assessment of the Loop-of-stencil-reduce FastFlow implementation on top of OpenCL. For this, two applications are used: the Game of Life application, described in Sec. III-B, and the two-phase video restoration algorithm. For more details on the video restoration algorithm we refer to [5], [4]. All experiments were conducted on an Intel workstation with two eight-core, double-context (2-way hyper-threading) Xeon E5-2660 CPUs @2.2GHz, 20MB L3 shared cache, 256KB L2 cache, and 64 GBytes of main memory, also equipped with two NVidia Tesla M2090 GPUs, running Linux x86_64.

\textbf{a) Game of Life:} Table I reports execution times of different deployments of the Game of Life application: 1) a CPU deployment with multiple threads running on the cores of a multi-core CPU and relying on the OpenCL runtime for exploiting parallelism; 2) a 1xGPU deployment running on a single M2090 NVidia GPU; 3) a 2xGPU deployment exploiting intra-task parallelism (as discussed in Sec. III-E) over two M2090 devices. First, performance usually benefits from offloading data-parallel computations onto GPU devices, as demonstrated by the fact that in all cases execution times on the GPU are lower than the respective execution times on the CPU. The main factor limiting the GPU-vs-CPU speedup is the ratio of the time spent in H2D and D2H memory transfers over the effective computing time. If few iterations and/or small matrices are considered, then the overhead due to memory transfers becomes relevant and limits the impact of parallelism, in accordance with Amdahl's law. The same considerations apply to the impact of intra-task parallelism (2xGPU vs 1xGPU). The benefit of exploiting two boards is limited if the population matrix is too small, since the amount of memory to be transferred D2D after each iteration in general scales up with a fraction of the input size. In particular, for the Game of Life application, the amount of memory to be transferred is $O(N)$, thus it scales up with the square root of the $N \times N$ input size: doubling $N$ quadruples the per-iteration compute but only doubles the halo traffic, so the relative overhead vanishes for large matrices. When the overhead becomes almost negligible (e.g. the last row of Table I), intra-task parallelism provides an almost ideal speedup.
\textbf{b) Two-phase video restoration:} Two kinds of experiment are reported: 1) performance over a video stream of different deployments (i.e. different parallelisation schemas) of the restore stage; 2) performance on a single image of both single-device and multi-device configurations of the Loop-of-stencil-reduce. Table II shows the observed results. The upper part reports the throughput (i.e. frames per second) obtained by running different deployments of the restore stage over a video stream under different noise-level conditions, which in turn require different numbers of iterations for convergence. The “CPU” deployment is the baseline: each frame is passed through a Loop-of-stencil-reduce OpenCL version of the filter, deployed onto the (cores of the) CPU. Defining a single video frame as a task, this configuration exploits intra-task data parallelism on each frame. The baseline is compared against different GPU deployments of the Loop-of-stencil-reduce. The “1 GPU” version exploits the same intra-task parallelism as the baseline version but runs on the GPU. The “2 GPUs intra-task” version exploits intra-task data parallelism by splitting single frames across two GPUs, and finally the “2 GPUs” version applies the “1 GPU” version to successive (independent) frames of the video stream, each offloaded to one of the two GPUs (by way of a FastFlow pattern). The performance ratio among the different versions is consistent with a hand-tuned development of the “1 GPU” version [5]. For applications of this kind, the GPU deployment is, not surprisingly, several times faster. The deployment on 2 GPUs exhibits 65% more throughput with respect to the single-GPU version. Also, the 2 GPUs version working on the same video frame exhibits almost the same performance as the 2 GPUs version working on independent kernels, suggesting that the Loop-of-stencil-reduce succeeds in keeping the halo-swap overhead quite limited. The lower part of Table II reports the execution time of the filter when applied to a single large image. Here there is no opportunity to exploit parallelism among different frames. In this case, using a multi-device deployment of the OpenCL Loop-of-stencil-reduce restore stage can lead to the full exploitation of the aggregated computational power of multiple GPUs, as shown by the almost linear speedup observed.

V. CONCLUSIONS AND FUTURE WORK

In this work we have presented Loop-of-stencil-reduce, a parallel pattern specifically targeting high-level programming for heterogeneous platforms. The Loop-of-stencil-reduce pattern abstracts a common data-parallel programming paradigm, which is general enough to subsume several popular patterns such as map, reduce and map-reduce. It significantly simplifies the development of code targeting both multicores and GPUs by transparently managing device detection, device memory allocation, H2D/D2H memory copies and synchronisation, the reduce algorithm implementation, the management of persistent global memory on the device across successive iterations, and data race avoidance. The same code using Loop-of-stencil-reduce can be deployed on multiple GPUs and on combinations of CPU cores and GPUs. It should be noticed, however, that this latter deployment requires a careful (and not always possible) planning of the load to be distributed to CPU cores and GPUs, due to their difference in performance [15]. The Loop-of-stencil-reduce pattern has been tested on a real-world application, i.e. an image restoration application, which is typically too slow to be actually usable when implemented on a single CPU or even on a 32-core platform.
Also, the application requires access to the image along three successive filtering iterations to determine the convergence of the process, thus needing a quite complex design with large temporary data sets that should be moved across different memories as little as possible. The presented design based on the FastFlow Loop-of-stencil-reduce makes it possible to easily implement the application with performance comparable to hand-optimised OpenCL code. Despite the fact that the Loop-of-stencil-reduce pattern is currently provided to programmers by way of C-style macros, we have already planned to substantially improve the embedding of the Loop-of-stencil-reduce pattern into the C++ language by way of a C++ demacrofication process [16] and/or the C++11 attributes mechanism. This process is already ongoing within the REPARA project.

ACKNOWLEDGMENT

This work has been supported by the EU FP7 REPARA project (no. 609666) and by the NVidia GPU Research Center.

REFERENCES
{"Source-Url": "https://iris.unito.it/retrieve/handle/2318/1523738/52857/15_RePara_ISPA.pdf", "len_cl100k_base": 6912, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 24151, "total-output-tokens": 8416, "length": "2e12", "weborganizer": {"__label__adult": 0.0005483627319335938, "__label__art_design": 0.0006928443908691406, "__label__crime_law": 0.000514984130859375, "__label__education_jobs": 0.00045418739318847656, "__label__entertainment": 0.0001366138458251953, "__label__fashion_beauty": 0.00023818016052246096, "__label__finance_business": 0.00025391578674316406, "__label__food_dining": 0.0004355907440185547, "__label__games": 0.0011625289916992188, "__label__hardware": 0.005157470703125, "__label__health": 0.0006971359252929688, "__label__history": 0.00047898292541503906, "__label__home_hobbies": 0.00016987323760986328, "__label__industrial": 0.0008521080017089844, "__label__literature": 0.0003266334533691406, "__label__politics": 0.0003674030303955078, "__label__religion": 0.0009007453918457032, "__label__science_tech": 0.137939453125, "__label__social_life": 8.577108383178711e-05, "__label__software": 0.00841522216796875, "__label__software_dev": 0.83837890625, "__label__sports_fitness": 0.0004584789276123047, "__label__transportation": 0.001140594482421875, "__label__travel": 0.00032210350036621094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35498, 0.0183]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35498, 0.46657]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35498, 0.88114]], "google_gemma-3-12b-it_contains_pii": [[0, 666, false], [666, 6328, null], [6328, 12880, null], [12880, 17643, null], [17643, 23979, null], [23979, 28444, null], [28444, 35498, null]], "google_gemma-3-12b-it_is_public_document": [[0, 666, true], [666, 6328, null], [6328, 12880, null], [12880, 17643, null], [17643, 23979, null], [23979, 28444, null], [28444, 35498, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35498, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35498, null]], "pdf_page_numbers": [[0, 666, 1], [666, 6328, 2], [6328, 12880, 3], [12880, 17643, 4], [17643, 23979, 5], [23979, 28444, 6], [28444, 35498, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35498, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
983727a26ff4b45d3b37410ac05de7d201303ccc
Advanced Programming Classes in Imagine

Daniela Lehotska
Faculty of Mathematics, Physics and Informatics
Comenius University, Bratislava, Slovakia
lehotska@fmph.uniba.sk

Abstract

Future teachers of informatics at the Faculty of Mathematics, Physics and Informatics take the Interactive programming and visual modelling course in the second year of their study. The aim of the course is to familiarize students with the Imagine environment and its use for the development of educational applications, and to teach the principles of developing interactive, visual and open microworlds. In the academic year 2004–2005 we offered students, for the first time, a continuation of this course – an optional seminar, Programming classes in Imagine. The aim of this seminar is to apply more advanced programming methods to the development of medium-sized educational projects intended for children. In our paper we present the topics of the seminar. We give special attention to one class focused on working with programmable pictures. We show how to create a microworld for creating and editing Escher tiles. We hope to illustrate several advanced Imagine Logo procedures and techniques (like higher-order procedures, modifying colours in shapes etc.), which the reader may productively apply in his/her own development of other microworlds.

Keywords

advanced programming in Imagine, tessellation, programmable pictures

1. Introduction

There are three streams of courses in the present concept of the informatics teacher study programme at the Faculty of Mathematics, Physics and Informatics: essential mathematics, special informatics courses and didactics of informatics. The special informatics courses can be structured into three categories: a programming category, applied subjects, and optional subjects. The programming category is based above all on the Delphi programming environment. In the second year of the study, students take the Interactive programming and visual modelling course, which introduces the Imagine environment. Programming in the Imagine environment builds a connection among modern trends in programming (object-oriented, visual, parallel, event-based programming), an environment for the development of educational applications and a programming environment/language intended for teaching programming (Blaho & Kalas, 2004).

In the academic year 2004–2005, for the first time, we offered students a continuation of the course – an optional seminar, Programming classes in Imagine. The aim of the seminar is to apply more advanced programming methods to the development of medium-sized educational projects intended for children, and to learn some tips and tricks in Imagine. Two lessons per week have been allocated to this seminar. During these two lessons, students together with the teacher developed one or more small projects oriented towards a given topic. Students were asked to do homework every week, which was either completing a project from the seminar or creating a new small project using the same methods as they had learned during the class. We worked on some projects over several seminars – we gradually improved them and added more functionality and settings; e.g. first we made a project for a simple player, later we created its net version, etc.

2. Seminar topics

During the term we met students nine times. The corresponding nine topics are described briefly in the following table.

Table 1. Topics of the seminar, methods used to solve the problems and description of projects.
<table>
<thead>
<tr><th>topic</th><th>instructions, problems</th><th>projects description</th></tr>
</thead>
<tbody>
<tr><td>programmable shapes</td><td>spline, drawspline, outline map, generate</td><td>screensaver – curves created as a spline from the positions of several randomly moving turtles</td></tr>
<tr><td>cutting and stamping pictures</td><td>putPicture, getImage, createSelection, define, choosers for opening and saving files</td><td>jigsaw maker – choosing a picture for a puzzle, cutting it into an adjustable count of pieces, creating turtles (jigsaw pieces) with the shapes of these cuttings</td></tr>
<tr><td>working with audio and video</td><td>wave, midi melody and video objects, play, playWave</td><td>playing icicles – recording and playing melodies created by clicking on icicles; each icicle represents a certain tone</td></tr>
<tr><td>working with mouse cursor and keymenu</td><td>simulating a big mouse cursor (2), setmc, keymenu</td><td>playing icicles – improving the previous version: adding a stick as mouse cursor to strike the icicles, and the possibility to control the icicles with the keyboard</td></tr>
<tr><td>changing the colour of a picture (not of a programmable shape)</td><td>shapecolor, tintcolor; saving for web, saving project segments</td><td>building blocks – creating “constructions” using several types of building blocks, which can be coloured with a colour from a palette; saving the project for the web, and what the restrictions are in such a case</td></tr>
<tr><td>working with net 1: sending text messages</td><td>net object and its properties, connect, user, connected?, send</td><td>simple chat project for sending (and receiving) text messages to all or specified net users</td></tr>
<tr><td>working with net 2: sending and receiving objects and instructions</td><td>sendObject, sendRun, onReceiveObject, runEnabled</td><td>building blocks – net version of the project; net drawing – every net user has their own turtle for drawing, and the drawing is sent to everyone in the net; bomb game – anybody in the net can create a bomb that starts ticking from 10 down to 1; when clicking on the bomb, the user sends it to a random player; if the bomb reaches 0, it explodes</td></tr>
<tr><td>working with net 3</td><td>defining the net object style during the run of the program, different functionality of server and client</td><td>In all previous net projects, a net object is created at the beginning and its style (server, client) is fixed; the project does not depend on the net object style, as server and client do the same. In the moodmeter project, the net object is created during the run of the program and its style is defined according to the decision of the user. The client can choose its mood from five given smilies; its selection is sent to the server, which processes the clients’ data, creates a graph and sends it to all clients.</td></tr>
<tr><td>graphs</td><td>graph representation</td><td>project for the representation and visualization of graphs, simple graph algorithms</td></tr>
</tbody>
</table>

(2) You can use your own picture (also a programmable one) for a mouse cursor, but the size of the picture can be at most 32x32 pixels. So if you want to use a bigger cursor, you have to simulate it by a turtle.

3. Tessellation project

The geometric meaning of the word tessellate is to cover the plane with a pattern in such a way as to leave no region uncovered (Schwartzman, 1994).
Tessellations occur naturally in the world, and are frequently used in designs for works of art and architecture. The most famous art tessellations come from M. C. Escher, who can be regarded as the 'Father' of modern tessellations. Tessellation can assist students in conceptualising infinity, learning about the different types of symmetry, and making observations about how colours and shapes affect perception.

Our tessellation project should enable creating a tessellating tile by square modification (deformation) and subsequently doing a tessellation with it. Let us do a simple analysis of the project. A tile will be represented by a turtle with, at the beginning, the shape of a square. Several small black points (turtles) will be placed on one side of the square. It will be possible to drag these points, except the vertices, and deform the square in this way. The deformation will also be applied, in a certain modified way, to another side of the square. There are many possible modifications, e.g. translation, rotation around a vertex, reflection through the diagonal, etc. First of all we realize the modification where the left vertical side is translated to the right vertical side. The horizontal sides remain unchanged.

3.1. Creating a tile

First, we create a turtle-tile at the position [0 0], with the shape of a yellow square sized 100, and name it tile:

```
new "turtle [name tile pos [0 0] fillColour yellow
  shape [polygon [repeat 4 [fd 100 rt 90]]]]
```

It would be nice to enable resizing of the square. To do this we put a slider named sWidth with the range from 2 to 10 on the page. The width of the tile will be 10 times the value of the slider. The width of the tile will be used in many procedures later, so we set up a variable tileWidth that is updated whenever the value of the slider changes. We define a tile's procedure myShape0 for changing the shape of the square according to the actual value of tileWidth, and we define the onChange event of the sWidth slider like this: make "tileWidth 10*value tile'myShape0.

```
to myShape0
  setShape ![polygon [repeat 4 [fd :tileWidth rt 90]]]
end
```

We create a new class for the small draggable points: with the common name prefix point, the autoDrag property switched on, the shape of a point of size 8, and pen up:

```
newClass "turtle "point [common:namePrefix point autoDrag true
  shape [point 8] pen pu]
```

Now we generate some points placed on the left vertical side of the square with 10-pixel spacing. If the square width is 100, then we must create 11 points (in order to include both segment vertices). The vertices should be fixed, so we switch off their autoDrag property. As the count of the points depends on the slider's value, we define a procedure for generating them – newTile.

```
to newTile
  eraseObject allOf "point
  repeat (sWidth'value + 1)
    [new "point [xCor 0 yCor (10*(repc-1))]]
  ask se first allOf "point last allOf "point [setAutodrag "false]
end
```

Now it's time to make it possible to modify the tile's shape while dragging the points. We add an onDrag event to the points with the definition tile'myShape1, where myShape1 is a tile's procedure for changing the shape using the translation modification (see figure 2). The shape of the tile is created by connecting corresponding points with line segments (see figure 3). Let \( v = [:tileWidth\ 0] \) be the vector of translation.
Let \( A_0, \ldots, A_{10} \) (see figure 3) be the positions of the point-turtles on the left side of the square. We collect them in the z1 list. By translating the points \( A_0, \ldots, A_{10} \) (by the vector \( v \)) we get the points on the right side of the square, \( A'_i = A_i + v \). They are stored in the z3 list. The outline of the tile is made by connecting the points \( A_0, A_1, \ldots, A_{10}, A'_{10}, A'_{9}, \ldots, A'_0 \) (in this order!) with line segments. We use the outline command to define a proper shape, with a list created as the concatenation of the z1 list and the reversed z3 list as a parameter. If we use – as an extension – the polygon command, we get not just an outline but a filled tile.

```
to myShape1
  ; left vertical side (positions of point-turtles)
  let "z1 map [ask :% [pos]] allOf "point
  ; right vertical side is created by translating the left one by [:tileWidth 0]
  let "translation ![:tileWidth 0]
  let "z3 map [:% + :translation] reverse :z1
  ; merging z1 and z3 lists
  let "shape se :z1 :z3
  setShape ![polygon [outline :shape]]
end
```

Now, if we move the slider, the square shape is changed, but the points remain at the same positions. When the size of the tile changes, the shape of the tile must be changed and all the point-turtles must be regenerated. We define these two actions in the newTile procedure and then change the onChange event of the slider to: newTile.

```
to newTile
  make "tileWidth 10*sWidth'value
  eraseObject allOf "point
  tile'myShape0
  repeat (sWidth'value + 1)
    [new "point [xCor 0 yCor (10*(repc-1)) onDrag [tile'myShape1]]]
  ask se first allOf "point last allOf "point [setAutodrag "false]
end
```

Notice that we also set the onDrag event of the point-turtles to tile'myShape1.

The shape of the tile is angular. What about adding the possibility of creating a rounded shape as well? We create two buttons named bAngular and bRound – both switches with the same group number (e.g. 1) and with the All Buttons May Be Off property switched off; possibly we draw some appropriate icons for them. We have to change the myShape1 procedure. To create a rounded shape, we use the spline operation. Spline gets a list of points as input and outputs a modified list of points – several additional points are inserted into the list so that the whole curve is smooth. Important: the lists of points on the left and right side must be splined independently.

```
to myShape1
  ...                                ; compute :z1 and :z3 as before
  let "shape ifElse bAngular'down
    [se :z1 :z3]
    [se spline :z1 spline :z3]
  setShape ![polygon [outline :shape]]
end
```

The shape of the tile should now change when pushing the bAngular and bRound switches, so we add the tile'myShape1 command to their onPush events.

Figure 4. Example of an angular and a rounded tile.

3.2. Tessellation

The tile is ready, let's do the tessellation. There are two ways of doing a tessellation: automatically (we define a procedure for fitting tiles properly to each other) or manually (we create many copies of the tile and let the user put them on the plane). We show the automatic way. We add another page to the project – this will be the place for the tessellation. The simplest solution for creating the tessellation would be to use the putPicture command with the shape of the tile-turtle from page1 as the first parameter and the style property set to the value tile: putPicture page1'tile'shape [style tile]. If we try that, we run into some limitations of this solution.
The first of them is that all tiles are black, due to the fact that the putPicture command gets from the turtle only the drawing list, without any fill colour information. Another limitation is that the tiles do not fit into each other, see figure 5. The reason is that putPicture stamps the rectangle in which the shape is inscribed and shifts by the length of that rectangle, instead of shifting by the original width of the square that is the basis of the tile.

Figure 5. Example of an incorrectly made tessellation using the putPicture command.

So we cannot use the putPicture command; we have to define the tessellation ourselves. We will use a turtle that moves by the step :tileWidth on page2 and stamps the tile's shape. To make the calculation easier, we will set the origin of page2 so that it is in the lower left corner (X 0, Y 499 with the default size settings). Here is how to create the stamping turtle:

```
new "turtle [name stamp pen pu fillColour yellow pos [0 0]
  rangeStyle window shown false]
```

On the transition from page1 to page2, the stamp-turtle gets the tile's shape and then realises the tessellation. This is defined in the prepare procedure, which is called when the page2'onShowPage event comes up. The tessellation is realised as a two-dimensional cycle (through rows and columns). While stamping the tile, we switch between two colours, yellow and red. To change the colour of the stamp's shape, we have to change its fillColour and then reset its shape.

```
to tessellate1
  cs
  let "columns 1 + div (first page1'size) :tileWidth
  repeat 1 + div (last page1'size) :tileWidth
    [repeat :columns
      [; colours is a global variable with starting value [yellow red]
       setFc first :colours
       setShape shape
       stamp
       make "colours reverse :colours   ; changing the order of the colours
       setXCor xCor + :tileWidth]
     if 0 = mod :columns 2 [make "colours reverse :colours]
     setXY 0 yCor + :tileWidth]
end
```

Figure 6. Example of a correctly made tessellation using our own procedure.

To finish the first version of the project we link the two pages using buttons.

3.3. Other types of tiles

In this part another three possibilities of tile modification are presented (see figure 7).

Figure 7. Three tile modifications: a) rotation, b) rotation and translation, c) rotation and reflection.

To realise these three modifications, we define three procedures of the tile-turtle: myShape2, myShape3 and myShape4. We use the local variables z1, z2, z3 and z4 to denote the lists of points of the left vertical, upper horizontal, right vertical, and bottom horizontal side (in that order). The procedures are commented in the code. Notice that each point transformation is applied to all points in parallel, by mapping the corresponding procedure over the whole list of points.

a) Rotation (see Figure 7a)

```
to myShape2
  ; left vertical side (positions of point-turtles)
  let "z1 map [ask :% [pos]] allOf "point
  ; bottom horizontal side is created from the left vertical side
  ; by rotation around [0 0] by 90 degrees
  let "z4 map [rotatePoint [0 0] 90 :%] reverse :z1
  ; upper right corner
  let "b ![[:tileWidth :tileWidth]]
  ; merging the z1 list, b and the z4 list
  let "shape ifElse bAngular'down
    [(se :z1 :b :z4)]
    [se spline :z1 spline :z4]
  setShape ![polygon [outline :shape]]
end
```

In this procedure we use our rotatePoint procedure, which outputs the coordinates of a point rotated around a given centre by a given angle.
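The rotatePoint procedure itself is not listed; for reference, the standard plane-rotation formula it would implement is the following (taking positive angles clockwise, consistent with the way rotatePoint is used here):

\[
P' = C + \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} (P - C)
\]

For \( \theta = 90^\circ \) around \( C = [0\ 0] \) this reduces to \( (x, y) \mapsto (y, -x) \), which indeed maps the left vertical side of the square (x = 0) onto the bottom horizontal side (y = 0).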
b) Rotation and translation (see Figure 7b)

```
to myShape3
  let "z1 map [ask :% [pos]] allOf "point
  ; right vertical side is created by translating the left one by [:tileWidth 0]
  let "translation ![:tileWidth 0]
  let "z3 map [:% + :translation] reverse :z1
  ; bottom horizontal side is created from the left vertical side
  ; by rotation around [0 0] by 90 degrees
  let "z4 map [rotatePoint [0 0] 90 :%] reverse :z1
  ; upper horizontal side is created by translating the bottom one by [0 :tileWidth]
  let "translation ![0 :tileWidth]
  let "z2 map [:% + :translation] reverse :z4
  ; merging all the lists into one
  let "shape ifElse bAngular'down
    [(se :z1 :z2 :z3 :z4)]
    [se spline :z1 spline :z2 spline :z3 spline :z4]
  setShape ![polygon [outline :shape]]
end
```

c) Rotation and reflection (see Figure 7c)

```
to myShape4
  let "z1 map [ask :% [pos]] allOf "point
  ; bottom horizontal side is created from the left vertical side
  ; by rotation around [0 0] by 90 degrees
  let "z4 map [rotatePoint [0 0] 90 :%] reverse :z1
  let "b ![:tileWidth :tileWidth]
  ; upper horizontal side is created by reflection (with the diagonal
  ; as the symmetry line) from the left vertical side
  let "z2 map [:b - reverse :%] reverse :z1
  ; right vertical side is created from the bottom horizontal side
  ; by the same reflection
  let "z3 map [:b - reverse :%] reverse :z4
  ; merging all the lists into one
  let "shape ifElse bAngular'down
    [(se :z1 :z2 :z3 :z4)]
    [se spline :z1 spline :z2 spline :z3 spline :z4]
  setShape ![polygon [outline :shape]]
end
```

We add four buttons for setting the type of the tile modification – these are switches with the group number set to 2 and the All Buttons May Be Off property switched off. When any of these buttons is pushed, we set the global(3) variable tileType to the number which determines the type of the tile modification, and then call the general procedure myShape of the tile-turtle. This procedure decides – based on the tileType value – whether myShape1, myShape2, myShape3 or myShape4 should be run. So the onPush event for the first button looks like: make "tileType 1 tile'myShape. The myShape procedure can be defined in this short way:

```
to myShape
  run word "myShape :tileType
end
```

(3) Global, because we also need it on page2 for determining the type of the tessellation.

Now we have to rewrite the onDrag event of the point-turtles and the onPush events of the bAngular and bRound buttons to: tile'myShape. Also the way of tessellating has to be changed, because the tessellate1 procedure is not appropriate for all types of tiles. E.g. a simple way of tessellating with the tile created by rotation (myShape2) is to stamp not 1, but 4 tiles in one step – the original one and its 90, 180 and 270 degree rotations.

```
to tessellate2
  cs
  repeat 1 + div (last page1'size) 2*:tileWidth
    [repeat 2 + div (first page1'size) 2*:tileWidth
      [repeat 4
        [setFc first :colours
         setShape shape
         stamp
         rt 90
         make "colours reverse :colours]
       setXCor xCor + 2*:tileWidth]
     setXY 0 yCor + 2*:tileWidth]
end
```

We also define the tessellate3 and tessellate4 procedures for the two remaining tile types. To run the proper version, we rewrite the prepare procedure on page2.

```
to prepare
  setShape page1'tile'getShape
  run word "tessellate :tileType
end
```

3.4. Further suggestions

In this part we present suggestions for further development and improvement of the project. They are structured in three groups.

1. **Creating the tile.**
• We can add more types of tiles. There are many combinations of congruent plane transformations that can be applied to one or more sides of the square.
• We have used a square as the basic shape, but it is also possible to start from other plane figures, such as an equilateral triangle, a regular hexagon, a parallelogram, a diamond etc.
• The draggable points were placed on the side evenly, with a spacing of 10 pixels. The spacing (or count) of the points could be defined by a slider. Potentially, it could be possible to drag an arbitrary point of the square side.

2. **Automatic tessellation.** We used two strictly defined colours (yellow and red) for the tessellation. One way of changing this is to fill the tiles with random colours. But it would be nice to leave the choice of the colours to the user. There are several possibilities for realizing this.
• Before doing the tessellation, the user chooses two colours from a palette.
• We do the tessellation with a blank filling (white, in fact) and afterwards offer the user a colour palette and a fill tool to fill the tiles as he/she wants.

3. **Manual tessellation** has already been mentioned. To do the tessellation manually, the user should have tools for rotating (by 90°, which will do in our project) and reflecting tiles, setting their colours and, of course, dragging them. To help the user, the tiles could stick to the closest grid position after releasing the mouse button.

It is apparent that a project containing all these possibilities, functions and settings cannot be developed in one or two lessons. We could probably elaborate on it for the whole term, but then the students would not meet many other interesting topics and ideas. So we leave these further suggestions as motivation.

You can try the project at [http://user.edi.fmph.uniba.sk/lehotska/tessellation.html](http://user.edi.fmph.uniba.sk/lehotska/tessellation.html) or download the imp-file from [http://user.edi.fmph.uniba.sk/lehotska/tessellation.zip](http://user.edi.fmph.uniba.sk/lehotska/tessellation.zip).

**References**
{"Source-Url": "http://eurologo2005.oeiizk.waw.pl/PDF/E2005Lehotska.pdf", "len_cl100k_base": 5673, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 23743, "total-output-tokens": 6356, "length": "2e12", "weborganizer": {"__label__adult": 0.000667572021484375, "__label__art_design": 0.00334930419921875, "__label__crime_law": 0.000732421875, "__label__education_jobs": 0.19287109375, "__label__entertainment": 0.00026702880859375, "__label__fashion_beauty": 0.0004794597625732422, "__label__finance_business": 0.000812530517578125, "__label__food_dining": 0.001125335693359375, "__label__games": 0.0013580322265625, "__label__hardware": 0.0017366409301757812, "__label__health": 0.0013751983642578125, "__label__history": 0.0013446807861328125, "__label__home_hobbies": 0.0008411407470703125, "__label__industrial": 0.0018014907836914065, "__label__literature": 0.0011358261108398438, "__label__politics": 0.0006604194641113281, "__label__religion": 0.0013360977172851562, "__label__science_tech": 0.1676025390625, "__label__social_life": 0.0006399154663085938, "__label__software": 0.018798828125, "__label__software_dev": 0.5986328125, "__label__sports_fitness": 0.0006589889526367188, "__label__transportation": 0.001148223876953125, "__label__travel": 0.0006136894226074219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23657, 0.02263]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23657, 0.56638]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23657, 0.87126]], "google_gemma-3-12b-it_contains_pii": [[0, 3026, false], [3026, 5781, null], [5781, 8134, null], [8134, 10305, null], [10305, 12848, null], [12848, 15104, null], [15104, 16682, null], [16682, 18793, null], [18793, 20952, null], [20952, 23657, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3026, true], [3026, 5781, null], [5781, 8134, null], [8134, 10305, null], [10305, 12848, null], [12848, 15104, null], [15104, 16682, null], [16682, 18793, null], [18793, 20952, null], [20952, 23657, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23657, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23657, null]], "pdf_page_numbers": [[0, 3026, 1], [3026, 5781, 2], [5781, 8134, 3], [8134, 10305, 4], [10305, 12848, 5], [12848, 15104, 6], [15104, 16682, 7], [16682, 18793, 8], [18793, 20952, 9], [20952, 23657, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23657, 0.05]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
a4f4ed0cb8d945b83ce645c19dded3cb34ab4c7a
Collaborative systems modeling and group model building: a useful combination?

E.A.J.A. (Etienne) Rouwette(1)
S.J.B.A. (Stijn) Hoppenbrouwers(2)

(1) Methodology Department, Faculty of Management, Radboud University Nijmegen, Thomas van Aquinostraat 1.2.31, PO Box 9108, 6500 HK Nijmegen, The Netherlands, tel +31 24 3611468, fax +31 24 3611599, e.rouwette@fm.ru.nl

(2) Computer Science Department, Faculty of Science, Radboud University Nijmegen, Heijendaalseweg 135, PO Box 9020, 6500 GL Nijmegen, The Netherlands, tel +31 24 3652645, fax +31 24 3653356, stijnh@cs.ru.nl

The authors would like to thank Thomas Beck for his comments on a previous version of this paper.

Abstract

Client involvement in modeling is the hallmark of simulation-based methodologies and of applied fields such as information systems development and environmental modeling. Unfortunately, a comparison of assumptions and an exchange of practical guidelines have failed to take place between methodologies and fields of application. We hope to work towards such an exchange by making an initial comparison between collaborative techniques from information systems development and system dynamics. Collaborative systems modeling refers to client involvement in information systems development. The field has decades of experience in developing formal models of business processes and related information structures, and has spawned a range of methods and tools to involve clients in modeling. There is ample evidence concerning the usefulness of alternative approaches. A large part of the literature on group model building covers similar topics. Recent discussions that raised attention in both fields point to further similarities: repeatability of the modeling process (versus dependence on the skill of the modeler), quality of modeling, and implementation of results. In this paper we explore whether both approaches to client involvement can learn from each other. We look at differences and commonalities between goals, modeling languages, procedures and methods, and tools and techniques.

**Introduction**

Stakeholders, experts and decision makers have always been a crucial information source for system dynamics modelers (Forrester, 1961, 1992). Data on central model elements, such as the policies that show how information is converted into action, are typically only available in decision makers' mental models. From the 1970s on, the role of client involvement in the implementation of results received explicit attention (e.g. Roberts, 1973). Procedures for involving clients were discussed soon thereafter (for an overview of approaches, see Andersen, Vennix, Richardson, & Rouwette, 2007). Group model building (GMB) emerged as a general term for system dynamics modeling in close cooperation with clients. In operational research and the systems field, similar developments took place, and a suite of approaches developed under the name of Problem Structuring Methods (PSMs). In developing their approaches, practitioners from the system dynamics and PSM fields have frequently compared their approaches, borrowed each other's techniques and discussed their methodological assumptions (see for example the papers at the 1994 International System Dynamics Conference and the special issue of the System Dynamics Review on group model building; Howick, Ackermann, & Andersen, 2006; Lane, 1994). A number of hybrid approaches, integrating elements of system dynamics and particular PSMs, have been described in the literature.
Lane and Oliva (1998), for example, describe the theoretical basis for integrating system dynamics and soft systems methodology. Strategic Options Development and Analysis, or cognitive mapping (e.g. Eden & Ackermann, 2001), offers tools and techniques that are also used in system dynamics studies. The similarities between system dynamics and operational research approaches go so far that (in one of its meanings) the term systems thinking is used to cover both types of approaches (Forrester, 1994). In addition to combining different methods, approaches are sometimes also tailored to specific content areas. An example is Van den Belt's (2004) mediated modeling, which combines insights from group model building and consensus building on environmental issues. Another example is the application of expert system dynamics to the medical and biological sciences (e.g. Sosnovtseva & Mosekilde, 2006). Here methodological insights are applied to a particular content area, and there is little exchange of methodological principles.

Participative modeling methodologies outside of operational research or the systems field have less often been compared to group model building. This is an important gap in the literature, since there is the potential for a fruitful exchange of ideas such as that between the operational research and system dynamics fields, but this potential has not been realized so far. The purpose of this paper is to compare group model building and collaborative systems modeling (CSM). Collaborative systems modeling refers to client involvement in information systems development. Comparing group model building to collaborative systems modeling seems particularly useful, since the latter field has decades of experience in developing, most typically, formal models of business processes and the information structures related to them (and a large variety of other, related models). A range of methods and tools to involve clients in modeling is available; the usefulness of alternative approaches is reported in several studies. Among such alternative directions are several (non-system dynamics) causal modeling approaches for collaborative systems modeling, for example Soft Systems Methodology (Checkland & Holwell, 1998; Checkland & Poulter, 2006) and cognitive mapping (Ackermann & Eden, 2005). A more general discussion of the use of causal mapping for information systems development is given by Hodgkinson and Clarkson (2005). Banker and Kaufmann (2004) discuss the application of various simulation approaches, including discrete event simulation, to information systems development. An interesting parallel exists between the tailoring of GMB to a specific content field and, in information systems development, the tailoring of development methods to specific uses, called situational method engineering (Ralyté et al., 2007). Although similar topics are addressed in the group model building literature, there are clear differences between the two fields. Whereas group model building aims to develop policies that improve the problematic situation, the intended result of collaborative systems modeling is an integrated information system design that captures the essential requirements and structures of the information system at hand. We expect that an exchange of ideas between collaborative modeling in system dynamics and in information systems development may enrich both disciplines, similar to the exchange of ideas that is taking place between the GMB and PSM fields.
Since other causal modeling approaches are being used for collaborative systems modeling, an important question is what specific benefits group model building offers in addition to these approaches. Two chief questions thus arise:

- To what extent are the goals and means of GMB and CSM interchangeable or complementary?
- Do differences in goals lead to fundamental differences in the facilitation of client involvement?

In the following we address differences and commonalities between the goals, modeling languages, procedures and methods, and tools and techniques of both approaches. The aim of our paper is to show whether, and if so where, these approaches can learn from each other.

Goals

In this section we first describe the goals of both group model building and collaborative systems modeling. Next we address commonalities and differences.

Goals of group model building

Group model building can usefully be applied in dynamically complex situations. In these situations, one or more indicators that are important to an organization develop in an undesired direction, while the reasons for this development are not directly clear. Typically these problems touch upon the expertise and responsibility of multiple actors and parties in an organization, and each party or actor has only a partial view of the problematic situation. By bringing the parties involved in the problem together and facilitating a joint modeling effort with these parties, group model building is expected to create a more shared view of the problem and of the actions available to improve the situation (Andersen et al., 2007; Richardson & Andersen, 1995; Vennix, 1996; Vennix, Akkermans, & Rouwette, 1996). The system dynamics model created in this joint effort aims to explain the problematic behavior by capturing the essential structure of the problem. Thus group model building aims for two sets of outcomes. The first are outcomes related to participants' direct involvement in modeling:

- improved quality of communication,
- mental model change or learning,
- consensus, and
- commitment with regard to proposed actions on the problem.

These goals are important not only because they guarantee high-quality input for modeling, but more so because each decision maker is thought to have a degree of discretion in implementing options: the commitment of those involved in the problem is instrumental in implementing the conclusions of the modeling effort. The second set of outcomes concerns the technical goals of modeling. The model should be technically correct in the sense that it passes a set of validation tests (e.g. Forrester & Senge, 1980). Simulations with the model should point to high-leverage points for steering problematic behavior in the right direction, and these high-leverage points should logically connect to proposed options in the real world.

Goals of collaborative systems modeling

In collaborative information systems modeling, business-IT alignment and human-IT alignment are central goals. The aim of the modeling effort is to develop an IT system that supports the essential business processes as understood by those involved. Models are crucial artifacts in the development of IT systems. In fact, prominent branches in contemporary systems development are Model Driven Architecture (OMG, 2003) and Model Driven Systems Development (Stahl et al., 2006).
In software development, perhaps the most famous example of a (combined) modeling approach and language is the Rational Unified Process (Kruchten, 2000) together with the Unified Modeling Language (Booch et al., 1998). However, a number of such approaches/languages exist, some of which are focused less on (technical) software development and more on information/business/enterprise modeling, for example the Business Process Modeling Notation (BPMN, a schema language; OMG, 2006) and the Semantics of Business Vocabulary and Rules (SBVR, a standard structure for expressing business concepts and rules; OMG, 2005). In addition, the creation of various flavors of ontologies (put simply, conceptual networks aimed at sharing concept definitions) is a strongly related practice (Guarino, 1998). Whereas work processes are described in business concepts, an IT system's design and implementation (also) need technical concepts. Specialized classes of models are used to address specific needs and audiences. In principle, mappings between, say, business-oriented models and technically oriented models should facilitate the alignment of the “business world” and the “technical world” (Hoppenbrouwers, 2008). A technically correct model should essentially be correct in a formal syntactic sense, and be complete. The social goals boil down to agreement by stakeholders (validity in the opinion of both individuals and the group):

- understanding (agreement on the interpretation of models by stakeholders),
- consent (agreement on the accurate and appropriate description of the domain by the model), and
- commitment (agreement on the actual implementation and deployment of systems based on the models).

However, in practice bridging the gap between the socially embedded processes of conceptual modeling and the rational processes of system engineering has proven extremely hard, and to some extent it is still unsuccessful. Capturing business processes in a model requires both formal and informal language and communication. The model helps in eliciting the explicit and implicit knowledge used in operating, and communicating about, the business processes. Increasingly, modelers focus on capturing business rules: “organizational rules under jurisdiction of the business” (OMG, 2005; Ross, 2003). Business rules guide or automate decision makers' behavior in a particular domain. An example of a business rule in banking is the following: withhold authorization of an increase in long-term credits if a company's equity is below 20%. Similar business rules are specified to capture, for example, legal expertise, medical protocols, safety regulations, and tax regulations. Even in domains characterized by structured knowledge, such as law, business rules are difficult to capture in a formal format (while from an implementation point of view, formalization is an absolute requirement). Translating the complexity of business processes and human communication into concepts and relations boils down to interpreting ideas from one world so that they fit the more structured language/text of another world. This brings differences in the values and interpretations of stakeholders to the fore, introducing elements of negotiation into systems development (Rittgen, 2007). The challenge in reconciling differences within and between both worlds consists of creating a formal model that is acceptable and useful as well as based on business and human concepts. Thus, IT system development increasingly aims to reconcile technical and social goals.
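To illustrate what a "formal format" for such a rule might look like, the banking rule quoted above can be written down as executable logic. The sketch below is purely illustrative: the type and function names are hypothetical, and it assumes (as one plausible reading) that "equity below 20%" means an equity-to-total-assets ratio below 0.20.

```cpp
#include <iostream>

// Hypothetical formalization of the rule: "withhold authorization of an
// increase in long-term credits if a company's equity is below 20%".
struct Company {
    double equityRatio;  // equity / total assets; e.g. 0.18 means 18%
};

// True when an increase in long-term credits may be authorized.
bool mayIncreaseLongTermCredit(const Company& c) {
    const double kMinEquityRatio = 0.20;  // the 20% threshold from the rule
    return c.equityRatio >= kMinEquityRatio;
}

int main() {
    Company acme{0.18};
    std::cout << (mayIncreaseLongTermCredit(acme) ? "authorize" : "withhold")
              << '\n';  // prints "withhold" for an 18% equity ratio
}
```

Even this one-liner shows why formalization is demanding: the informal rule leaves the base of the percentage and the treatment of borderline cases implicit, and each such decision has to be made explicit in the formal version.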
Development of an IT system is often the responsibility of computer engineers. Accordingly, techniques and methods for building these models typically have a technological background and are adapted for use in business environments. Clarifying work processes and the way decisions are made (e.g. through business rules) takes the form of communication about formal models. Eliciting and structuring concepts and validating formal models are thus central activities in IT systems development. Clearly, since information systems are often developed by teams of people, communication has always played a role in IT systems development. However, communication in teams of software developers is facilitated by their shared (technical) background and their regular use of formal models to capture concepts and relations. Communication between IT system developers and non-technical stakeholders (e.g. users, commissioners, managers) typically involves clarifying informal communication and decision premises. As a result, techniques and methods for involving users in IT systems development typically do not involve formal models. Examples are Joint Application Development (Wood and Silver, 1995) and scenario-based approaches (Carroll, 1995). Recently, advanced adopters of technology in business have become aware of concrete advantages of applying formal techniques in business governance and decision making. Formal representations of business rules, Business Process Management (BPM), and business intelligence are examples of this approach. However, the burden of formalization is usually too great for practical application, and requires the involvement of highly specialized, highly expensive experts. In conclusion, collaborative systems modeling sets both technical and social goals, though the latter are underemphasized. The technical goals boil down to rationally capturing the knowledge of people in the business, so that the business is accurately represented in a formal model. Based on these formal models, computational tools and techniques (e.g. information systems, business analysis and simulation) can be developed to support the business.

**Comparison of goals**

Group model building and collaborative systems modeling both have technical as well as social goals. Both approaches attempt to construct informal as well as formal models that are valid representations of the problem/domain under consideration. In group model building the subject is typically a dynamically complex problem. The intended use of the model is then primarily to identify high-leverage points to alleviate the problem. The social goals primarily concern stakeholders' consensus on and commitment to the conclusions of the modeling project. The primary difference between the two approaches is that SD tries to find an explanation of why the current system behaves the way it does (Beck, 2008). Most of the effort is spent on building a formal model which best represents the current system and therefore reproduces the (unwelcome) behavior. Once a simulation model is available, it is considered relatively easy to find the policies which will improve the situation. This is usually where the SD intervention ends. IT systems development (collaborative or not) pays less attention to this first analysis step. Sometimes the process starts with describing the current situation (current business processes, data flows, functions etc.),
after which the business analyst stares for hours at these AS-IS descriptions and determines what these business processes, data flows and functions should look like in the future (TO-BE). In this phase the business analysts might involve the stakeholders, end-users or subject matter experts and collaboratively propose a TO-BE solution. However, these TO-BE processes are never dynamically checked to prove that they will indeed perform any better than the current AS-IS processes. It is a static comparison, with argumentation based on experience and gut feeling, and also on politics and selling. The purpose of the IT modeling in this first phase is then to make sure that all the business people and IT people have the same understanding of the TO-BE solution, so that the sponsor can sign it off and the system developers have a clearly stated mandate describing what is in and out of the scope of the system-to-be. It is not really about proving that TO-BE will be better than AS-IS (Beck, 2008). A secondary difference concerns the timeframe. Collaborative systems modeling is not so much focused on alleviating a particular problem which is bounded in time. Instead, the approach aims to capture the essential elements and structures that form the basis for decisions on the design of an information system. These design decisions concern a longer time frame. Although in principle nothing prevents the use of system dynamics models over a longer time frame, the approach is more geared to capturing the properties of a specific problem, or class of problems, with regard to a predefined time horizon.

**Modeling languages**

*Modeling language in group model building*

As described in the previous section, group model building and system dynamics aim to capture the structure of complex dynamic problems. In system dynamics, a set of interacting feedback loops forms the most important part of the model. As an example, imagine a government agency that experiences increasing delays in its service to clients. Representatives of all departments that have a role in the work processes involved will be invited to participate in group model building sessions. Typically the model will show how the work process in each separate department may be rational with regard to local goals, yet create unexpected and undesired consequences for other departments. The structural explanation for the increase in service delays would need to show how each department reacts to the others and forms part of a reinforcing feedback loop. Problematic behavior is said to arise in particular from the feedback loops contained in the problem structure. An understanding of self-reinforcing and balancing loops is necessary to explain behavior over time. For creating this structural understanding, qualitative or quantitative models may be used. The hallmark of system dynamics is quantitative models consisting of a set of differential equations. These models are visually depicted in the form of so-called stock&flows models. Qualitative models are depicted in the form of (non-formalized) stock&flows models and causal loop diagrams. Recently a type of model has been proposed that offers a middle ground between conceptual and fully formalized models. These so-called Marvel models add three types of information to causal loop diagrams: the current values of variables and the strength and speed of relations (Van Zijderveld, 2007). Marvel models then allow the modeler to see the effect of a change in a parameter value on behavior patterns.
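To make concrete what "a set of differential equations" amounts to in stock&flows terms, a single stock with its flows can be written as an integral equation. The sketch below is a minimal illustration for the service-delay example above; the variable names are ours, not taken from any particular published model:

\[
\text{Backlog}(t) = \text{Backlog}(t_0) + \int_{t_0}^{t} \big( \text{case inflow}(s) - \text{completion rate}(s) \big)\, ds,
\qquad
\text{completion rate}(t) = \frac{\text{Backlog}(t)}{\text{average processing time}}.
\]

The service delay experienced by clients then grows whenever the inflow of cases persistently exceeds the completion rate, and a full model would close the feedback loops by letting the backlog influence, for instance, staffing or work pressure in other departments.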
System dynamics models of any kind typically consist of many variables and relations. In the system dynamics community several authors have addressed the benefits of qualitative versus quantitative models and causal loop diagrams versus stock&flows diagrams (Coyle, 2000, 2001; Homer & Oliva, 2001; Warren, 2004). Rouwette, Vennix and Van Mullekom (2002: 14) find that most system dynamics models contain between 20 and 200 variables (with a minimum of 20-30 and a maximum of 200-1000 equations). In larger models part of the structure is often repeated, for example when three types of products in inventory are distinguished. However, even in large models feedback complexity is preferred over detail complexity. After dynamic behavior is explained in terms of interacting feedback loops, the model is used to identify policy interventions in the problem. The ultimate goal of group model building is to improve future behavior.

*Modeling language in collaborative systems modeling*

Whereas in system dynamics formal models are largely similar (based on differential equations), in collaborative systems modeling a vast number of academic and industrial approaches co-exist (both formal and informal, the formal ones based mostly on discrete mathematics), either peacefully or not. This has in fact led to what is sometimes referred to as the “YAMA syndrome” (Yet Another Modeling Approach). The field is divided in many specialized sub-disciplines that focus on different aspects of systems. An integrated view is thus difficult and does not seem to have high priority in the field. The ultimate goal of IT systems modeling is to construct a system that proves to be effective. Proof in this sense typically means either formal mathematical proof (for example, showing that some formal specification is realized by means of a formalized computational machine) or trial-and-error testing. For complex systems, in particular those involving or even including businesses and/or human beings, provable quality is very difficult to achieve. For instance, a language such as UML (Booch et al., 1998) is relatively generic and widely accepted, but not formal (i.e. it cannot, in its standard version, be mapped 1:1 to an appropriate, standard formalism) and thus cannot be a common basis for automated model checking or software generation. Apparently IT systems modeling may focus on highly domain-specific mathematical quality, or on generic, informal integrated complexity, but not on both.

**Comparison of modeling languages**

With regard to modeling languages, there is a striking difference between group model building and collaborative systems modeling. While in the first field a consensus seems to exist on the preferred modeling language (causal loops, stock-flows; formalized as differential equations), in the second many different approaches are used in parallel throughout industry (though mostly based on discrete mathematics). In fact, considerable utilitarian overlap exists between many of the methods and languages used.

**Procedures and methods**

*Procedures and methods in group model building*

System dynamics modeling consists of a series of phases that each may be supported in different ways. We first describe the phases of system dynamics modeling in a general sense and in the next section focus on procedures to facilitate client participation in modeling. A general outline of the phases in system dynamics modeling is the following (Richardson & Pugh, 1981): 1. identification of the problem and model purpose; 2. system conceptualization; 3.
formalization and parameter estimation; 4. analysis of model behavior: sensitivity analysis and testing; 5. estimation of model validity or evaluation; 6. policy analysis; 7. model use or implementation. In the first phase a preliminary problem definition is chosen, in which the problem boundaries, time horizon and the reference mode of behavior are identified. In the following phase, other concepts central to the problem are identified and typically captured in visual form. In this way the model structure grows as new variables and relationships are added. In the formalization phase each relationship is translated into a mathematical equation. The resulting set of equations allows the model to be simulated over time. Model behavior is then analyzed to understand the influence of structure on behavioral patterns. Testing includes changing initial parameter values or changing relationships between variables, and observing the effects on model behavior (e.g. Ford, 1999). The phase of testing the model for its validity is crucial to the modeling process and widely discussed in the literature (see e.g. Forrester & Senge, 1980). Model validity concerns the adequacy of the model for representing the problem under study. Forrester and Senge (1980) refer to validation as the process of building confidence in a model. For this they identify a large number of structural and behavioral tests. Confidence in a model increases as more tests are successfully passed. In this phase a balance needs to be struck between adding more detail to the model structure, thereby increasing its complexity, and the ability to understand the model. In the policy analysis phase, parameters or larger sections of model structure are changed in order to see their impact on system performance. The goal is to identify changes that steer outcome variables in the preferred direction. In this phase a scenario analysis can be performed by running the model under different conditions for exogenous variables, which clarifies the robustness of policy interventions.

*Procedures and methods in collaborative systems modeling*

In IT systems modeling a general outline of phases is the following (many slightly differing versions of this list exist; we give our own version; note that nowadays the phases rarely follow each other linearly: the process is "iterative"): 1. problem definition: why should we develop this information system? 2. requirements analysis: what do we want from the information system? 3. functional design: what exactly will the system do? 4. technical design: what will the structure, the architecture of the system look like; what existing components are to be built in (if any)? 5. realization: actual construction (programming, generation) takes place; 6. testing: both at a technical level and at a usage level; 7. deployment: the actual introduction of the working system in the organization. The first two phases typically involve the most intensive interaction with users and stakeholders, though various other phases also require such interaction. Participative specification of requirements generally puts high demands on time and money. However, in situations where requirements analysis is not completed satisfactorily, IT projects typically fail to deliver the benefits users and developers hoped for (Standish Group, 1999). A common critique of participative requirements analysis is that it takes a long time and success is not guaranteed.
Also, evolution of information systems (rapid development and change driven by changes in the business) is an increasing problem. So-called agile development of software (for example, using the SCRUM approach; Schwaber and Beedle, 2002) is thought to be helpful in addressing this problem, but does not readily extend to the requirements phase.

**Comparison of procedures and methods**

In conclusion, phases of model construction show similarities. In both approaches the client is most intensely involved in the first phases. A phase where elements of modeling might help IT system development is testing (Beck, 2008). System developers frequently use prototyping, which can be considered a form of simulation model. Hence, there is not so much a need for formal simulation models during the implementation phase: you just show the end users what you have already built, and they will either like it or ask you to change it. "Testing" is a sort of running simulations as well. Software code is developed in a development environment. Once a piece of code is available, the developer conducts a functional unit test where he/she tests the functionality against the design specification (does it work as designed?). Later on system developers conduct functional integration tests where all software pieces are tested together. Then the software code is moved to a training environment which should be much closer to reality (the production environment). Some potential end-users then test (or simulate) the application in a user acceptance test. By using simulation, criteria such as user-friendliness and touch-and-feel can be tested as well.

**Tools and techniques**

*Tools and techniques in group model building*

In many of the modeling phases described above, information contained in stakeholders’ mental models is crucial. In group model building a variety of procedures is available that facilitate client involvement and help to elicit and test mental models (see for example Vennix, 1996, 1999). As a foundation for choosing between different procedures, Andersen and Richardson (1997) develop a set of guiding principles and so-called scripts for group model building sessions. Guiding principles capture basic ideas in the interaction with clients, such as "break task/group structure several times each day", "clarify group products", "maintain visual consistency" and "avoid talking heads". Scripts are more concrete instances of these principles and refer to small elements of the interaction process (Andersen & Richardson, 1997; Luna-Reyes et al., 2006). The following table shows the scripts described in Andersen and Richardson’s (1997) original paper.
<table>
<thead>
<tr>
<th>Phase in modelling</th>
<th>Script</th>
</tr>
</thead>
<tbody>
<tr>
<td>Defining a problem</td>
<td>Presenting reference modes; Eliciting reference modes; Audience, purpose, and policy options</td>
</tr>
<tr>
<td>Conceptualizing model structure</td>
<td>Sectors, a top down approach; Maintain sector overview while working within a sector; Stocks and flows, by sector; Name that variable or sector</td>
</tr>
<tr>
<td>Eliciting feedback structure</td>
<td>Direct feedback loop elicitation; Capacity utilization script; System archetype templates; "Black box" means-ends script</td>
</tr>
<tr>
<td>Equation writing and parameterization</td>
<td>Data estimation script; Model refinement script; "Parking lot" for unclear terms</td>
</tr>
<tr>
<td>Policy development</td>
<td>Eliciting mental model-based policy stories; Create a matrix that links policy levers to key system flows; "Complete the graph" policy script; Modeller/reflector feedback about policy implications; Formal policy evaluation using multi-attribute utility models; Scripts for "ending with a bang"</td>
</tr>
</tbody>
</table>

Table 1. Group model building scripts (cf. Andersen and Richardson, 1997)

The method developed by Hines (Otto & Struben, 2004) integrates the phases of modeling with available scripts and techniques for client involvement.

*Tools and techniques in collaborative systems modeling*

In line with the great number of existing languages and methods in IT systems modeling, the number of tools and techniques in the field is also vast; too vast to even attempt listing them. However, truly collaborative modeling methods are seriously underrepresented (which does not mean there is no collaboration going on). Collaboration support is very often reduced to sharing documents and organizing superficial group reviews in hindsight, leaving detailed design and implementation decisions to technical experts.

**Comparison of tools and techniques**

In group model building a development can be seen from an overall methodology to a set of more detailed methods and techniques from which a facilitator can choose when planning an intervention. Collaborative modeling methods in information systems development are less frequently described, and typically models are presented to and checked by stakeholders after they are developed.

**Conclusion and discussion**

In this paper we sought to compare collaborative systems modeling and group model building with regard to two questions: 1. To what extent are the goals and means of both fields interchangeable or complementary? 2. Do these differences in goals lead to fundamental differences in facilitation of client involvement? With regard to the first question, there seem to be more similarities than differences. Group model building and collaborative systems modeling both have technical as well as social goals. An important difference is that group model building and SD enable testing of the desired situation, whereas information analysts put less emphasis on the step from current to desired situation. Simulation of the desired situation might make it easier to test whether the TO-BE situation indeed performs as expected (Beck, 2008). Group model building is typically used to provide answers to a particular problem, while models used in information systems design are expected to describe situations lasting over a longer time frame.
Although goals of both approaches seem similar, modeling language and tools and techniques are clearly different:
- in group model building there is one preferred modeling language, while in collaborative systems modeling many different approaches are used;
- in group model building detailed scripts for involving clients in particular phases of the intervention have been developed, while in collaborative systems modeling client participation often comes down to critiquing previously developed models.

With regard to procedures and methods both approaches show similarities. The phases of model construction are alike and in both fields the client is most intensely involved in the first phases. So although goals are largely similar, the field of collaborative systems modeling seems to be more fragmented and less accustomed to joint development of models. SD modeling might add benefits in the phase of testing (Beck, 2008). It seems that modeling scripts developed in system dynamics could also be applied in structuring information systems development. In the future we intend to study this contribution by applying scripts in information system development cases. Group model building, in turn, can benefit from the micro level studies of communication processes in negotiating about models (Rittgen, 2007). An issue that deserves further attention is the use of formal simulation for testing models for information system design. An issue we addressed in the introduction of this paper concerns the benefits group model building has to offer in relation to the application of causal modeling approaches to information systems development. An area where group model building seems to make a unique contribution to the information systems field is in its application to Enterprise Resource Planning (ERP) implementation. A master's thesis study by Venderbosch (2007) focuses on an ERP system at ONEgas in the Netherlands, in particular on the optimization of the corrective maintenance process. Using five group model building sessions and data from the ERP system, the researchers show how maintenance may be improved. In this case the modeling sessions provided the platform for clients to develop a clear understanding of the structure behind their problem, which formed a basis for interpreting data from the ERP system. An extensive evaluation of the process shows that the clients’ insight into their work process improved and commitment to implementing recommendations is high. Implementation of ERP, often combining a generous availability of quantitative data with a lack of understanding of the core structure behind the data, presents a clear opportunity for the use of group model building. For this range of problems, group model building’s base in system dynamics and its ability to integrate qualitative and quantitative data offer an advantage over conceptual modeling approaches such as SSM and cognitive mapping (see also Killingsworth, Chavez, & Martin, 2008).
{"Source-Url": "http://repository.ubn.ru.nl/bitstream/handle/2066/68898/68898.pdf?sequence=1", "len_cl100k_base": 6997, "olmocr-version": "0.1.50", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 34928, "total-output-tokens": 10092, "length": "2e12", "weborganizer": {"__label__adult": 0.0005006790161132812, "__label__art_design": 0.0011386871337890625, "__label__crime_law": 0.0005970001220703125, "__label__education_jobs": 0.0213165283203125, "__label__entertainment": 0.0001558065414428711, "__label__fashion_beauty": 0.0003113746643066406, "__label__finance_business": 0.006256103515625, "__label__food_dining": 0.0006861686706542969, "__label__games": 0.0010271072387695312, "__label__hardware": 0.0008449554443359375, "__label__health": 0.0014581680297851562, "__label__history": 0.0007557868957519531, "__label__home_hobbies": 0.0003108978271484375, "__label__industrial": 0.001392364501953125, "__label__literature": 0.0011997222900390625, "__label__politics": 0.0006222724914550781, "__label__religion": 0.0006709098815917969, "__label__science_tech": 0.241943359375, "__label__social_life": 0.0004470348358154297, "__label__software": 0.021728515625, "__label__software_dev": 0.69482421875, "__label__sports_fitness": 0.0004315376281738281, "__label__transportation": 0.0010833740234375, "__label__travel": 0.00035190582275390625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43958, 0.03512]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43958, 0.2944]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43958, 0.91058]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 1348, false], [1348, 4176, null], [4176, 7074, null], [7074, 9299, null], [9299, 12030, null], [12030, 15015, null], [15015, 18048, null], [18048, 20874, null], [20874, 23794, null], [23794, 26404, null], [26404, 29295, null], [29295, 31684, null], [31684, 34244, null], [34244, 36993, null], [36993, 39187, null], [39187, 41774, null], [41774, 43958, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 1348, true], [1348, 4176, null], [4176, 7074, null], [7074, 9299, null], [9299, 12030, null], [12030, 15015, null], [15015, 18048, null], [18048, 20874, null], [20874, 23794, null], [23794, 26404, null], [26404, 29295, null], [29295, 31684, null], [31684, 34244, null], [34244, 36993, null], [36993, 39187, null], [39187, 41774, null], [41774, 43958, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43958, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43958, null]], "pdf_page_numbers": [[0, 0, 1], [0, 1348, 2], [1348, 4176, 3], [4176, 7074, 4], [7074, 9299, 5], [9299, 12030, 6], [12030, 
15015, 7], [15015, 18048, 8], [18048, 20874, 9], [20874, 23794, 10], [23794, 26404, 11], [26404, 29295, 12], [29295, 31684, 13], [31684, 34244, 14], [34244, 36993, 15], [36993, 39187, 16], [39187, 41774, 17], [41774, 43958, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43958, 0.01036]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
94d899849bc383bfe7d3766adc57c0de593dc4eb
Abstract

This paper addresses issues in code generation for time-critical loops for VLIW ASIPs with heterogeneous distributed register structures. We discuss a code generation phasing whereby one first considers binding options that minimize the significant delays that may be incurred on such processors. Given such a binding we consider retiming, subject to code size constraints, so as to enhance performance. Finally a compatible schedule, minimizing latency, is sought. Our main focus in this paper is on the role retiming plays in this complex code generation problem. We propose heuristic algorithms for exploring code size/performance tradeoffs through retiming. Experimental results are presented indicating that the heuristics perform well on a sample of dataflows.

1 Introduction

The trend in today’s embedded processor market is increasingly towards architecture specialization, i.e., towards developing Application Specific Instruction-Set Processors (ASIPs) with a datapath and instruction set tailored to a class of applications [12, 13]. Customization of an ASIP datapath for a class of applications, say in the areas of signal processing and/or multimedia, may be performed in a variety of ways. In particular, many ASIPs include small, distributed register files, placed at the inputs and outputs of ALUs and other functional units (see Fig.1), as opposed to having a large, shared register file [12, 13]. This is a key motivation for the work discussed in this paper. It is well known that ASIPs’ specialized architectures pose difficult challenges to today’s compiler technology [12, 13]. At the root of the problem lies the fact that traditional code generation heuristics perform poorly in the context of such specialized architectures. For example, performing register allocation and assignment in the context of the small, distributed register files alluded to above adds a new dimension of complexity to the already difficult code generation problem. In [9, 8] we proposed a non-traditional approach to the problem of devising efficient code generation heuristics for VLIW ASIPs. These heuristics are intended to be used for time-critical loops with single basic block bodies. We started by observing that a very large instruction word is a composition of elementary RTL instructions (microinstructions) that can be concurrently executed by the processor. Exploiting this fact, our approach reduces the first phases of the code generation problem to that of finding a minimum latency schedule and binding of the dataflow’s operations (activities) and data transfers (transactions) directly to the ASIP datapath’s resources. In this paper we will consider the role of dataflow retiming in the code generation process. The problem of finding a minimum latency schedule (including retiming) and corresponding binding of the code segments of interest directly onto the ASIP datapath is exceedingly complex [3, 1]. We propose to decompose the process into three steps: 1) determine a good binding of activities (operations) and data objects (operands/results) to functional units and register files, respectively; 2) given a binding, determine a retiming likely to minimize latency, subject to code size constraints; 3) finally, determine a compatible schedule for activities and transactions (i.e., data transfers) that minimizes latency. The paper is organized as follows. In §2 we describe the proposed phasing of the code generation process, and discuss the role of retiming in this context.
In §3 we propose heuristics to determine appropriate retiming options, and discuss examples. We conclude with a discussion of related work (§4), examples (§5), and future work (§6).

2 Binding, Retiming and Scheduling Problems

2.1 Binding

We begin by briefly describing our approach to binding a dataflow to a datapath. A dataflow will be modeled by a DAG, $G(A, T)$, where the nodes $A$ represent activities, i.e., operations to be carried out on functional resources, and edges $T$ represent transactions, i.e., data transfers associated with bringing data objects to the storage resources supporting the execution of a given activity, see Fig.1. As alluded to above, we will tackle the case of a dataflow corresponding to a single basic block within a loop body. Thus the dataflow shown in Fig.1 includes data objects with iteration indices, e.g., $y[i], y[i-1]$, indicating when a data object is used (shared) across several iterations. In characterizing the datapath we will focus on its functional and storage resources, denoted $F$, $S$ respectively. Storage resources are partitioned into register files (RF) and memory banks (MB), i.e., $S = RF \cup MB$. We let $I^1_f$ ($I^2_f$) denote the storage resources where the first (second) operand could reside in order to execute an operation on $f \in F$. Similarly $O_f$ denotes the storage resource(s) where the result of an operation carried out on $f$ could be stored. Thus $I^1_f$, $I^2_f$, and $O_f$ are subsets of RF corresponding to the register files associated with functional unit $f$. For example, the register files associated with A1 in Fig.1 are $I^1_{A1} = \{ R1 \}$, $I^2_{A1} = \{ R2 \}$, and $O_{A1} = \{ R2 \}$. Our goal is to determine a binding of activities and their input/result data objects to functional/storage resources. A binding of activities is a function mapping each activity to an appropriate functional resource. To unambiguously bind an activity we must also specify a binding of data objects, i.e., the activity’s operands and results, to register files. Thus, in our example, activity $a_1$, an addition, could be bound to $A_2$, and its input data objects must each be bound to either $I^1_{A2}$ or $I^2_{A2}$, i.e., $R1$ or $R2$, but not to the same one. In [9] we proposed a novel approach to generating binding alternatives having reduced transaction costs. Recall that a transaction is associated with bringing a data object from one storage space to another during the execution of the dataflow. We say a transaction has “zero cost” when a binding for two activities sharing a data object is such that it remains in the same storage space during the execution of both activities. For example $a_1$ and $a_2$ share the operand $x[i]$ and thus if $a_1$, $a_2$ are bound to $A_2, M_1$ respectively, one should place $x[i]$ on $R3$, see Fig.1. With such a binding $x[i]$ would need to be loaded from memory only once. Each shared data object thus corresponds to an opportunity for eliminating a transaction, if an appropriate binding is selected. Given a specific datapath, we say that each data object shared by a pair of activities places a binding restriction on the set of bindings to be considered. In practice a set of such restrictions may include conflicting requirements. Thus the goal is to determine maximal sets of consistent restrictions, i.e., satisfying as many restrictions as possible.

*This work is supported by a National Science Foundation NSF Career Award MIP-9624321 and by Grant ATP-003658-088 of the Texas Higher Education Coordinating Board.
Such sets will in turn correspond to binding alternatives that maximize the number of potential zero cost transactions. The problem can be translated to an integer programming problem where the cost function reflects the number of restrictions that are satisfied, and thus the potential zero cost transactions that can be achieved; see [9] for details. From here on, we shall assume that such a binding has been obtained for the given dataflow/datapath.

### 2.2 Retiming problem

For retiming purposes, a modified dataflow graph is used to represent loop body basic blocks. We will refer to the example shown on the top left in Fig. 2. The term iteration of a loop body is used to refer to the set of operations that are executed once for a given iteration index. The loop body basic block is modeled using a weighted directed graph $G(A, E, w)$ where the nodes $A$ represent activities, e.g., operations to be carried out on functional resources, and directed edges $E \subseteq A \times A$ represent data dependencies where the result of an activity serves as an operand for another. Non-negative integer weights $w = \{ w_{ij} : (i, j) \in E \}$ are associated with the edges, where $w_{ij}$ represents the relative distance, in number of iterations, between the time the data object is created by activity $i$ and the time it is consumed by activity $j$ [11]. Retiming refers to a transformation of the original dataflow aimed at pipelining several loop body iterations within the same execution cycle. Such transformations are carried out to reduce the execution latency. We define this formally as a transformation of the dataflow graph’s weights $w$, given a retiming vector $r = \{ r_a : a \in A \} \in \mathbb{Z}_+^{|A|}$, to a new set of weights $\tilde{w} = \{ \tilde{w}_{ij} : (i, j) \in E \}$ where $\tilde{w}_{ij} = w_{ij} + r_i - r_j$, $\forall (i, j) \in E$. A retiming vector, $r$, is said to be admissible if the resulting weights are non-negative, i.e., $\tilde{w}_{ij} \geq 0$ [11]. Thus an admissible retiming of a dataflow graph results in new edge weights, $\tilde{w}_{ij}$, given by the sum of $r_i - r_j$, the relative iteration distance between retimed nodes $i$ and $j$, and $w_{ij}$, the iteration distance between the production and consumption of a data object shared by the two nodes when executed in the same iteration. The retiming example shown in Fig. 2 moves activity $a_1$ ahead of $a_2$ by one iteration. As a result, on each execution cycle, activities from two iterations, $a_1$ from $i+1$, and $a_2$ from $i$, are being pipelined. Thus in the original version of the dataflow graph the execution of $a_1$ had to precede $a_2$, while in the retimed version the two activities can be executed in parallel. In the sequel we will use the notion of clusters of activities, corresponding to a set of activities that have been retimed by the same amount. In particular let $C_n = \{ a \in A \mid r_a = n \}$ for $n = 0, 1, 2, \ldots, m-1$, where $C_n$ denotes the set of activities that have been pushed forward $n$ iterations. Thus, in our example, there are two clusters $C_0 = \{ a_2 \}$ and $C_1 = \{ a_1 \}$, shown in Fig.2. Unfortunately the improvements in performance achieved via retiming come at a significant cost. In order to allow execution of the retimed loop body, prolog and epilog code sections are required to fill and empty the pipeline.
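As a minimal illustration of the transformation just defined, the following Python sketch applies a retiming vector to a weighted dataflow graph, checks admissibility, and derives the clusters $C_n$. The two-activity example and its edge weights are illustrative stand-ins for Fig. 2, which is not reproduced here.

```python
# Sketch of retiming: new weights w~_ij = w_ij + r_i - r_j, an admissibility
# check, and the clusters C_n. Edge weights below are illustrative.

def retime(edges, r):
    """edges: {(i, j): w_ij}, r: {node: retiming count}.
    Returns the retimed weights, or None if the retiming is inadmissible."""
    new_w = {(i, j): w + r[i] - r[j] for (i, j), w in edges.items()}
    return None if any(w < 0 for w in new_w.values()) else new_w

def clusters(r):
    """C_n: the set of activities pushed forward n iterations."""
    cs = {}
    for node, n in r.items():
        cs.setdefault(n, set()).add(node)
    return cs

edges = {("a1", "a2"): 0,   # a1's result feeds a2 in the same iteration
         ("a2", "a1"): 2}   # a1 reuses a2's result from two iterations back
r = {"a1": 1, "a2": 0}      # move a1 one iteration ahead of a2

w_new = retime(edges, r)
print(w_new)        # {('a1', 'a2'): 1, ('a2', 'a1'): 1}: no zero-weight edge left
print(clusters(r))  # {1: {'a1'}, 0: {'a2'}}: two clusters, as in Fig. 2
```

With two clusters, the prolog and epilog double the code size, which is exactly the cost quantified next.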
One can show that a retiming of a dataflow which includes $m$ clusters will result in a code size which is $m$ times that of the original dataflow. The example shown in Fig.2 has two clusters, leading to a prolog and epilog that double the code size of the original dataflow. Code size is an important factor in ASIP based embedded systems since on-chip memory is limited and expensive.

### 2.3 Scheduling

We now briefly discuss the scheduling problem. Given a (possibly retimed) dataflow graph we can obtain a directed acyclic graph (DAG) $G(A, E')$ where $E' = \{ (i, j) \in E : \tilde{w}_{ij} = 0 \}$, i.e., $E'$ is the set of arcs in $E$ with zero weight. A zero weight arc between two activities in the loop body means that one uses the result of the other and hence must precede it. By retaining only the arcs with zero weight, we keep only the important precedence constraints from the point of view of scheduling. Such graphs are shown on the bottom of Fig. 2 for the original and retimed versions of the dataflow. Given the DAG $G(A, E')$ and the functional unit bindings of activities we have reduced the problem to a resource constrained scheduling problem. In [8] we proposed a solution approach to this problem, based on first establishing an ordering among activities, and then determining a transaction schedule using a number of register assignment policies.

3 Heuristics for resource constrained retiming of dataflows

Execution rate can be improved by jointly optimizing over all feasible bindings, retimings and schedules. Unfortunately this is an exceedingly complex problem. As discussed in §2.1 we propose to first determine a binding that maximizes the number of zero cost transactions. In principle this results in zero cost transactions and reduced latency. A key observation is that with this approach binding has, in effect, been essentially decoupled from retiming. Now, given a binding of activities to the datapath’s resources, one can investigate tradeoffs achieved through retiming and scheduling. Since bindings are selected to maximize the number of potentially zero cost transactions, we shall assume that these savings are in fact realized. During retiming, non-zero cost transactions will be explicitly accounted for and modeled as activities to take place on steering resources. We propose to optimize over feasible retimings (for a given binding) so as to minimize the execution latency of a schedule for these activities. Specifically we envisage two problems of interest in the context of embedded systems:

Problem 1: Find a retiming that minimizes latency subject to a code size constraint of m clusters.

Problem 2: Find a retiming that minimizes code size (i.e., number of clusters) subject to a latency constraint \(L_{\text{max}}\).

In the following we propose a heuristic approach to solve these two problems.

3.1 Inputs

The inputs to the algorithm include: 1) a retiming graph \(G'(A, E, \bar{w})\) representing the (single basic block) loop body and a corresponding binding; and, 2) for Problem 1, a max number of allowed clusters \(m \geq 1\), and for Problem 2 a max latency \(L_{\text{max}}\). The retiming graph, \(G'(A, E, \bar{w})\), is specified as discussed in §2.2. The nodes represent activities (operations) as well as non-zero cost transactions (data transfers).
The latter may correspond to load/store operations executed for primary inputs/outputs, and move/load/store operations executed on shared data objects whose binding restrictions (to register files) had to be relaxed in the initial binding process. Activities and transactions are bound to functional units and steering resources (of a given width) respectively. For simplicity, we shall assume single cycle operations and transactions. However this is not an inherent limitation of the algorithm. The retiming graph for a 2nd order IIR filter, shown at the top in Fig.3, will be used throughout to illustrate the algorithm. Nodes \(t_1\) and \(t_2\) represent transactions – specifically, the load of a primary input and the store of a primary output. They are bound to bus B1, which has a width of 1. No other transaction nodes are included in the dataflow, indicating that a binding solution with zero-cost transactions was found for the target datapath. The remaining nodes correspond to operations executed on the datapath’s functional units, including two multipliers and two ALUs, labeled M1, M2 and A1, A2 respectively.

3.2 Pre-processing steps

Determine latency lower bound for given binding. For each datapath resource, determine the number of nodes that use it, and the amount of time it would take to execute them if, in the best case, they were executed serially one after the other. For example, \(a_4\) and \(a_5\) are bound to multiplier M1, and thus their execution will take at least two cycles. The maximum, over all resources, of these bounds gives a lower bound on the execution delay of the retimed dataflow. For our example we obtain a bound of two clock cycles. For each cycle in the dataflow graph, a lower bound can be computed as the sum of all execution delays in the cycle divided by the number of iteration-distance delays in the cycle, rounded up to the nearest integer. The retiming dataflow graph in Fig. 3 has two cycles, each giving a lower bound of 2. The maximum over all bounds previously computed is the lower bound of the graph. This bound is used as an initial target latency in solving Problem 1.

Determine path/urgency ranks of dataflow nodes. Next we perform an ASAP scheduling (ignoring resource constraints) of the precedence DAG \(G(A, E')\) (see §2.3) associated with the input retiming dataflow graph. We let the step at which each node \(a \in A\) appears in the resulting schedule be its path-rank, \(\text{path-rank}(a)\). Fig.3 shows the resulting ranks for the nodes of our example. (For clarity the edges in \(E'\) are shown in bold.) This ranking will be used to give preference to nodes lying on long paths in the original dataflow graph. The urgency ranks discussed below are crude estimates of the extent to which resource conflicts will change the path ranks obtained above. The estimate is obtained as follows. For each node \(a \in A\) determine the number of additional nodes, denoted \(\text{local}(a)\), on the same step of the ASAP schedule that are bound to the same resource. Note that \(\text{local}(a)\) is a local estimate of the delay, relative to its scheduled step, that will be required to accommodate resource conflicts associated with \(a\).

[Figure: Retiming dataflow graph \(G(A, E, \bar{w})\)]
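A compact sketch of the two lower bounds described above, under the single-cycle assumption made in the text; the binding and cycle data mirror the running example but are otherwise illustrative:

```python
# Lower bounds on latency, used as the initial target in Problem 1.
from math import ceil

def resource_bound(binding):
    """binding: {node: resource}. Serial execution of all nodes bound to
    the most loaded resource bounds latency from below."""
    counts = {}
    for resource in binding.values():
        counts[resource] = counts.get(resource, 0) + 1
    return max(counts.values())

def cycle_bound(cycles):
    """cycles: [(total execution delay, total iteration distance)].
    Each cycle bounds latency by ceil(delay / distance)."""
    return max(ceil(delay / dist) for delay, dist in cycles)

binding = {"a4": "M1", "a5": "M1", "a1": "A1", "t1": "B1", "t2": "B1"}
cycles = [(2, 1), (4, 2)]   # illustrative: two cycles, each bounding to 2
print(max(resource_bound(binding), cycle_bound(cycles)))  # 2 clock cycles
```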
Cycles in the dataflow graph play a special role. In particular, one can easily show that an admissible retiming cannot change the total iteration distance around a cycle, so the nodes of a cycle cannot be retimed independently of one another. Thus for each cycle, or set of cycles sharing common nodes, we form a supernode – a contraction of the subgraph including these cycle(s). Furthermore, we shall compute the total iteration distance of cycles within supernodes, and assign each node a in the supernode a cycle-delay given by the minimum iteration distance around the cycles a belongs to. Our example includes two cycles with delays 1 and 2. Thus node a1, which belongs to both cycles, will have a cycle-delay(a1) = min[1, 2] = 1.

3.3 Algorithm

We first present the outer loops used to solve Problems 1 and 2 and then describe their common greedy engine. Problem 1 is solved as follows. The target latency is initially set to the lower bound determined in §3.2. The greedy algorithm is then executed. If the number of clusters in the resulting solution exceeds the maximum m, the target latency is incremented by one, and the algorithm is executed again. The process repeats until a solution is found or an (optional) maximum latency is reached. Note that a solution will always be found if arbitrarily large latencies are allowed. For Problem 2 the target latency is set to be the specified maximum Lmax. The algorithm is executed, and a retiming solution with a “minimum” (heuristically speaking) number of clusters is found, or infeasibility is detected. The greedy engine used to solve both problems defines the retiming value for each node in the graph. It includes a main loop, where each loop iteration n = 0, 1, ... defines the nodes belonging to a retimed cluster of nodes Cn. Recall that Cn corresponds to a set of nodes in the graph that are retimed n times, see §2.2. The key idea is to greedily add pending nodes with highest urgency rank to the current cluster if they can be scheduled within the current target latency, while ensuring no resource conflicts with nodes previously scheduled. Two data structures are maintained. The first includes nodes which are pending, i.e., not yet placed in a cluster. The second keeps track of nodes in current and previously scheduled clusters. Nodes in the pending set are eligible for the current cluster if they have no direct successor nodes and can be scheduled within the current target latency. Eligible nodes which are not part of a supernode are added to the current cluster according to the following criteria. Eligible nodes are considered first for insertion in the cluster in order of decreasing urgency-rank. If there are ties, they are broken based on (highest) path-rank. If there are again ties, selection is done arbitrarily. After node insertion the set of eligible nodes is updated, and the process repeats until no further additions can be made. At this point, the cluster’s nodes are considered to be defined, and their schedule is fixed until the end of the process. The incremental scheduling of each cluster is performed using a modified list scheduling algorithm that accounts for the resource constraints posed by the previously scheduled clusters, i.e., that does not modify the scheduling of such clusters. The intuition motivating this heuristic is to use clusters to slice the nodes on the dataflow’s “longest path” (high urgency rank) resulting from data dependencies and resource conflicts, so as to reduce latency. The selection criteria discussed above are modified when nodes belonging to supernodes are eligible for inclusion in a cluster. Recall that nodes in cycles cannot be arbitrarily retimed.
Thus when a supernode is reached, our heuristic objective is to enter as many nodes of the supernode as possible, attempting to avoid infeasibility. Specifically, when a node in a supernode becomes eligible, it is given highest priority with respect to eligible nodes with the same urgency-rank. Once a node in a supernode has been included in the cluster the selection process proceeds as before, but considering only eligible nodes within the supernode. If an urgency-rank tie occurs, one first gives priority to nodes with lowest cycle-delay, then to nodes with highest path-rank, and finally one breaks ties arbitrarily. When no further nodes are eligible in the supernode the selection process reverts to the usual process. If two supernodes are reached simultaneously, both are attempted independently, and the attempt that succeeds in entering most nodes in the cluster is retained. After entering all schedulable nodes of a given supernode, a second supernode may be eligible for inclusion in the same cluster, using an identical procedure. If infeasibility occurs (i.e., an invalid retiming with respect to the nodes of a given cycle is reached), the solution is dropped and the target latency is increased. We found this relatively simple heuristic policy to work well for all filters and transforms we have experimented with. The solution determined for the IIR example, in the case of Problems 1 and 2 with m ≥ 2 and Lmax = 2, is the same, and is shown at the bottom of Fig. 3.

4 Related work

For related work, and contrasts of our approach to decomposing the code generation problem, see [9] and references therein. Herein we shall focus on related work on retiming. A number of approaches have been proposed to determine retimings and/or loop unfoldings that minimize latency (maximize execution-rate) but do not consider resource constraints, e.g., [2, 14]. An algorithm for retiming with a view to minimizing resource requirements subject to latency constraints can be found in [7]. By contrast, herein we considered minimizing latency under code size and resource constraints, and code size minimization under latency and resource constraints. A number of approaches, including those of [5] and [10], consider both resource and timing constraints. In the high level synthesis system Cathedral II [5], the dataflow graph is first retimed to meet an estimated schedule length without considering resource constraints. Then, a second graph is constructed (based on the original DFG and the obtained retiming function) which is used for scheduling the loop under resource constraints. An upper bound on the schedule length is obtained using list scheduling, and then iteratively decreased, one step at a time. In general, there are many retimed graphs with the same schedule length, and thus the first step of this approach may find an actual retiming function that is not particularly good with respect to the specific resource constraints to be considered in the second step. Our approach derives a valid retiming by simultaneously considering both resource and timing constraints. In [10], a software pipelining algorithm is proposed for optimizing compilers. A data/control flow graph is first analyzed to find connected components. Each connected component is scheduled individually and the original graph is reduced to an acyclic graph by contracting such components into single nodes.
Then the acyclic graph is scheduled using list scheduling – nodes (simple or contracted) are placed in the earliest possible time slot that satisfies all timing and resource constraints with respect to the partial schedule constructed so far. In case of failure, the initiation interval (and thus latency) is increased. Our approach has similarities to this one, in that it also identifies cycles in the graph, and treats their scheduling preferentially. However, our supernodes are treated as gray-boxes, in that their retiming and scheduling under resource constraints is still performed together with that of the nodes in the feed-forward (acyclic) part of the dataflow. This additional flexibility increases our ability to explore and construct (hopefully) optimal solutions. Finally, [6] and [4] explore the idea of improving (compacting) a legal schedule by incrementally rotating source nodes of the scheduled loop body (i.e., operations currently at the start of the schedule) to the end of the current schedule, and then percolating these operations up, to the earliest possible scheduling step. (Note that such a rotation scheme is basically an implicit retiming.) In [4], a single instruction is moved at a time, and an enhanced percolation algorithm is used to actually re-schedule the entire loop body after each move. In [6], a set of nodes can be moved at a time, and those operations are rescheduled. This last approach, even if conceptually different, bears some similarities to our approach. A fundamental difference between both heuristic strategies is that, in [6], the rotation size (i.e., the number of operations rotated at a time) is heuristically determined up front, starting from a largest admissible value (related to the current schedule latency) and eventually converging to a rotation size of 1. In our case, a specific target latency is assumed (and incremented on failure), and thus the algorithm basically tries to slice uniformly the various dataflow graph paths so as to achieve the target latency, hopefully with a minimum number of clusters.

5 Examples

In this section we present a number of examples illustrating the performance of the retiming heuristics proposed in the paper. We started by considering specific datapath bindings for the three characteristic loops shown in Table 1. Then, the algorithm was applied to solve Problem 2, i.e., to find a retiming solution minimizing code size for the resource constraints posed by the datapath binding, and assuming two different latency constraints. For all the examples, except the Avenhaus filter with \( L_{\text{max}} = 5 \), the optimal number of clusters was obtained. The sub-optimal solution was due to the scheduling of two multiplication nodes (with large slacks) on their earliest valid positions, during the creation of Cluster 0, later precluding the scheduling of the last two nodes in Cluster 1.

6 Conclusion

We have discussed a code generation phasing for time-critical loops of VLIW ASIPs, which is particularly suitable for processors with highly heterogeneous register structures. In this phasing, one first considers binding, so as to minimize the significant delays that may be incurred from data transfers, then retiming, and finally, detailed scheduling of operations and data transfers. The focus of this paper is on the role played by retiming in this framework. Retiming heuristics were proposed to achieve: 1) minimum latency under code size and resource constraints; and 2) minimum code size under latency and resource constraints.
Experimental results show that the heuristics perform well on characteristic loops of signal processing applications. We are currently enhancing the binding algorithm described in §2.1, so as to properly account for the impact on performance of serializing operations (that could otherwise be executed in parallel) by binding them to common functional units.

Table 1: Results of retiming algorithm on sample dataflows.

<table>
<thead>
<tr>
<th>Dataflow characteristics</th>
<th>Datapath resources</th>
<th>\( L_{\text{max}} \) cycles</th>
<th># clusters</th>
</tr>
</thead>
<tbody>
<tr>
<td>2nd order IIR filter: 10 nodes</td>
<td>2 Mult, 2 Adders, 1 Bus (w=1)</td>
<td>2 (lb)</td>
<td>4 (opt)</td>
</tr>
<tr>
<td>4th order Avenhaus filter: 20 nodes</td>
<td>3 Mult, 3 Adders, 1 Bus (w=1)</td>
<td>4 (lb)</td>
<td>3 (opt)</td>
</tr>
<tr>
<td>FFT Butterfly: 16 nodes (4 Mult, 6 Add, 6 transactions), no cycles</td>
<td>2 Mult, 2 Adders, 1 Bus (w=2)</td>
<td>4 (lb)</td>
<td>2 (opt)</td>
</tr>
<tr>
<td>FFT Butterfly (as above)</td>
<td>2 Mult, 2 Adders, 1 Bus (w=1)</td>
<td>4 (lb)</td>
<td>2 (opt)</td>
</tr>
</tbody>
</table>
{"Source-Url": "http://www.cecs.uci.edu/~papers/compendium94-03/papers/1999/codes99/pdffiles/1_3.pdf", "len_cl100k_base": 6570, "olmocr-version": "0.1.50", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 20050, "total-output-tokens": 7496, "length": "2e12", "weborganizer": {"__label__adult": 0.0006785392761230469, "__label__art_design": 0.0007748603820800781, "__label__crime_law": 0.0007181167602539062, "__label__education_jobs": 0.0006046295166015625, "__label__entertainment": 0.00014448165893554688, "__label__fashion_beauty": 0.00031638145446777344, "__label__finance_business": 0.0004954338073730469, "__label__food_dining": 0.0006766319274902344, "__label__games": 0.0012311935424804688, "__label__hardware": 0.015838623046875, "__label__health": 0.0010652542114257812, "__label__history": 0.0005245208740234375, "__label__home_hobbies": 0.00030112266540527344, "__label__industrial": 0.0021457672119140625, "__label__literature": 0.0002639293670654297, "__label__politics": 0.00063323974609375, "__label__religion": 0.001087188720703125, "__label__science_tech": 0.2919921875, "__label__social_life": 8.624792098999023e-05, "__label__software": 0.0061187744140625, "__label__software_dev": 0.6708984375, "__label__sports_fitness": 0.0007352828979492188, "__label__transportation": 0.0022716522216796875, "__label__travel": 0.00038814544677734375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30791, 0.01912]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30791, 0.55904]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30791, 0.91516]], "google_gemma-3-12b-it_contains_pii": [[0, 5193, false], [5193, 11750, null], [11750, 17185, null], [17185, 23969, null], [23969, 30791, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5193, true], [5193, 11750, null], [11750, 17185, null], [17185, 23969, null], [23969, 30791, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30791, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30791, null]], "pdf_page_numbers": [[0, 5193, 1], [5193, 11750, 2], [11750, 17185, 3], [17185, 23969, 4], [23969, 30791, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30791, 0.03226]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
b6857245ad63bd580ddcf20d4a1c306e504fc17d
A Runtime System for Logical-Space Programming

Eloi Pereira* Systems Engineering, UC Berkeley, CA, USA, eloi@berkeley.edu
Clemens Krainer, Dept. of Computer Sciences, Univ. of Salzburg, Austria, clemens.krainer@cs.uni-salzburg.at
Pedro Marques da Silva, Research Center, Air Force Academy, Portugal, posilva@academiafa.edu.pt
Christoph M. Kirsch, Dept. of Computer Sciences, Univ. of Salzburg, Austria, ck@cs.uni-salzburg.at
Raja Sengupta, Systems Engineering, UC Berkeley, CA, USA, sengupta@ce.berkeley.edu

ABSTRACT

In this paper we introduce logical-space programming, a spatial computing paradigm where programs have access to a logical space model, i.e., names and explicit relations over such names, while the runtime system is in charge of manipulating the physical space. Mobile devices such as autonomous vehicles are equipped with sensors and actuators that provide means for computation to react upon spatial information and produce effects over the environment. The spatial behavior of these systems is commonly specified at the physical level, e.g., GPS coordinates. This puts the responsibility for the correct specification of spatial behaviors in the hands of the programmer. We propose a new paradigm named logical-space programming, where the programmer specifies the spatial behavior at a logical level while the runtime system is in charge of managing the physical behaviors. We provide a brief explanation of the logical-space computing semantics and describe a logical-space runtime system using bigraphs as logical models and bigActors as computing entities. The physical entities are modeled as polygons in a geometrical space. We demonstrate the use of logical-space programming for specifying and controlling the spatial behaviors of vehicles and sensors performing an environmental monitoring mission. The field test consisted of an Unmanned Aerial Vehicle and GPS drifters used to survey an area supposedly affected by illegal bilge dumping.

Categories and Subject Descriptors

D.3.1 [Programming Languages]: Formal Definitions and Theory—Semantics; F.1.1 [Computation by Abstract Devices]: Models of Computation—Relations between models, Self-modifying machines

1. INTRODUCTION

Computation is becoming ubiquitously and spatially embedded in our environment. Mobile cyber-physical systems such as smartphones and robots are equipped with sensors and actuators that observe and manipulate their spatial environment. This kind of computation that exhibits a behavior in space is commonly known as spatial computing [6]. Spatial computing often involves defining the behavior of machines in a geometrical location model such as GPS coordinates or indoor local coordinates [7]. We call this physical-space programming.

Example 1. Consider the example of an Unmanned Aerial Vehicle (UAV) collecting imagery of an oil-spill due to suspected illegal bilge dumping activity by an oil tanker.
The exact location of the oil-spill is unknown a priori, although, due to Automatic Identification System (AIS) information collected from the tanker, it is known to be within a given rectangular area parametrized by its North-East and South-West GPS locations, (37.04, –8.59) and (36.94, –8.79). The UAV operator performs the following steps: select a UAV to perform the mission; specify a searching pattern comprised of the sequence of GPS locations to be visited inside the suspected area; and, as soon as the operator gets the information of the oil-spill location from some source, specify a new location to be visited. The mission is specified as a sequence of waypoints using a given format such as the Waypoint File Format (WFF) of the Mavlink Waypoint Protocol (MWP) [2]. The bottom row of images in Figure 1 depicts the physical-space execution of this example. Physical-space programming provides full control of the physical capabilities of the involved computing devices. This puts the responsibility for the correct specification of spatial behaviors in the hands of the programmer. For example, a mistake in the specification of the GPS coordinates of the waypoints can lead to an unexpected behavior. Moreover, physical-space models do not explicitly entail relational information about spaces. For example, it would be important for the UAV operator to know if the UAV is in the search area without the need to perform any further calculations. The literature presents several programming models that approach spatial computing from a physical level, such as Amorphous Computing [3], Spatial Programming [8], and the framework Gaia [18]. Another common approach to spatial computing is to model space symbolically, where locations are defined as symbols together with explicit relations over those symbols [7]. An everyday example of a location model is the set of street names, cities, and countries organized by their containment relation. We call it symbolic-space programming when computation is defined over symbolic-space models. One of the pioneering symbolic-space programming models is the Ambient Calculus by Luca Cardelli [9], where mobile processes can compute over bounded locations and are allowed to communicate if they share the same location. In Ambient Calculus, locations have a tree-like structure. Inspired by Cardelli’s model and by his own π-calculus, Robin Milner introduced the Bigraphical model [10] that combines a nested location model with a model of connectivity. A bigraph changes to another bigraph upon the application of Bigraph Reaction Rules (BRR). The top row of images in Figure 1 shows the bigraphical execution for Example 1. Symbolic-space programming provides an abstract spatial model with explicit relations between locations. These models are in general convenient for specifying and formally verifying high-level spatial behaviors. However, they abstract away the physical behaviors of machines and their environment, which are necessary to operate the machines. For example, the application of a BRR that moves a UAV from its current position to another location is executed in the symbolic model as soon as it is requested. At the physical level, a control action does not execute instantaneously. It might not even execute at all due to some adversarial action of the environment. A programmer must be able to write programs that can cope with this asynchrony and react to inadvertent behaviors. In this paper we introduce logical-space computing.
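Before turning to the logical level, it may help to see how little structure a physical-space mission specification carries. The sketch below generates a back-and-forth search pattern over the suspected area of Example 1 as a bare waypoint list; the generator and the output format are our own illustration and are not the actual Waypoint File Format of the Mavlink protocol.

```python
# Physical-space programming in the small: the mission is just a sequence
# of GPS waypoints. The lawnmower-pattern generator is a hypothetical
# illustration; real missions would use a format such as WFF/MWP.

def lawnmower(south_west, north_east, tracks):
    """Back-and-forth sweep over a rectangle given by its SW/NE corners."""
    (lat0, lon0), (lat1, lon1) = south_west, north_east
    step = (lat1 - lat0) / (tracks - 1)
    waypoints = []
    for k in range(tracks):
        lat = lat0 + k * step
        lons = (lon0, lon1) if k % 2 == 0 else (lon1, lon0)  # alternate direction
        waypoints += [(lat, lons[0]), (lat, lons[1])]
    return waypoints

# Suspected area from Example 1.
for lat, lon in lawnmower((36.94, -8.79), (37.04, -8.59), tracks=5):
    print(f"WAYPOINT {lat:.4f} {lon:.4f}")
# Note: a single mistyped coordinate silently changes the spatial behavior,
# which is exactly the risk attributed to physical-space programming above.
```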
In logical-space computing, the programmer manipulates a symbolic abstraction of the world, named the logical-space model, while the runtime system is in charge of manipulating a physical abstraction of the world, named the physical-space model. Both abstractions are loosely coupled by the logical-space semantics, which provides an asynchronous semantics for the execution of control actions and for the observation of the structure. In this paper we present the semantics of logical-space computing informally. A complete formal treatment is introduced in [12]. We show how the BigActor model [11] can be used for logical-space programming and describe a runtime system for programming mobile robots using bigActors. We conclude this paper with experimental results, where this paradigm is used to control a UAV and sensors performing an oil-spill monitoring mission.

2. LOGICAL-SPACE COMPUTING

In logical-space computing the programmer handles a symbolic spatial abstraction with a well-defined semantics of mobility, while the runtime system handles the physical execution. Logical-space computing provides a semantics to bridge these two spatial models.

2.1 Semantics

The logical-space computing semantics is modeled as a transition system over spatial-computing configurations. A spatial-computing configuration is denoted as \((\alpha \mid S \mid \eta)\), where \(\alpha\) is a set of spatial agents, \(S\) is a spatial structure, and \(\eta\) is a set of pending requests. A spatial agent is a computing entity with local state that can perform three commands: \(\text{observe}(q)\), \(\text{react}(x)\), and \(\text{control}(r)\). Command \(\text{observe}(q)\) requests an observation of the logical model specified by a query \(q\). Command \(\text{react}(x)\) assigns to a local variable \(x\) the value of a requested observation. Command \(\text{control}(r)\) requests the execution of a control action over the logical model specified by the reaction rule \(r\). We define a logical model as \(L = (\text{dom}(L), \sigma_L)\), where \(\text{dom}(L)\) denotes the set of locations of \(L\), i.e., the set of symbols, and \(\sigma_L\) is a set of relations over \(\text{dom}(L)\). Likewise, a physical model \(P\) has a set of physical locations \(\text{dom}(P)\) and a set of relations \(\sigma_P\) over \(\text{dom}(P)\). A spatial structure binds these two abstractions together. A spatial structure \(S\) is a tuple \((L, P, \beta, \gamma)\) where \(L\) is a logical model, \(P\) is a physical model, \(\beta : \text{dom}(L) \rightarrow \text{dom}(P)\) is the physical interpretation function that maps logical locations from \(L\) into physical locations in \(P\), and \(\gamma : \sigma_L \rightarrow \sigma_P\) provides an interpretation of the relations in \(L\) into relations in \(P\). We say that a structure \((L, P, \beta, \gamma)\) is consistent if the interpretation of locations from \(L\) to \(P\) preserves the relations in \(L\); e.g., in Figure 2 the parenting of the nodes in the bigraph is consistent with the containment relation over polygons. A structure \((L, P, \beta, \gamma)\) is locally consistent with respect to \(L'\) if \(L'\) is contained by \(L\) and \((L', P, \beta, \gamma)\) is consistent. Local consistency is an important property for the correctness of logical-space executions. This topic is discussed in depth in [12]. The logical-space computing semantics is modeled abstractly in order to fit different logical and physical models.
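A minimal sketch of the consistency property just defined, under the assumption that the only logical relation is parenting and that \(\gamma\) maps it to polygon containment, as in the paper's Figure 2. The code is ours, it uses the shapely geometry package, and the location names are hypothetical.

```python
# A sketch (ours, not the paper's implementation) of the consistency
# check for a spatial structure (L, P, beta, gamma): every logical
# parenting relation must be preserved by polygon containment.
from shapely.geometry import Polygon

def is_consistent(parent_of, beta):
    """parent_of: dict mapping a logical location to its logical parent.
    beta: dict mapping logical locations to shapely Polygons."""
    return all(
        beta[parent].contains(beta[child])
        for child, parent in parent_of.items()
    )

# Toy example: the UAV's footprint must lie inside the search area.
search_area = Polygon([(-8.79, 36.94), (-8.59, 36.94),
                       (-8.59, 37.04), (-8.79, 37.04)])
uav = Polygon([(-8.70, 36.99), (-8.69, 36.99),
               (-8.69, 37.00), (-8.70, 37.00)])
print(is_consistent({"uav0": "searchArea"},
                    {"uav0": uav, "searchArea": search_area}))  # True
```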
Figure 2 shows an example of a bigActor as a spatial agent that operates over a bigraphical model of the world. The physical world is modeled as polygons defined using GPS coordinates. The bigActor specified in Figure 2 uses a query language to observe the logical space for oil-spills and BRRs to move the UAV from its current location to a new one. \(\beta\) maps each bigraph node to a polygon. \(\gamma\) maps the bigraph parenting relation to a containment relation over polygons. The semantics is written in an operational style, largely influenced by [4, 5, 11]. It is formalized as a transition system over the space of spatial-computing configurations, specified by seven inference rules.

### 2.1.1 Computation

The rule denoted as \( \text{fun} : a \) models an internal computation performed by agent \( a \), i.e., the change of the local state of \( a \) specified by the semantics of a host programming language.

### 2.1.2 Observation

There are three rules for modeling observations. Rule \( \text{req} : a, \text{observe}(q) \) models an agent \( a \) requesting an observation, defined by query \( q \), of the logical-space model. The request is denoted as \( \text{OBS}(a, q) \) and is stored in the set of pending requests \( \eta \). Rule \( \text{sense} : \text{OBS}(a, q) \) models the runtime system taking an observation request \( \text{OBS}(a, q) \) from the set of pending requests, interpreting the query over the physical structure, and generating a new logical abstraction \( L_q \). The result is stored in the set of pending requests as \( \text{READY}(a, L_q) \). This rule is responsible for keeping the logical model and the physical model locally consistent with respect to the observed space, i.e., if two observed physical locations are related, then their logical counterparts are also related. Rule \( \text{rcv\_obs} : a, \text{react}(x) \) delivers an observation \( \text{READY}(a, L_q) \) to \( a \) by assigning \( L_q \) to the local variable \( x \). Note that observation is asynchronous, i.e., an agent first requests an observation, the runtime system gets the necessary data from sensors, and delivers the result as soon as possible.

### 2.1.3 Control

There are two rules for modeling control actions from spatial agents and one for modeling environmental effects. Rule \( \text{req} : a, \text{control}(R \Rightarrow R') \) models an agent \( a \) requesting a control action over the logical structure specified by the reaction rule \( R \Rightarrow R' \), where \( R \) specifies the part of the logical model to be changed and \( R' \) specifies how it is intended to be changed. The rule generates a request \( \text{CTR}(a, R \Rightarrow R') \) in the set of pending requests. Rule \( \text{actuate} : \text{CTR}(a, R \Rightarrow R') \) models the runtime system taking a request \( \text{CTR}(a, R \Rightarrow R') \), checking whether it can be applied over the logical and physical space models, and executing the rule over both models. The rule requires the spatial structure to be locally consistent with respect to \( R \) and keeps the structure locally consistent with respect to \( R' \). Note that if a single agent first observes the space that it intends to control, local consistency is ensured and the control action can be successfully executed. Nonetheless, in the presence of concurrency, one must ensure that the space being controlled by agents is free of race conditions. In [11] we provide sufficient conditions to cope with these concurrency issues.
The effects of the environment are modeled by rule \( \text{env} : P' \), which changes the physical model to \( P' \). The semantics of the logical-space program of Figure 2, whose logical-space execution is depicted in Figure 1, is the following sequence of configurations:

\[
C_0 \xrightarrow{\ \text{req}\,:\,a,\,\text{observe}(q)\ } C_1
\xrightarrow{\ \text{sense}\,:\,\text{OBS}(a,q)\ } C_2
\xrightarrow{\ \text{rcv\_obs}\,:\,a,\,\text{react}(x)\ } C_3
\xrightarrow{\ \text{req}\,:\,a,\,\text{control}(R \Rightarrow R')\ } C_4
\xrightarrow{\ \text{actuate}\,:\,\text{CTR}(a, R \Rightarrow R')\ } C_5
\xrightarrow{\ \text{env}\,:\,P'\ } C_6
\xrightarrow{\ \text{req}\,:\,a,\,\text{observe}(q)\ } C_7
\]

where \( C_i = (\alpha_i \mid S_i \mid \eta_i) \). The analysis of the execution trace shows the asynchronous nature of both observation and control.

### 3. RUNTIME SYSTEM

Next we present a runtime system for programming in logical space, where spatial agents are specified as bigActors, logical spaces are modeled as bigraphs, and physical spaces are modeled as polygons defined by GPS coordinates. Figure 3 shows the overall runtime system. The left-hand side of Figure 3 depicts bigActor instances running over the BigActor Runtime System (BARS). BARS provides the means for symbolic-space programming with bigActors. Our former implementation used BARS over a model checker responsible for simulating the bigraphical execution. The right-hand side of Figure 3 depicts the Logical-Space Execution Engine (LSEE) that extends BARS with the logical-space computing semantics. In this paper we present an implementation of the LSEE for programming UAVs and sensors used in an oil-spill monitoring scenario. Nonetheless, the implementation can easily be extended with plugins to address other kinds of sensors and actuators.

### 3.1 BigActor Runtime System

BigActors [11] are mobile agents that are embedded in bigraphical [10] models of space. Their concurrency follows Hewitt and Agha's actor model [4], i.e., they have local state and communicate by asynchronous message-passing. Location and mobility of bigActors are modeled using the bigraphical formalism. A bigActor is able to asynchronously make local observations of the bigraph and to exert control by requesting the execution of BRRs. The BigActor Language is implemented as a Scala embedded Domain Specific Language (DSL) [1]. It is an extension of the Scala Actor library with BigActor commands plus implicit conversions for achieving a domain-specific syntax. We use Scala for two reasons: (1) the concurrency model of Scala is the actor model; (2) the type system, together with higher-order functions and implicit conversions, makes Scala a powerful language for implementing DSLs. The Scala Actor library recently became deprecated in favor of the Akka Actor library. In order to cope with these changes in the Scala ecosystem, we are currently migrating the implementation to Akka Actors. BigActor instances are Scala actors. The instances send requests to the BigActor Scheduler, which schedules them according to a First-Come First-Served policy and manages their execution. Instances can send communication, observation, control, and migration requests. The interactions between the BigActor Scheduler and the bigraphical model of space are mediated by the Bigraph Manager. The Bigraph Manager is responsible for delivering fresh bigraphical observations and for executing control actions specified as BRRs.
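The following sketch (ours, not the actual BARS code) mimics how the set of pending requests \(\eta\) and the First-Come First-Served scheduling could be realized: observation requests are sensed into READY results and delivered later, while control requests are actuated against the world. The class and method names are assumptions for illustration.

```python
# A simplified sketch (ours) of the pending-request set eta and FCFS
# processing, mirroring the req / sense / rcv_obs / actuate rules of
# Section 2.1. Agents and the world are duck-typed stand-ins here.
from collections import deque, namedtuple

OBS = namedtuple("OBS", "agent query")        # pending observation
READY = namedtuple("READY", "agent result")   # observation ready to deliver
CTR = namedtuple("CTR", "agent rule")         # pending control action

class Runtime:
    def __init__(self, world):
        self.eta = deque()    # pending requests, First-Come First-Served
        self.world = world

    def observe(self, agent, query):           # rule req (observe)
        self.eta.append(OBS(agent, query))

    def control(self, agent, rule):            # rule req (control)
        self.eta.append(CTR(agent, rule))

    def step(self):
        req = self.eta.popleft()
        if isinstance(req, OBS):               # rule sense
            result = self.world.query(req.query)
            self.eta.append(READY(req.agent, result))
        elif isinstance(req, READY):           # rule rcv_obs
            req.agent.react(req.result)
        elif isinstance(req, CTR):             # rule actuate
            self.world.apply(req.rule)         # may fail physically!
```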
### 3.2 Logical-Space Execution Engine

The LSEE has three roles: serve as a middleware for sensors and actuators, generate bigraphs out of physical properties, and interpret BRRs into control commands that can be executed by a given actuator. Next we present the components that are responsible for these tasks.

#### 3.2.1 Middleware

The middleware component named ros_vehicle contains software drivers named plugins. Plugins are responsible for interacting with components that provide or consume spatial information. These components can be, for example, a GPS device, an autopilot, a computer vision system, or a cloud-based location service accessed over the internet. A plugin has a well-defined interface. It can subscribe to mobility commands, e.g., a waypoint command for an autopilot, and publish physical properties. A physical property may contain static information, such as a polygon describing the border of a city, or dynamic information, such as the location and connectivity of a UAV. Plugins are implemented over the Robot Operating System (ROS), which provides a publish-subscribe communication mechanism. For the oil-spill scenario we implemented five plugins. The Autopilot Plugin handles the execution of GPS waypoints over the autopilot and fetches the UAV state information, such as GPS location, velocity, and control authority. The AIS Plugin receives, decodes, and filters AIS messages from an onboard AIS receiver. The Camera Plugin uses a video camera driver to capture and process video frames. The ros_vehicle is also equipped with a Naming Service and a Communication Service. The Naming Service is responsible for assigning unique names to physical properties and can be implemented using different naming conventions. For example, the Naming Service implemented for the oil-spill scenario uses the autopilot serial number to identify the location of the UAV and the AIS Maritime Mobile Service Identity (MMSI) to identify the locations of the drifters. The Communication Service is responsible for sharing local observations between ros_vehicle instances. For the oil-spill scenario, the Communication Service is implemented using UDP as the transport protocol over a 3G network. The service is used to share physical properties between ros_vehicle instances at different ground stations.

#### 3.2.2 Generation of bigraphs

The Bigraph Driver subscribes to physical properties from the ros_vehicle and generates bigraphical abstractions. The parent of a bigraph node \(b\) is calculated by finding the smallest polygon that totally contains \(\beta(b)\), i.e., its physical interpretation. In order to comply with Milner's bigraph definition we must enforce that the resulting parenting relation forms a tree. As such, a polygon cannot be partially contained in another polygon; otherwise, the resulting parenting relation may fail to form a tree. This limitation can be removed by using Sevegnani's bigraphs with sharing [14]. Figure 4 depicts an example of the generation of a bigraph from physical properties produced by a network of vehicles and sensors.

Example 2. Consider a UAV uav0 that starts a mission under the control of gcs0 and, at a given point, hands control over to gcs1 on board a navy vessel. Figure 5 depicts this situation. Each operator has a local and limited observation of the world. The operator on the vessel does not know where the UAV is located until the handover has been successfully completed.
The use of the Communication Service allows the operators to have access to an extended bigraphical abstraction. With this information, both operators have access to the location of the UAV before and after the hand-over. Bigraph observations "flood" over the network of robots and will eventually converge to a distributed bigraph estimate.

#### 3.2.3 Generation of control commands

The BRR Driver translates BRRs into mobility commands that can be executed by devices that are interfaced by ros_vehicle plugins. In order to synthesize commands, the BRR Driver needs the physical interpretations of the nodes in the BRR. To derive the physical interpretation, the BRR Driver subscribes to physical properties from ros_vehicle. For example, the generation of a waypoint command to move a UAV to a given destination needs the GPS locations of the UAV and the destination. Figure 6 exemplifies the execution of a BRR for moving a UAV to the oil-spill location. The BRR Driver generates a mobility command that specifies a GPS waypoint command to the centroid of the polygon that defines the oil-spill. The mobility command is subscribed to by the Autopilot Plugin of the ros_vehicle instance. The Autopilot Plugin is responsible for managing the execution of the waypoint.

4. OIL-SPILL CASE STUDY

The field test uses a UAV with a camera and drifters with AIS modems and GPS to monitor an oil-spill. The oil-spill is emulated by a Navy vessel dropping 100 kg of popcorn 6 km south of the shore of Portimão, Portugal. This is a small spill of the kind that might be created by a large ship flushing its oil tanks, also known as bilge dumping. Bilge dumping is a major problem for small countries like Portugal with large maritime zones. Bilge dumping evidence is currently collected using satellite images correlated with AIS information from proximate vessels [15]. The field test aimed to assess the role of unmanned vehicles and sensors as complements to satellites for the collection of evidence. We used two kinds of UAVs developed at the Portuguese Air Force Academy under the PITVANT project, the Alfa and the Alfa-Extended (Figure 7(a)). The Alfa-Extended is a gas-powered UAV with a 3 m wingspan, equipped with a Piccolo autopilot for stable low-level control and a PC-104 computing board for high-level control and vision processing. Each UAV is equipped with a gimbaled optic camera and an AIS receiver. The Unmanned Aerial System included three ground control stations (GCS) denoted as gcs0, gcs1, and gcs2. gcs0 was situated in an airfield and was responsible for take-off and landing maneuvers, gcs1 was located at the shore and took control authority over the UAVs during emergencies, and gcs2 was located at the shore and was responsible for the UAV mission. In one particular scenario, gcs2 was located on a Navy vessel to extend the operational range of the UAV. The drifters used in this demonstration were AIS beacons commonly used for locating fishing nets. They were equipped with GPS and transmitted their position up to a range of 10 miles. Drifters were identified by unique MMSI numbers. The communication infrastructure included wireless communication links to connect the UAVs and the GCSs: 3G internet access to connect all GCSs, UHF communication radios for inland communication, and VHF radios for maritime communication. The European Maritime Safety Agency (EMSA) participated in the scenario, tasking a satellite to take a high-resolution optical picture of the oil-spill.
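Before turning to the lessons learned, here is a sketch of the two LSEE computations described in Sections 3.2.2 and 3.2.3: parenting by smallest totally-containing polygon, and waypoint synthesis at the polygon centroid. This is our illustration using shapely, not the project's code.

```python
# Sketches (ours) of two LSEE computations. Parenting (Section 3.2.2):
# the parent of node b is the smallest polygon totally containing
# beta(b). Waypoint synthesis (Section 3.2.3): a move-BRR becomes a
# GPS waypoint at the centroid of the destination polygon.
from shapely.geometry import Polygon

def parent(name, polygons):
    """polygons: dict of node name -> shapely Polygon."""
    target = polygons[name]
    candidates = [n for n, p in polygons.items()
                  if n != name and p.contains(target)]
    if not candidates:
        return None  # node is a root of the bigraph forest
    return min(candidates, key=lambda n: polygons[n].area)

def waypoint_for(destination_polygon):
    c = destination_polygon.centroid
    return (c.y, c.x)  # (lat, lon): shapely stores points as (x, y)
```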
Next we present the lessons learned from programming the oil-spill mission in logical space. Recall the bigActor defined in Figure 2. The bigActor requests to move the UAV to logical locations, e.g., oilSpill10110. Since the oil-spill moves over time, each execution of the same instruction at the logical level maps to a different instruction at the physical level. In other words, a new waypoint command must be generated each time the logical location changes its physical location. Prior to the use of logical-space programming, the operator had to manually specify these new waypoints in physical space, which was inconvenient and error-prone. With logical-space programming, the command MOVE_HOST_TO oilSpill10110 is the only one needed. The logical-space execution engine ensures that waypoints always have the correct physical coordinates.

Consider another bigActor, defined in Figure 8. The bigActor observes the bigraph with a query LINKED_TO_HOST and displays the result. The bigActor also has an alternative for matching a "handover" message, which results in a BRR handing over the control authority for uav0 to ground station gcs0. In our field test these messages were sent by another bigActor implementing a graphical user interface.

    BigActor hosted_at gcs2 with_behavior {
      observe(LINKED_TO_HOST)
      loop {
        case obs: Observation =>
          display(obs)
          observe(LINKED_TO_HOST)
        case "handover" =>
          control(HAND uav0 TO gcs0)
      }
    }

Figure 8: Code for bigActor handover.

Our operators were able to watch bigraphs evolve as the field test progressed. The logical abstraction proved particularly useful for UAV handovers, since it provided the operators with the means to be constantly aware of the UAV location and connectivity regardless of which ground station had control authority. This was provided by our distributed bigraph estimation protocol executing over the Internet, which allowed synchronizing the bigraphs at both ground stations. The prior practice was to watch the control screen provided by the autopilot vendor, which would only display information if the UAV was under the control authority of the respective ground station. Correct completion of a handover used to be ensured by radio communication between operators. This communication was discontinued as the operators came to understand and trust the displayed bigraph.

5. CONCLUSION

In this paper we introduce a new paradigm for spatial computing named logical-space programming. In logical-space programming, programmers manipulate a logical-space model, while the runtime system is responsible for mediating between this abstraction and the physical space. We introduce the logical-space computing semantics informally and describe the BigActor Runtime System for logical-space programming. The runtime system uses bigActors as spatial agents that operate over a bigraphical space model. The physical space model consists of polygons defined using geometrical coordinates. We demonstrated the use of logical-space programming in a case study where vehicles and sensors performed an environmental monitoring mission.

Acknowledgment

This work has been supported by the National Science Foundation (CNS1136141), by the National Research Network RISE on Rigorous Systems Engineering (Austrian Science Fund S11404-N23), by the Fundação para a Ciência e Tecnologia (SFRH/BD/43596/2008), and by the Portuguese MoD - PITVANT.
The authors want to thank the Portuguese Air Force, the Portuguese Navy, the European Maritime Safety Agency, and the Portimão Airfield.
{"Source-Url": "http://cpcc.berkeley.edu/papers/SWEC15.pdf", "len_cl100k_base": 6090, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21754, "total-output-tokens": 7353, "length": "2e12", "weborganizer": {"__label__adult": 0.0005002021789550781, "__label__art_design": 0.0004270076751708984, "__label__crime_law": 0.0005283355712890625, "__label__education_jobs": 0.0009708404541015624, "__label__entertainment": 0.00010865926742553712, "__label__fashion_beauty": 0.00022935867309570312, "__label__finance_business": 0.0003688335418701172, "__label__food_dining": 0.0004856586456298828, "__label__games": 0.0006871223449707031, "__label__hardware": 0.0018453598022460935, "__label__health": 0.0007905960083007812, "__label__history": 0.0004646778106689453, "__label__home_hobbies": 0.0001583099365234375, "__label__industrial": 0.0008144378662109375, "__label__literature": 0.0004413127899169922, "__label__politics": 0.0004744529724121094, "__label__religion": 0.0006442070007324219, "__label__science_tech": 0.15185546875, "__label__social_life": 0.00014150142669677734, "__label__software": 0.00807952880859375, "__label__software_dev": 0.8271484375, "__label__sports_fitness": 0.0004096031188964844, "__label__transportation": 0.0020503997802734375, "__label__travel": 0.000293731689453125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30490, 0.03163]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30490, 0.71636]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30490, 0.88949]], "google_gemma-3-12b-it_contains_pii": [[0, 4873, false], [4873, 10219, null], [10219, 16170, null], [16170, 20569, null], [20569, 24888, null], [24888, 30490, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4873, true], [4873, 10219, null], [10219, 16170, null], [16170, 20569, null], [20569, 24888, null], [24888, 30490, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30490, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30490, null]], "pdf_page_numbers": [[0, 4873, 1], [4873, 10219, 2], [10219, 16170, 3], [16170, 20569, 4], [20569, 24888, 5], [24888, 30490, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30490, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
d903219d5780868c573e22fd00780c55c20028ff
This article shows that the TEI tag set for feature structures can be adopted to represent a heterogeneous set of linguistic corpora. The majority of corpora are annotated using markup languages that are based on the Annotation Graph framework, the upcoming Linguistic Annotation Format ISO standard, or according to tag sets defined by or based upon the TEI guidelines. A unified representation comprises the separation of conceptually different annotation layers contained in the original corpus data (e.g., syntax, phonology, semantics) into multiple XML files. These annotation layers are linked to each other implicitly by the identical textual content of all files. A suitable data structure for the representation of these annotations is a multi-rooted tree, which in turn can be represented by the TEI and ISO tag set for feature structures. The mapping process and representational issues are discussed, as well as the advantages and drawbacks associated with the use of the TEI tag set for feature structures as a storage and exchange format for linguistically annotated data.

1 Introduction

This article presents a representation format for the exchange of documents that contain complex markup. It is based on the TEI tag set for encoding feature structures and shows that this TEI tag set not only qualifies as a well-suited meta-format for annotated linguistic corpora, but that it can also serve as a method to use XML for the annotation of the otherwise unannotatable. XML's most fundamental data structure is a tree. While trees have several advantages for software developers as well as for users who mark up textual data, they are able to express nested annotation structures only. The annotations of a document may constitute one or several logical layers, as long as the bracketings within a single layer or across layers never cross one another. Linguistically annotated corpora, however, do not necessarily satisfy this constraint. They may contain crossing edges and, thus, require a data structure that is more complex than a simple tree. Several solutions for this problem have been proposed (see, for example, DeRose, 2004; Carletta et al., 2007). One is to factor such complex and possibly multi-layered annotations into a multi-rooted tree, i.e., into several trees spanning the same leaves. Multi-rooted trees constitute a data structure that is more general than a single tree, but not as unrestricted as an Annotation Graph (Bird and Liberman, 2001). This article shows how multi-rooted trees can be represented in an integrated way, by using the TEI tag set for the annotation of feature structures. The paper is structured as follows: section 2 presents the underlying technological and methodological framework, i.e., an architecture with the aim of fostering the sustainability of linguistic resources, and describes the task of representing linguistically analysed corpora. Section 3 illustrates the use of the TEI tag set for the representation of feature structures as a storage and interchange format for multi-layer annotations. In section 4 two prominent XML-based approaches to modelling multi-layer annotation, XCONCUR and the NITE Object Model (NOM), are briefly described and compared to the feature-structure-based representation. Section 5 concludes the article with a critical discussion of the practical usability of this approach.

2 The GENAU Approach

The work presented in this article is part of a research effort on the sustainability and preservation of language data.
A generic framework for assuring the long-term accessibility of heterogeneous linguistic resources was developed within the project Sustainability of Linguistic Data (see, for example, Wörner et al., 2006; Rehm and Witt, 2007; Witt et al., 2007; Rehm et al., 2008a,b). An important aspect of the overall architecture of this project is a specific approach to handling and processing several corpus representation formats, the Generalised Architecture for Sustainability of Linguistic Data (GENAU, see Fig. 1). It includes a mechanism for the representation of complex linguistic corpora and a component for the mapping of linguistic tag sets into an ontology. This article only deals with the representation of corpora, visualised on the right-hand side of Fig. 1.

Figure 1: The two main phases of the GENAU approach

Since linguists investigate corpora from different theoretical points of view, linguistic corpora typically are annotated on multiple levels of description, such as, for example, morphology, syntax, and semantics. To represent these annotations uniformly, the data is XML-encoded concurrently. As a result of this encoding strategy, a separate document instance exists for each annotation level. This approach can be characterised as redundant encoding in multiple forms (Sperberg-McQueen and Burnard, 1994). However, the redundant encoding of different kinds of information does not account for the fact that there might be interrelations between the different annotation layers. This disadvantage can be avoided if the primary data, i.e., the textual content, is identical across the respective document instances (Witt, 2004). This guarantees that the text functions as an implicit link between the separately realised annotation layers. It shall be noted that an approach along such lines is somewhat controversial among members of the markup community. This is mainly due to criticism connected to issues such as data consistency, layer comparison, perceived redundancy, and the availability of seemingly more attractive integrative formats. From the point of view of sustainability, the multiple encoding approach does have two overwhelming advantages: since the markup/text ratio is relatively low, the XML-encoded files can be used with off-the-shelf XML software and, furthermore, they are human-readable. Secondly, since linguists are often interested in only one (or a small number) of the heterogeneous annotation levels, they can directly access those documents which only contain the markup of these annotation levels. The multiple annotation approach provides a very elegant solution to questions that are of importance with regard to general annotation problems: (1) how to handle the problem of annotating overlapping hierarchies and (2) how to deal with heterogeneous tag sets.¹ Furthermore, most of the points of criticism can be rebutted. For example, the concerns regarding data consistency lose their bite with the advent of dedicated editing tools for the creation of primary-data-identical annotation files. As a further example, much of the remainder of this article deals with the transformation from multiple annotation documents to an integrated representation format, i.e., one that is encoded within the constraints of the TEI tag set for feature structures. A point to be learned from this is that the advantages of other approaches can be married with the specific advantages of the multiple annotations approach by supplementing it with specific software tools.
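As a rough illustration of the implicit-link idea, the following sketch (ours, not the project's tooling) checks that a set of annotation files share identical primary data by comparing their concatenated character content; the file names are hypothetical.

```python
# A minimal sketch (not the project's actual tooling) of the
# consistency check that makes multiple annotation files linkable:
# all layers must share identical primary data, i.e., identical
# character content.
import xml.etree.ElementTree as ET

def primary_data(path):
    """Concatenate all character data of an XML file."""
    root = ET.parse(path).getroot()
    return "".join(root.itertext())

def layers_are_linkable(paths):
    texts = [primary_data(p) for p in paths]
    return all(t == texts[0] for t in texts)

# e.g. layers_are_linkable(["morph.xml", "syll.xml"])
# (file names illustrative)
```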
Though most linguistic corpora to be archived by the sustainability project are already encoded in XML-based formats, they are still heterogeneous from a conceptual point of view. The majority of corpora are annotated using markup languages that are based on the Annotation Graph framework (Bird and Liberman, 2001), the upcoming Linguistic Annotation Format ISO standard (Ide, 2007), or according to several tag sets defined by or based upon the TEI guidelines. The GENAU approach comprises the separation of individual annotation layers contained in the original corpus data into multiple XML files, so that each file represents a single annotation layer only. Several automatic or semi-automatic tools and XSLT stylesheets have been developed to normalise and to transform the original data formats into multiple XML files (see Fig. 1).

¹ Possible alternative solutions or workarounds to question (1) include those also mentioned in the TEI guidelines (CONCUR, milestone elements, fragmentation technique, virtual joins) and, probably the most widely applied technique, stand-off annotation (see also section 4). The namespace standard provides a possible solution to question (2), but not to question (1).

The description of the GENAU approach given above focuses on the representation of the data within multiple XML files. A different perspective on markup technology addresses the abstract model rather than the syntax of the annotations used. From that point of view, an XML document is a tree structure, i.e., a set of nodes connected by directed edges. The nodes in the tree represent XML elements; the leaves of this tree are the characters the text consists of. All but one node of the tree must have a single parent. The node without a parent is called the root node. Of course, XML documents are only one of multiple ways to represent tree structures by means of a linear stream of text data. An alternative linearisation of trees is the labeled bracketing format often used in linguistics, e.g., (s (n mary) (vp (v supports) (np (det the) (n union)))). The abstract model of the multiple XML files used by the GENAU approach is not a single tree but several trees. Since each of these trees spans the same leaves, such a structure is called a multi-rooted tree. A multi-rooted tree has as many roots as there are encoded annotation layers. The multiple files used by the GENAU approach can be regarded as a linearisation of a multi-rooted tree. The next section describes an alternative approach to representing these structures.

3 The TEI Tag Set for the Annotation of Feature Structures as a Representation Format for Multi-rooted Trees

In addition to the encoding in multiple files, other representation formats can be used to represent multi-rooted trees. One of these formats is based on the TEI tag set for the representation of feature structures (Sperberg-McQueen and Burnard, 2001). Although this additional tag set was already included in version P3 of the TEI guidelines (Sperberg-McQueen and Burnard, 1994) and adopted as an ISO standard (ISO 24610-1:2006, 2006) in 2006, it is used only rarely in academic applications. This tag set allows for the merging of all annotation information into a single XML document instance – at the same time it enables us to mark up phenomena that are hard or almost impossible to annotate using conventional approaches. In many branches of formal linguistics, feature structures are a common representation format.
For example, several variants of generative grammar are grounded on the descriptive device of feature structures and the most important operation defined upon them: unification.² From a mathematical point of view, feature structures can be described as partial functions from sets of features (also: attributes) onto sets of values. The values can be atomic or complex. As complex values are feature structures themselves, feature structures can be nested. Another mathematical stance on feature structures is the directed acyclic graph perspective; feature structures can be visualised straightforwardly in this way.

² Shieber (1986) gives an introduction to unification-based grammars; Carpenter (1992) provides the formal background on feature structures. Concerning the operation of unification, the result of the unification of two feature structures can be intuitively conceived as the fusion of the information contained in both feature structures, if the respective information packages are compatible with each other. Witt et al. (2005) describe an application of unification for XML documents with concurrent markup.

Since XML documents and feature structures are variants of directed acyclic graphs, there might be a straightforward mapping from one type of structural configuration onto the other. On closer investigation, however, some important differences can be uncovered. Sequential order, for example, plays an important role among the branches of subtrees of XML document trees, but it does not among the corresponding attribute-value pairs situated on an identical level within feature structures. Nevertheless, it is still possible to realise the desired mapping from the more restrictive to the less restrictive format using special representational means which have to be interpreted specifically. The use of feature structures for the representation of multi-layered annotations is illustrated by means of a simplified example of a two-tier annotation of a word. The German verb *geben* ("to give") is annotated morphologically and phonologically. The first annotation in (1) – or, correspondingly, the first tree structure in (2) – depicts the morphological annotation, the second one shows the phonological structure. Both annotations are marked up as single-rooted trees.

(1)
<w>
  <m type="lexical">geb</m>
  <m type="flexive">en</m>
</w>

<w>
  <syll n="s1">ge</syll>
  <syll n="s2">ben</syll>
</w>

(2) [the annotations of (1) drawn as two trees over the shared character sequence]

Let us compare the attribute value matrices visualised as trees in Fig. 2: in order to express the information contained in both trees by means of a single feature structure, the concurrent annotation layers could be embedded under different top-level features, e.g., under the features tier1 and tier2, whereas the primary data are segmented into single indexed characters and represented under the remaining top-level feature data. Generally, sequential relations such as those among the indexed characters under data are expressed by means of appropriate FIRST/REST value pair assignments that correspond to list notations. The solution for the representation of hierarchical relations consists in a similar mechanism: the exploitation of a special feature content which also embeds list-like feature structures such as those under data. Attributes are represented in a straightforward way by means of a mapping onto the value of the attributes feature. The anchoring of the annotation to the data is realised via a reference mechanism known as structure sharing or reentrancy, which is commonplace for feature structures.
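To illustrate the FIRST/REST list encoding described above, here is a small sketch (ours) that folds the segmented primary data of "geben" into a right-nested structure, with feature structures modelled as plain Python dicts.

```python
# A sketch (ours) of the FIRST/REST list encoding: the primary data
# "geben" is segmented into indexed characters and folded into a
# right-nested feature structure, modelled here as Python dicts.

def to_first_rest(items):
    """Fold a sequence into nested {'FIRST': ..., 'REST': ...} pairs;
    the empty list is represented as None (an 'elist' value)."""
    fs = None
    for item in reversed(items):
        fs = {"FIRST": item, "REST": fs}
    return fs

chars = [{"char": c, "index": i + 1} for i, c in enumerate("geben")]
data = to_first_rest(chars)
print(data["FIRST"])          # {'char': 'g', 'index': 1}
print(data["REST"]["FIRST"])  # {'char': 'e', 'index': 2}
```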
This structure sharing can be interpreted as token-identity and is indicated using co-indexed boxes. The TEI tag set for the representation of feature structures can be used to encode this feature structure in an XML-based format. Fig. 3 shows the XML version of the attribute value notation depicted in Fig. 2.³ The backbone of the encoding consists in the use of fs elements for feature structures and f elements for features. From a conceptual point of view, this approach can be thought of as a "retranslation" to XML.

³ Due to space restrictions, Fig. 3 only displays the representation of the first top-level feature of the attribute value matrix, i.e., the feature structure underneath data.

Figure 3: TEI-based feature structure representation of the AVM example

However, at the level of the automatic methods devised in order to realise the transformation into this exchange format, both steps (the transformation into a feature structure format and the retranslation into XML) are collapsed into a single step, since the feature structure output can be directly represented as XML code that conforms to the TEI tag set standard. The automatic methods that bring about the transformation consist in the subsequent execution of code written in Perl⁴ and the application of XSLT stylesheet processing. The Perl code checks for the identity of the primary data among the multiple files corresponding to the different annotation layers to be integrated, while the XSLT stylesheet contains the actual transformation rules.

⁴ Parts of the code are based on the NEXUS tool developed by Maas (2003).

4 Comparison with Alternative Representation Formats

The list of possible alternatives to the use of TEI feature structures includes, e.g., XCONCUR and the stand-off annotation based approach of the NITE project. XCONCUR (Hilbert et al., 2005; Schonefeld and Witt, 2006) can be characterised as a means of augmenting the XML standard with the optional CONCUR feature of the XML predecessor SGML – the syntax of XCONCUR is reminiscent of SGML with the CONCUR feature enabled. The basic mechanism is to prefix each element with an obligatory identity label for its respective annotation layer, conforming to this simple scheme: (layer-id)name. Here, of course, layer-id is a placeholder for the annotation layer label and name stands for the element's name. XCONCUR documents have to be well-formed. This condition is related to XML well-formedness via a projection to a set of well-formed XML documents. Each member of such a set can be conceptualised as representing the information content of a respective annotation layer. It is generated by way of decomposition from the original XCONCUR document, i.e., by stripping the non-pertinent parts (see Witt et al., 2007, also with respect to constraint-based cross-level validation). The above example of a morphological and syllabic annotation of the German verb geben ("to give") can be represented in XCONCUR as follows:

(3)
<?xconcur version="1.1" encoding="utf-8"?>
<(l1)w>
<(l2)w>
<(l1)m type="lexical">
<(l2)syll n="s1">ge</(l2)syll>
<(l2)syll n="s2">b</(l1)m>
<(l1)m type="flexive">en</(l1)m>
</(l2)syll>
</(l2)w>
</(l1)w>

In comparison to the (not even completely reproduced) TEI-based representation in Fig. 3, this representation format is leaner. Obviously, this XCONCUR document is not a well-formed XML document, due to the overlapping elements. The members of the projectable set, however, are in fact well-formed XML documents.
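The projection from an XCONCUR document to the well-formed XML document of a single layer can be sketched as follows; this is our deliberately naive, regex-based illustration (a real processor would parse the document properly), applied to the body of example (3).

```python
# A sketch (ours, and deliberately naive) of the projection from an
# XCONCUR document to the well-formed XML document of a single layer:
# keep tags prefixed with the chosen layer id, strip all other tags,
# and drop the layer prefix itself.
import re

TAG = re.compile(r"</?\((?P<layer>[^)]+)\)(?P<rest>[^>]*)>")

def project(xconcur_body, layer):
    def repl(m):
        if m.group("layer") != layer:
            return ""                      # strip foreign-layer tags
        slash = "/" if m.group(0).startswith("</") else ""
        return "<%s%s>" % (slash, m.group("rest"))
    return TAG.sub(repl, xconcur_body)

body = ('<(l1)w><(l2)w><(l1)m type="lexical"><(l2)syll n="s1">ge'
        '</(l2)syll><(l2)syll n="s2">b</(l1)m><(l1)m type="flexive">en'
        '</(l1)m></(l2)syll></(l2)w></(l1)w>')
print(project(body, "l1"))
# <w><m type="lexical">geb</m><m type="flexive">en</m></w>
```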
Finally, like the TEI-based format, XCONCUR is an integrative format, i.e., the whole information is packed into a single document. Both formats can be used as storage and exchange formats for multi-hierarchically annotated linguistic data. The NOM format exemplifies an XML-based approach to stand-off annotation. In particular, the format separates each coding for every observation into a separate file (Carletta et al., 2003). A coding consists of one or more layers whose annotations can be arranged hierarchically as a tree structure. For example, we may have separate phonological, morphological, syntactic and pragmatic codings for natural language data. With regard to the "geben" example, we have simple morphological, syllabic and character codings. An observation consists of a piece of data to be annotated, e.g., a dialogue or, here, just a token of the verb "geben". The different coding files have to conform to the XML format. Links between them can be expressed via XLink/XPointer mechanisms or according to an older, project-specific syntax that is also used in our example representation below.⁵

(4)
<root id="01.characters">
  <character id="c_1" start="0" end="1" char="g" />
  <character id="c_2" start="1" end="2" char="e" />
  <character id="c_3" start="2" end="3" char="b" />
  <character id="c_4" start="3" end="4" char="e" />
  <character id="c_5" start="4" end="5" char="n" />
</root>

(5)
<root id="01.syllabic">
  <w id="w_1">
    <syll id="s_1">
      <child href="01.characters.xml#id('c_1')" />
      <child href="01.characters.xml#id('c_2')" />
    </syll>
    <syll id="s_2">
      <child href="01.characters.xml#id('c_3')" />
      <child href="01.characters.xml#id('c_4')" />
      <child href="01.characters.xml#id('c_5')" />
    </syll>
  </w>
</root>

⁵ Relations among different codings and the shared data give rise to the multi-rooted tree perspective.

The representation is separated into the three coding files listed as (4), (5), and (6). The names of these files are 01.characters.xml, 01.syllabic.xml, and 01.morphological.xml, respectively. The 01. prefix binds the codings to the same observation piece. Annotations at the syllabic (5) and morphological (6) levels are grounded via a reference mechanism that exploits IDs that have been attached to elements at the "foundational" character coding level (4). Just like our TEI-based feature structures, but unlike XCONCUR, the NOM format uses XML and, therefore, inherits its advantages. However, unlike its two representation alternatives, the NOM format separates the information across different document instances and could therefore be criticised as being not integrative in a strict sense (at least in the sense of a narrow reading of that term). With regard to document length considerations, the NITE representation format seems to occupy the middle ground. On the one hand, it is not as lean as an XCONCUR representation; on the other, the NOM representations are not as lengthy as those produced by the TEI feature structure format.

5 Conclusions

We have shown that it is possible to use the TEI tag set for the representation of feature structures as a meta-representation format for linguistic annotation resources. The underlying architecture is described, as well as the conceptual approach and issues in the transformation from multiple XML annotation files to single-file, XML-based, TEI-adherent feature structure representations.
The move to a feature structure meta-format is an interesting research question in its own right, since feature structures are such a common representation formalism among linguists adhering to different grammar theories today. However, the ability to represent one's data in that format should not only stir up some level of interest among researchers familiar with the formalism – it might also open up new possibilities with regard to subsequent algorithmic processing developed against that background: Witt et al. (2005) demonstrate an example of "crossing over" between classic themes in computational linguistics and new fields of application in text technology. However, the use of TEI-based feature structure representations also has a disadvantage. As the short and rather simple examples above illustrate, the respective output documents tend to get fairly long, and they are also somewhat more cumbersome to inspect manually. Hence, this format seems to be more appropriate as a storage and analysis format for machines to process than as a human-oriented presentation format.

**Acknowledgements**

The research presented in this article was funded by the German Research Foundation (DFG).

References

Rehm, Georg; Schonefeld, Oliver; Witt, Andreas; Lehmberg, Timm; Chiarcos, Christian; Bechara, Hanan; Eishold, Florian; Evang, Kilian; Leshtanska, Magdalena; Savkov, Aleksandar and Stark, Matthias (2008b): "The Metadata-Database of a Next Generation Sustainability Web-Platform for Language Resources". In: *Proceedings of the 6th Language Resources and Evaluation Conference (LREC 2008)*. Marrakech, Morocco.
{"Source-Url": "http://georg-re.hm/pdf/Witt-et-al-SusTEInability.pdf", "len_cl100k_base": 4769, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 28437, "total-output-tokens": 6601, "length": "2e12", "weborganizer": {"__label__adult": 0.00109100341796875, "__label__art_design": 0.003660202026367187, "__label__crime_law": 0.0015506744384765625, "__label__education_jobs": 0.0166778564453125, "__label__entertainment": 0.0011224746704101562, "__label__fashion_beauty": 0.0006999969482421875, "__label__finance_business": 0.0015859603881835938, "__label__food_dining": 0.0008215904235839844, "__label__games": 0.001903533935546875, "__label__hardware": 0.0011758804321289062, "__label__health": 0.0015478134155273438, "__label__history": 0.0020294189453125, "__label__home_hobbies": 0.00025463104248046875, "__label__industrial": 0.0012378692626953125, "__label__literature": 0.10235595703125, "__label__politics": 0.0017242431640625, "__label__religion": 0.0020656585693359375, "__label__science_tech": 0.392333984375, "__label__social_life": 0.0007829666137695312, "__label__software": 0.0643310546875, "__label__software_dev": 0.398681640625, "__label__sports_fitness": 0.0006442070007324219, "__label__transportation": 0.0014410018920898438, "__label__travel": 0.00033545494079589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27018, 0.03145]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27018, 0.56561]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27018, 0.87202]], "google_gemma-3-12b-it_contains_pii": [[0, 2061, false], [2061, 4358, null], [4358, 8102, null], [8102, 11281, null], [11281, 12600, null], [12600, 14758, null], [14758, 14830, null], [14830, 17376, null], [17376, 19657, null], [19657, 21539, null], [21539, 22414, null], [22414, 26531, null], [26531, 27018, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2061, true], [2061, 4358, null], [4358, 8102, null], [8102, 11281, null], [11281, 12600, null], [12600, 14758, null], [14758, 14830, null], [14830, 17376, null], [17376, 19657, null], [19657, 21539, null], [21539, 22414, null], [22414, 26531, null], [26531, 27018, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27018, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27018, null]], "pdf_page_numbers": [[0, 2061, 1], [2061, 4358, 2], [4358, 8102, 3], [8102, 11281, 4], [11281, 12600, 5], [12600, 14758, 6], [14758, 14830, 7], [14830, 17376, 8], [17376, 19657, 9], [19657, 21539, 10], [21539, 22414, 11], [22414, 26531, 12], [26531, 27018, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27018, 
0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
5bcdcac36f3bf67e5abdf24908583c662a08c8da
Thinking for a Change

Origins

The "Thinking Processes" originated from the Theory of Constraints, the ideas for process improvement developed by Eliyahu Goldratt. He realized that he was becoming a bottleneck in the dissemination of the ideas behind the Theory of Constraints; the Thinking Processes are the set of tools and heuristics that Goldratt himself uses. The Theory of Constraints' process optimisation technique, the "5 focusing steps", is easily applied to physical, logistical processes like manufacturing, because the bottleneck and flows are visible. Applying the same ideas to more abstract problems in knowledge work, or to improve rules and organisations, is a lot more difficult. The Thinking Processes tools allow us to visualize this kind of situation. The Thinking Processes were introduced in Goldratt's second business novel, "It's Not Luck". "Thinking for a Change" is the title of a book about the Thinking Processes, written by Lisa Scheinkopf.

Goals of the tools

- Verbalize and make explicit intuition about systems and situations
- Allow a group to analyse and discuss situations, to come to a shared understanding
- A structured method to uncover hidden assumptions and question them in a constructive manner
- Create consensus before a major decision, by involving all affected stakeholders ("Nemawashi")
- Provide a structured, step-by-step approach to systems thinking that helps participants to focus on the goals to achieve.

The different tools

- Current Reality Tree: helps you to find one or a few root causes for the problems you're facing. Now you know where to intervene to really solve the problems.
- Future Reality Tree: helps you to visualize the effects of a proposed intervention, including potential undesirable effects. Now you know if your intervention will result in the desired effect, and you know the extra interventions you will need to undo or avoid negative side effects.
- Transition Tree: allows you to map a path from where you are to where you want to be, by laying out a series of actions that will bring you closer to the goal, via a series of intermediate milestones.
- Prerequisite Tree: allows you to plan back from a desired state, by looking for actions that overcome obstacles.
- Evaporating Cloud: allows you to resolve conflicts between different courses of action, by surfacing and examining assumptions.

**Simple Notation**

**Entity**

An entity is an element of the system. It describes a certain state.

- The battery is dead

**Cause – Effect**

- The battery is dead → Car doesn't start

The car doesn't start (effect) BECAUSE the battery is dead (cause).

**And Connector**

- The battery is dead → Car doesn't start
- We have no spare battery

The car doesn't start BECAUSE the battery is dead AND we have no spare battery.

**Assumption**

- The battery is dead → Car doesn't start
  (assumption: Cars need batteries to start)

The car doesn't start BECAUSE the battery is dead IS ONLY TRUE IF cars need batteries to start.

**Action (or injection)**

- Charge battery → Car starts

BECAUSE we've charged the battery, the car starts.

**Making a Current Reality Tree**

Find the root cause of undesirable effects.

Step 1: Describe the system, its goal and the symptoms

1. Determine the scope of the system: what is the system we're analysing? What are its boundaries?
2. What is the goal of the system? Why does it (continue to) exist? What are the major measures of success?
3. Brainstorm a few (< 5) undesirable attributes of this system. What's bothering you?
What could be done better? Don't analyse, just write them down. Use simple, definite sentences. These are your initial entities.

---

**Example:**

1. **System:** This is about the IT organisation (several hundred people) that supports the Belgian Postal system. More specifically, about the development teams that write the software and the operations teams (admins) that install and support the software.
2. **The goal of the system** is to create and maintain the IT systems that allow the business to offer its services and generate value. We can measure this by looking at "business value" generated vs cost. To make projects more manageable, more focused and to deliver value sooner, developers would like to make smaller releases, which are installed sooner and more often, thereby increasing business value. However, this is not allowed: because installing software is difficult and risky, more frequent releases would increase costs for operations.
3. **The goal of the tree** is to find the root causes for the cost and risk of installations. If we can tackle those, we might be able to release more frequently. See the "Evaporating Cloud" later in this document.
4. **Initial undesirable entities:**
   - Installing is difficult
   - Installing is risky

Step 2: Find effect-cause-effects. "Why does this happen?"

1. Start with the worst entity. Which one would you like to get rid of most?
2. Ask yourself: "Why <entity>?"
   - If the answer is a new entity, create it
3. Connect the cause to the effect
4. Repeat the question for the other effects to work in the breadth of the diagram
5. Or ask the "Why" question for the causes to drill deeper
   - You might find more than one cause for an effect
   - You might find more than one effect from a cause

Note: in the "Toyota Way" there is a technique called the "5 Whys", indicating that you should look for the root cause approximately 5 levels down from the original symptom.

Example: We start with the following entities:

- Release is difficult to install
- Release is risky to install

Q: "Why is installing difficult?"
A: "Installing is difficult BECAUSE it requires many manual steps" (new entity)
A: "Installing is difficult BECAUSE it usually involves many systems" (new entity)

Q: "Why is installing risky?"
A: "Installing is risky BECAUSE it usually involves many systems"

Q: "Why do installs require many manual steps?" (Digging deeper)
A: "Installs require many manual steps BECAUSE developers don't know how to automate tasks using scripts".

The tree so far, written as a list of cause → effect links:

```
Developers can't script      → Requires many manual steps
Requires many manual steps   → Release is difficult to install
Involves many systems        → Release is difficult to install
Involves many systems        → Release is risky to install
```
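The cause/effect structure built in Step 2 can also be captured in a few lines of code. The sketch below (our illustration, not part of the Thinking Processes canon) stores the tree above as a list of cause → effect edges and lists candidate root causes: entities that cause others but have no recorded cause themselves.

```python
# A sketch of a Current Reality Tree as a cause -> effect digraph,
# with a helper that lists candidate root causes: entities with
# outgoing causal edges but no recorded cause of their own.

edges = [  # (cause, effect), taken from the example above
    ("Developers can't script", "Requires many manual steps"),
    ("Requires many manual steps", "Release is difficult to install"),
    ("Involves many systems", "Release is difficult to install"),
    ("Involves many systems", "Release is risky to install"),
]

def root_causes(edges):
    causes = {c for c, _ in edges}
    effects = {e for _, e in edges}
    return sorted(causes - effects)

print(root_causes(edges))
# ["Developers can't script", 'Involves many systems']
```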
**Step 3: Legitimate reservations, testing the model**

The “legitimate reservations” are critical questions to ask when making a tree. When you’ve added a few entities and/or relations, stop to ask these questions, to clarify and simplify the tree. This is the moment to make assumptions explicit so that everybody participating in the exercise agrees on the current state of the tree, before going further. Important: only the legitimate reservations are allowed. Don’t accept any kind of complaint, “Yeah but” or “That won’t work”. There are two categories of reservations. Test them in the given order.

1. **Level 1 reservations** involve a single entity or relation at a time
   a. **Clarity**: does everyone understand the entity description the same way? Can you make the description clearer, simpler, less ambiguous? Restate the entity in a different way to verify if everyone understands the entity like you do.
   b. **Entity existence**: does everyone agree that the entity exists? How can we “see” the entity? What proof do we have of its existence?
   c. **Causality existence**: is everyone convinced that the entity really causes the effect? What are the assumptions behind that relation?
2. **Level 2 reservations** involve more than a single entity and relation
   a. **Additional Cause**: Is the given entity the only possible cause for the effect? What else could have that effect? Could that additional cause also exist in the system? If so, how could we tell? Add the additional cause if you think it plays a role in creating the effect.
   b. **Insufficient Cause**: is the given entity sufficient to create the given effect or must it be combined with another entity? If so, add the other cause and indicate that they must occur together to cause the effect.
   c. **Predicted Effect**: can we imagine another effect caused by a given entity? If so, is this additional effect visible in the system? If it is, that strengthens the case for the existence of the entity. How could we disprove the existence of the entity? Can we perform (simulate) this test?

---

**Example:**

**Clarity**: “Installing is difficult” => “Installing takes more than ½ hour”
**Existence**: “Installing takes more than ½ hour” is easy to see. “Installing is risky” could be deduced from the number of installations that have to be redone.
**Causality**: “Installations have many manual steps” BECAUSE “developers don’t know how to automate using scripts”. Assumption: most of the steps in the installation can be automated using scripts. Verification: some applications use similar technology, yet have almost fully automated installs.
**Additional Cause**: “Installations have many manual steps” could also be caused by “Developers don’t have the time/motivation to automate their installation”.
**Insufficient Cause**: “Installations have many manual steps” BECAUSE “Developers don’t automate them (for whatever reason)” AND “Nobody else but developers automates installs”.

---

**Predicted Effect:** IF “Developers don’t know how to automate tasks using scripts” WE EXPECT THAT “no other tasks (e.g. builds) are automated”. Can we verify that?

**Step 4: Digging deeper and pruning the tree to find the root cause**
1. If an effect has multiple causes, verify the “weight” of each cause. If an effect is mostly caused by one or a few entities and rarely by other entities, prune the causes that do not contribute much to the effect. Use the 80/20 rule.
2. Dig deeper by asking WHY questions until you find one or a few entities that are responsible for causing most of the effects.
3. Take care not to create entities that are too abstract. Keep on applying the legitimate reservations.

Example:
Q: “Why are installations so risky?”
A: “Because admins don’t understand the applications they install and maintain well”
Q: “Why don’t developers know how to automate tasks using scripts?”
A: “Because they’re never involved in (and don’t know about) installing and maintaining servers”
Q: “Why are developers not involved?”
A: “Because the development and operational organisations are totally separate (separate management, separate budget)”
Q: “Why don’t the admins understand the applications they install and maintain well?”
A: “Because they’re not involved in the design, build and test of the application”.
A: “AND Because the systems have many dependencies on other systems”.
Q: “Why are admins not involved?”
A: “Because the development and operational organisations are totally separate (separate management, separate budget)”

We’ve cleared away some entities that don’t directly contribute to the problem, e.g. the predicted effect that no other tasks have been automated. This is indeed the case: teams that don’t automate their install have no other automated tasks. More importantly, we have found a core cause of many of the problems: the developers and admins are part of totally separate organisations, with separate budgets and management. Both organisations have different goals:
- The goal of the development organisation is to create valuable systems, as fast and cheap as possible. In Throughput Accounting terms: to maximize **Throughput** (business value), while minimizing **Investment**.
- The goal of the operations organisation is to keep maintenance costs as low as possible. In Throughput Accounting terms: to **minimize Operating Expense**.

If we look at the diagram again, we can see another potential root cause: the architecture of the systems is very complicated, with many dependencies. This makes the systems harder to understand and makes installs harder to automate (as an install might involve many servers). We can tie this back to the separation of the organisations:
- As admins are not involved in architecture and design, they can’t influence the architecture.
- As developers are not involved in maintenance, they don’t feel the pain of keeping these complicated architectures running.

This strengthens the case that the organisational separation is the root cause. What can we do about this problem?

**Making a Future Reality Tree**

Explore the intended and unintended consequences of an action.

**Step 1: Start the tree with an injection and a goal**
1. Create an entity that represents the goal you want to reach. This could be the inverse of an undesired effect or root cause from a current reality tree
2. If you have more goals, state them as entities. Don’t try to reach too many goals at once!
3. Don’t compromise your goals because you think they are unattainable! We’re trying to find out if and how they can be attained. Don’t admit defeat before you start.
4. Brainstorm a few actions you could take to achieve the goal(s).
5. Select the most promising action and create an entity that represents it. Write the entity as a simple sentence. This will help you imagine that you have already taken the action, so that you can explore its consequences. This entity is called the injection.
6. Put the injection entity at the bottom of the diagram
7. Put the goal entity (entities) at the top of the diagram.

Tip: write using present tense and don’t use tentative phrasing (maybe, might, possibly…), this will help you imagine the future.

Example:

Goals: Let’s try to do something about the problem described above. What are our goals?
- Releases are installed reliably, first time, each time.
- Installing a release takes less than ½ hour.

These are just the undesirable effects from the CRT, reversed.

Injection(s): How can we bring about these goals? We can’t do anything about the root cause (yet), because the way the company is organized is not something we can change (quickly). But… could we do something to involve developers in maintenance and admins in development? I propose two actions:
- Developer and admin pair-install the system
- Admins review the architecture of the application

(Diagram: injections “Pair install” and “Architecture review” at the bottom; goals “Release works” and “Release is fast” at the top.)

**Step 2: List consequences of the action**
1. Starting with the action, list the effects it has (remember, think and write in present tense).
2. Apply the categories of legitimate reservations after adding a few entities and relations.
3. If any of the effects are negative or undesirable, or if someone starts to raise objections, stop and examine the diagram:
   a. “You can’t do that!” If an action has a desirable effect, but someone thinks it’s impossible to perform that action, don’t argue. **You have discovered an obstacle.**
   b. “You don’t want that to happen!” If an action has an undesirable (side) effect, don’t argue. **You have discovered a negative branch reservation.**
4. Note the obstacles and negative branch reservations; we’ll revisit them in the next step.
5. If you get stuck reasoning forward from the actions to the goal(s), try to reason backwards from the goals, and vice versa.

Examples: What are some inferences we can draw from the actions?
- IF admin/developer pair-install THEN developer experiences installation problems firsthand
- IF developer experiences installation problems THEN developer is motivated to avoid these problems
- IF admin/developer pair-install THEN installation problems get resolved quickly, because developer knows application well
- IF problems get solved during install THEN admin learns about the system
- IF developer and admin pair-install THEN they get to know each other
- IF developer and admin know each other THEN they work together to improve the system
- IF developer and admin work together to improve the system AND admin learns about the system AND developer is motivated to avoid installation problems THEN they will make the next release easier to install, by automating more, by making the system simpler or by reducing configuration needs.
- IF developer and admin perform architecture reviews THEN they get to know each other AND the admin learns more about the system AND they can improve the system together.

(Diagram: from the injections “Pair install” and “Architecture review”, through “Get to know each other”, “Dev experiences problems”, “Dev motivated to avoid problems”, “Problems solved quickly”, “Admin learns” and “Improve system together”, up to the goals “Release works” and “Release is fast”.)

**Step 3: Yeah, but… Dealing with obstacles and negative branch reservations**
1. Dealing with **obstacles**: examine the reasoning behind the obstacle. Is there some other action you could take to remove the obstacle? If yes, add it to the diagram as an additional cause for the effect and note the assumption that this action removes the obstacle. If you see no immediate way to remove the obstacle, note the obstacle. You can try to apply a **Prerequisite Tree** to remove the obstacle.
2. Dealing with **negative branch reservations**: examine the reasoning leading to the negative effect using the legitimate reservations.
   a. If the effect requires more than one cause, is there a way to remove one of the causes by taking some action? If so, add the action and remove the unintended effect. Note the assumption that taking this action removes the cause and thus the effect.
   b. Add the opposite of the negative effect as a goal. Use the same techniques as for the other goals to find actions that bring about this goal. If you succeed in reaching the goal, you can leave off the undesirable effect. Add the new action as a prerequisite to the intended effects of the action that caused the undesirable effect you removed. (A small sketch of this bookkeeping follows below.)
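Here is that sketch: a rough illustration (hypothetical entity names, not the handout’s diagram) of propagating effects forward from an injection and flagging undesirable entities — each hit is a negative branch reservation to examine:

```python
# Hypothetical entity names; each effect maps to the effects it causes.
effects = {
    "Pair install": ["Dev experiences problems", "Get to know each other"],
    "Dev experiences problems": ["Dev hacks during install"],
    "Dev hacks during install": ["Installs less repeatable"],
    "Get to know each other": ["Improve system together"],
}
undesirable = {"Installs less repeatable"}

def negative_branches(injection):
    """Propagate forward from the injection; every undesirable entity
    reached is a negative branch reservation to examine."""
    seen, frontier, found = set(), [injection], []
    while frontier:
        entity = frontier.pop()
        if entity in seen:
            continue
        seen.add(entity)
        if entity in undesirable:
            found.append(entity)
        frontier.extend(effects.get(entity, []))
    return found

print(negative_branches("Pair install"))  # ['Installs less repeatable']
```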
---

**Examples:**

**Undesirable effects:**
1. If developers are involved in the installation, there will be even more hacks than before during installations. **Installations will become even less repeatable. You don’t want that.**
2. If developers are involved in the installation, the **developer spends time doing** (unplanned) work that’s not in their job description. **You don’t want that.**

**Obstacles:**
1. “**You can’t pair install**”, production servers are off-limits for developers, for obvious security and privacy reasons.
2. “**Developers and admins aren’t motivated to work together**”. Because the two organisations are separated, there’s an “over the wall” culture.

**Resolving the objections:**
- **Obstacle 1** can be resolved by changing the role of the developer: they are “observers”. The observer responds to questions of the admin and notes where the installation instructions are unclear. In both cases, the observer then updates the installation document. => **Change injection “Pair install” to “Developer observes admin”**.
- The previous change would also remove **Undesirable effect 1**: with this feedback, the installation document will become clearer and hacks will be required less often.
- To remove **Undesirable effect 2**, the PM would have to put this installation time in the plan. But even if he doesn’t, the developers are always idle between two releases, so there’s no real time loss. If our releases become faster, developers and admins have more time.

To remove **Obstacle 2**, the PM would have to motivate or tell developers and admins to work together. That’s feasible for the developers, but not the admins. A PM has no authority over people in other teams and organisations. There are two ways the PM can motivate developers and admins:
1. Involve admins from the start of the project, so that they know what they’re working on and their input is valued
2. Throw a small release party to celebrate the successful release. Use the relaxed atmosphere to perform an informal retrospective, to improve the next installation of the release.

If we perform these actions, the tree looks like this.

**Step 4: Getting to the goal and stabilizing with reinforcing loops**
1. Keep on applying the steps above to get to the goal(s) you set.
2. You might have to backtrack, remove actions, add other actions.
3. When you reach the goal, try to find a reinforcing loop. A reinforcing loop is a causal relation from an entity high up in the tree (near or at the goal) to an entity lower in the tree (nearer the actions you want to take).
   a. Examine the entities from top to bottom, starting with the goal
   b. Check if this entity could cause an entity lower down, from bottom to top.
   c. Apply the legitimate reservations if you find a candidate relation.
4. Reinforcing loops can help keep a goal “alive” by reinforcing the actions that bring about the goal. However, too much of a good thing can be bad: be aware of possible negative effects from repeatedly performing an action or strengthening a goal.

---

**Examples:**

It’s clear that there has to be a working release to have a party. Therefore, the release party should be at the top, caused by the working release. This is a cause high up in the tree that has an effect low down in the tree. That’s great to keep people motivated. Still, how do we get the system started? The PM has to motivate developers and admins somehow. Involving admins early in the process, so that they really feel part of the team, is a good way to do that.
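Finding a reinforcing loop amounts to finding a cycle in the cause-effect graph. A minimal sketch (simplified, hypothetical entity names; not from the handout):

```python
# Simplified, hypothetical entity names for the example tree.
effects = {
    "Release works": ["Release party"],
    "Release party": ["Team motivated"],
    "Team motivated": ["Improve installation"],
    "Improve installation": ["Release works"],  # closes the loop
}

def find_loop(node, path=None):
    """Depth-first search returning the first cycle found, if any."""
    path = path or []
    if node in path:
        return path[path.index(node):] + [node]
    for nxt in effects.get(node, []):
        loop = find_loop(nxt, path + [node])
        if loop:
            return loop
    return None

print(" -> ".join(find_loop("Release works")))
# Release works -> Release party -> Team motivated ->
#   Improve installation -> Release works
```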
What we’re doing here is to create a “cross-functional virtual team”. Some team members are permanent, like the developers of the project. Other members are part of the team for (part of) this release only, like admins or developers of other impacted projects. These people are part of many virtual teams. The PM has no formal authority over them, so the PM has to motivate those people to want to work for the team. It’s very important that every member has a clear view of the goal and knows how they participated in bringing this goal about. By creating these virtual teams, we are dealing with the root cause of the problems: “development and operations are separated”. We are in effect creating a matrix structure, which keeps the good parts of the separation (clear roles, security and privacy), but removes the bad results (“over the wall” mentality, poor knowledge of and attention to detail by developers about installation and maintenance). Even though we can’t remove the root cause itself, we can do something about its effects.

There is one dangerous point in this diagram: it’s up to the PM to get this system started and to keep it going (with the help of the release party). What if this injection falls away? One way of dealing with this would be to encode the other injections (“developer observes”, “architecture review” and “release party-retrospective”) in the standard process of the organisation. That would require some injections to spread the idea and to get it started. But afterwards, we hope the system becomes self-sustaining.

**The evaporating cloud**

Examine the reasoning behind two conflicting statements

**Step 1: Articulate the problem: where’s the conflict?**
1. Describe the system and its goal, if you haven’t already.
2. Are you sure you want to solve this problem?
3. State the two sides of the conflict as entities (D and D’)
4. State the goal of the system as an entity (A)
5. Add an entity B, so that: in order to achieve A, we need B. In order to achieve B we need D.
6. Add an entity C, so that: in order to achieve A, we need C. In order to achieve C we need D’.
7. You should have a diagram like this:

```
                     B. Requirement 1  <--  D. Conflict side 1
                   /                               ^
  A. Common goal <                                 | conflict
                   \                               v
                     C. Requirement 2  <--  D'. Conflict side 2
```

You should be able to read the diagram out loud like: “In order to have A, B must exist. We also need C in order to have A. We can’t get B, unless we have D. We must have D’ in order to have C. D and D’ are mutually exclusive, they cannot coexist.”

Example: This is the IT organisation of a large company. Developers want to release more often to bring value sooner and to reduce risk. Admins want to install fewer releases to reduce costs and to reduce risk. Both of these departments together want to provide systems that provide the best value for the lowest cost to the business.

```
                            B. High system value           <--  D. Release more often
                          /                                           ^
  A. Value for money    <                                             | conflict
     to the business      \                                           v
                            C. Low system cost & low risk  <--  D'. Release less often
```

“In order to provide good value for money to the business, developers must provide systems that provide high value at low risk (B->A). The admins must also ensure that these systems are installed and maintained at low cost and low risk (C->A). In order to have higher value and lower risk, developers need to release smaller releases, more often (D->B). In order to lower maintenance and installation costs and risk, admins need to make fewer changes to the systems (D’->C). Releasing more often AND releasing less often are mutually exclusive; they cannot co-exist” (D<->D’).
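The cloud, too, can be written down as data: five entities plus the assumptions attached to the arrows. A sketch with illustrative wording (the assumption texts are ours, not the handout’s):

```python
# Illustrative wording: the assumption texts below are ours, not the handout's.
cloud = {
    "A": "Value for money to the business",
    "B": "High system value",
    "C": "Low system cost & low risk",
    "D": "Release more often",
    "D'": "Release less often",
}
assumptions = {
    ("D", "B"): "smaller, more frequent releases deliver value sooner",
    ("D'", "C"): "every release is costly and risky to install",
    ("D", "D'"): "one release frequency must hold for all projects",
}

# Challenge each assumption in turn. If ("D'", "C") turns out to be false
# (releases can be made cheap and safe), requirement C no longer needs D'
# and the conflict evaporates -- the conclusion the analysis below reaches.
for (src, dst), text in assumptions.items():
    print(f"{cloud[src]!r} -> {cloud[dst]!r}: is it true that {text}?")
```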
**Step 2: Examine the diagram with the legitimate reservations**
1. Does the diagram satisfy the level 1 reservations:
   - Clarity?
   - Entity existence?
   - Causality existence?
2. Does the diagram satisfy the level 2 reservations:
   - Additional cause?
   - Sufficient cause?
   - Predicted effect?
3. Take note of any assumptions
4. Are D and D' really mutually exclusive?
   - Why can’t D and D’ co-exist? Note any interesting assumptions.
   - Why aren’t we allowed to have D and D’? Note the assumption.
   - Is there any overlap between D and D’? If so, can you separate them more cleanly, while holding on to the common part?

Example:

Clarity: what does release more often/less often mean? Typical projects now take around 6 months. Developers would like to release every 2 months.

Entity existence:

Causality existence:
- Does releasing more frequently increase value? Yes: business people have been asking for shorter releases, to be able to react faster to the competition.
- Does releasing less frequently reduce cost? Yes: systems admins have to spend time planning and executing the change. There are often problems during or shortly after a release, so that admins have to perform emergency fixes.
- Does releasing less frequently reduce risk? Yes: if you leave the systems alone, you don’t risk downtime or regression problems.

Additional cause:
- Is there another way to deliver value sooner, without releasing more often? We could make the system more configurable by users, so that they could make more changes without involving IT. But this is insufficient to be able to support all the features in the new releases.
- Is there another way to reduce risk and cost of installations, except not releasing? Maybe....

Sufficient cause: does releasing often suffice to create value? No: we must also ensure that the release contains high value features and that they work. Let’s assume this is the case.

Predicted effect: can we disprove “releasing less often reduces cost and risk”? Yes, if we can find projects that release often, yet are not costly or risky to install. Is this the case? Yes, there are one or two such projects. We should examine what they do differently. Why is it that most projects are risky and costly to install? We can examine this problem using a Current Reality Tree (see the start of this document). If we can find a way to make releases cheap and safe to install, we can remove D’, thus resolving the conflict.

Bibliography

Thinking for a Change: Putting the TOC Thinking Processes to Use – Lisa M. Scheinkopf (ISBN: 1574441019)
The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer – Jeffrey K. Liker
For more books about the subject, see: http://wiki.systemsthinking.net/Systemsthinking/BookList.html

Thank you for participating in this session.

Marc Evers
Piecemeal Growth
The Netherlands
http://www.piecemealgrowth.net
marc@piecemealgrowth.net

Pascal Van Cauwenberghe
Nayima
Belgium
http://www.nayima.be
pvc@nayima.be
{"Source-Url": "http://www.agilecoach.net/wp-content/uploads/2009/10/Thinking-for-a-Change-handout.pdf", "len_cl100k_base": 6130, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 32604, "total-output-tokens": 7141, "length": "2e12", "weborganizer": {"__label__adult": 0.00054168701171875, "__label__art_design": 0.0026149749755859375, "__label__crime_law": 0.0007266998291015625, "__label__education_jobs": 0.048797607421875, "__label__entertainment": 0.0001780986785888672, "__label__fashion_beauty": 0.000270843505859375, "__label__finance_business": 0.00853729248046875, "__label__food_dining": 0.0006747245788574219, "__label__games": 0.0014314651489257812, "__label__hardware": 0.0009660720825195312, "__label__health": 0.001277923583984375, "__label__history": 0.0006961822509765625, "__label__home_hobbies": 0.0007681846618652344, "__label__industrial": 0.0012607574462890625, "__label__literature": 0.003337860107421875, "__label__politics": 0.000949859619140625, "__label__religion": 0.0008521080017089844, "__label__science_tech": 0.0838623046875, "__label__social_life": 0.0009665489196777344, "__label__software": 0.0330810546875, "__label__software_dev": 0.806640625, "__label__sports_fitness": 0.0004396438598632813, "__label__transportation": 0.0009984970092773438, "__label__travel": 0.00035119056701660156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28795, 0.01008]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28795, 0.44363]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28795, 0.92802]], "google_gemma-3-12b-it_contains_pii": [[0, 2361, false], [2361, 3139, null], [3139, 4829, null], [4829, 6284, null], [6284, 9273, null], [9273, 9439, null], [9439, 10826, null], [10826, 12299, null], [12299, 14160, null], [14160, 16142, null], [16142, 16525, null], [16525, 19260, null], [19260, 19906, null], [19906, 21286, null], [21286, 22836, null], [22836, 25481, null], [25481, 27951, null], [27951, 28795, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2361, true], [2361, 3139, null], [3139, 4829, null], [4829, 6284, null], [6284, 9273, null], [9273, 9439, null], [9439, 10826, null], [10826, 12299, null], [12299, 14160, null], [14160, 16142, null], [16142, 16525, null], [16525, 19260, null], [19260, 19906, null], [19906, 21286, null], [21286, 22836, null], [22836, 25481, null], [25481, 27951, null], [27951, 28795, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28795, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28795, null]], "pdf_page_numbers": [[0, 2361, 1], [2361, 3139, 2], [3139, 4829, 3], [4829, 6284, 4], [6284, 9273, 5], 
[9273, 9439, 6], [9439, 10826, 7], [10826, 12299, 8], [12299, 14160, 9], [14160, 16142, 10], [16142, 16525, 11], [16525, 19260, 12], [19260, 19906, 13], [19906, 21286, 14], [21286, 22836, 15], [22836, 25481, 16], [25481, 27951, 17], [27951, 28795, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28795, 0.03236]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
8586b19f83ecc740ec2f9a92b8c1a9c48081231d
Section 02: Solutions

Section Problems

1. Comparing growth rates

(a) Simplify each of the following functions to a tight big-$O$ bound in terms of $n$. Then order them from fastest to slowest in terms of asymptotic growth. (By “fastest”, we mean which function increases the most rapidly as $n$ increases.)
- $\log_4(n) + \log_2(n)$
- $\frac{n}{2} + 4$
- $2^n + 3$
- $750,000,000$
- $8n + 4n^2$

Solution:
(i) $2^n + 3 = O(2^n)$
(ii) $8n + 4n^2 = O(n^2)$
(iii) $\frac{n}{2} + 4 = O(n)$
(iv) $\log_4(n) + \log_2(n) = O(\log(n))$
(v) $750,000,000 = O(1)$

(b) Order each of these more esoteric functions from fastest to slowest in terms of asymptotic growth. (By “fastest”, we mean which function increases the most rapidly as $n$ increases.) Also state a simplified tight $O$ bound for each.
- $2^{n/2}$
- $3^n$
- $2^n$

Solution:
- $3^n$, which is in $O(3^n)$
- $2^n$, which is in $O(2^n)$
- $2^{n/2}$, which is in $O\left(\sqrt{2}^n\right)$ (or $O(2^{n/2})$). Constant multipliers don’t matter in big-$O$ notation, but a constant factor in the exponent does matter, since it corresponds to multiplying by some constant to the $n^{th}$ power. Saying $2^{n/2}$ is in $O(2^n)$ would be true, but it would not be a tight bound.

2. True or false?
(a) In the worst case, finding an element in a sorted array using binary search is $O(n)$.
(b) In the worst case, finding an element in a sorted array using binary search is $\Omega(n)$.
(c) If a function is in $\Omega(n)$, then it could also be in $O(n^2)$.
(d) If a function is in $\Theta(n)$, then it could also be in $O(n^2)$.
(e) If a function is in $\Omega(n)$, then it is always in $O(n)$.

Solution:
(a) True (b) False (c) True (d) True (e) False

As a reminder, we can think about \( O \) informally as an upper bound. If a function \( f(n) \) is in \( O(g(n)) \), then \( g(n) \) is a function that dominates \( f(n) \), and this domination can be really overshooting the mark. Every (correct) piece of code we write in this class will have a running time that is \( O(n^{10^{10}}) \). Conversely, we can think about \( \Omega \) informally as a lower bound. If a function \( f(n) \) is in \( \Omega(g(n)) \), then \( f(n) \) is a function that dominates \( g(n) \), and this domination can be really overshooting the mark also. The running time of any piece of code is always in \( \Omega(1) \). And finally, \( \Theta \) is a much stricter definition: \( f(n) \) is in \( \Theta(g(n)) \) if and only if \( f(n) \) is in \( O(g(n)) \) and in \( \Omega(g(n)) \). Usually when people say \( O \), they mean \( \Theta \).

For questions a and b: note that binary search takes \( \log_2(n) \) time to complete. \( \log_2(n) \) is upper-bounded by \( n \), so \( \log_2(n) \in O(n) \). However, \( \log_2(n) \) is not lower-bounded by \( n \), which means \( \log_2(n) \in \Omega(n) \) is false.

3. Code to summation

For each of the following code blocks, give a summation that represents the worst-case runtime in terms of \( n \).

Solution:

(a)
```c
int x = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {
        x++;
    }
}
```
One possible solution is
\[ T(n) = 1 + \sum_{i=0}^{n-1} \sum_{j=0}^{i-1} 1 \]

(b)
```c
int x = 0;
for (int i = n; i >= 1; i /= 2) {
    x += i;
}
```
One possible solution is
\[ T(n) = 1 + \sum_{i=1}^{\log_2(n)} 1 \]

4. Code modeling

For each of the following code blocks, construct a mathematical function modeling the worst-case runtime of the code in terms of \( n \). Then, give a tight big-\( O \) bound of your model.
(a)
```c
int x = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n * n / 3; j++) {
        x += j;
    }
}
```

Solution: One possible answer is \( T(n) = \frac{n^3}{3} \). The inner loop performs approximately \( \frac{n^2}{3} \) iterations; the outer loop repeats that \( n \) times, and each inner iteration does a constant amount of work. So the tight worst-case runtime is \( \mathcal{O}(n^3) \). The exact constant you get doesn't matter here, since we'll ignore the constant when we put it into \( \mathcal{O} \) notation anyway. For example, saying we do 3 operations per inner-loop iteration (checking the loop condition, updating \( j \), and updating \( x \)) and getting \( n^3 \) instead of \( n^3/3 \) is also completely reasonable.

(b)
```c
int x = 0;
for (int i = n; i >= 0; i -= 1) {
    if (i % 3 == 0) {
        break;
    } else {
        x += n;
    }
}
```

Solution: The tightest possible big-\( \mathcal{O} \) bound is \( \mathcal{O}(1) \) because exactly one of \( n \), \( n - 1 \), or \( n - 2 \) will be divisible by three for all possible values of \( n \). So, the loop runs at most 3 times.

(c)
```c
int x = 0;
for (int i = 0; i < n; i++) {
    if (i % 5 == 0) {
        for (int j = 0; j < n; j++) {
            if (i == j) {
                x += i * j;
            }
        }
    }
}
```

Solution: While the inner-most if statement executes only once per loop, we must check if i == j is true once per each iteration. This will take some non-zero constant amount of time, so the inner-most loop will perform approximately n work. (Setting the constant factors equal to 1 is conventional, since constant factors can depend on things like system architecture, what else the computer is doing, the temperature of the room, etc.) The outer-most loop and if statement will perform n work during only 1/5th of the iterations and will perform a constant amount of work the remaining 4/5ths of the time. So, the total amount of work done is approximately \( \frac{n}{5} \cdot n + \frac{4n}{5} \cdot 1 \). If we simplify, this means we can ultimately model the runtime as approximately \( T(n) = \frac{n^2}{5} + \frac{4n}{5} \). Therefore, the tightest worst-case asymptotic runtime will be \( O(n^2) \).

(d)
```c
int x = 0;
for (int i = 0; i < n; i++) {
    if (n < 100000) {
        for (int j = 0; j < n; j++) {
            x += 1;
        }
    } else {
        x += 1;
    }
}
```

Solution: Recall that when computing the asymptotic complexity, we only care about the behavior for large inputs. Once \( n \) is large enough, we will only execute the second branch of the if statement, which means the runtime of the code can be modeled as just \( T(n) = n \). So, the tightest worst-case runtime is \( O(n) \).

(e)
```c
int x = 0;
if (n % 2 == 0) {
    for (int i = 0; i < n * n * n * n; i++) {
        x++;
    }
} else {
    for (int i = 0; i < n * n * n; i++) {
        x++;
    }
}
```

Solution: We can model the runtime of this function in the general case as:
\[ T_g(n) = \begin{cases} n^4 & \text{when } n \text{ is even} \\ n^3 & \text{when } n \text{ is odd} \end{cases} \]
Note that when we talk about worst-case analysis, the “cases” are different ways the code could run even after we know the value of \( n \). For this piece of code, once we know \( n \), there is only one way for the code to execute. So worst-case and best-case are identical for this function: it’s exactly \( T_g(n) \). Something interesting to note is that the model has differing tight big-\( O \) and tight big-\( \Omega \) bounds and therefore has no big-\( \Theta \) bound.
That is, the best big-\( O \) bound we can give for \( T_g(n) \) is \( T_g(n) \in \mathcal{O}(n^4) \); the best big-\( \Omega \) bound we can give is \( T_g(n) \in \Omega(n^3) \). These two bounds (\( n^4 \) and \( n^3 \)) are different, so there is no big-\( \Theta \) for \( T_g \).

5. Applying definitions

For each of the following, choose a \( c \) and \( n_0 \) which show \( f(n) \in \mathcal{O}(g(n)) \). Explain why your values of \( c \) and \( n_0 \) work.

Solution: These solutions are divided into “scratch work”, which is algebra you have to do before you start writing the proof, and the “proof” itself. The scratch work technically doesn’t belong in a final answer, but the proofs are difficult to understand without it. For these, the proof will just be the scratch work algebra, possibly done in a different order, with some connecting words.

(a) \( f(n) = 3n + 4, g(n) = 5n^2 \)

Solution:

**scratch work:** Our goal is to bound \( f \) by a function with \( n^2 \) terms so comparing to \( g \) is easier.
\[
\begin{align*}
3n &\leq 3n^2 = \frac{3}{5} \cdot 5n^2 & \text{if } n \geq 1 \\
4 &\leq 4n^2 = \frac{4}{5} \cdot 5n^2 & \text{if } n \geq 1
\end{align*}
\]
We add together the inequalities to get:
\[ f(n) = 3n + 4 \leq \left( \frac{3}{5} + \frac{4}{5} \right) 5n^2 = \frac{7}{5} g(n) \]

**proof:** One possible solution is \( c = \frac{7}{5} \) and \( n_0 = 1 \). We note that \( 3n \leq 3n^2 \) and \( 4 \leq 4n^2 \) as long as \( n \geq n_0 \). Adding these two inequalities, we have that \( f(n) = 3n + 4 \leq 7n^2 = \frac{7}{5} g(n) \) is true for all \( n \geq 1 \). Therefore, we know that \( 3n + 4 \leq c \cdot 5n^2 \) is true for our chosen value of \( c \) and for all \( n \geq n_0 \).

(b) \( f(n) = 33n^3 + \sqrt{n} - 6, g(n) = 17n^4 \)

Solution:

**scratch work:** Since \( g \)'s dominating term is \( n^4 \), we will try to bound \( f \) by a function with only \( n^4 \) terms. Going term by term of \( f \):
\[
\begin{align*}
33n^3 &\leq 33n^4 \text{ as long as multiplying by } n \text{ increases the function (i.e. as long as } n \geq 1\text{)} \\
\sqrt{n} &\leq n^4 \text{ as long as } n \geq 1 \\
-6 &\leq 0 \cdot n^4 \text{ (always)}
\end{align*}
\]
Combining these we get: \( 33n^3 + \sqrt{n} - 6 \leq 33n^4 + n^4 = 34n^4 \leq c \cdot 17n^4 \), where taking \( c = 2 \) is enough.

**proof:** One possible solution is \( c = 2 \) and \( n_0 = 1 \). We note that \( 33n^3 \leq 33n^4 \), \( \sqrt{n} \leq n^4 \), and \( -6 \leq 0 \cdot n^4 \) all hold for \( n \geq n_0 = 1 \). Next, note that \( 34n^4 \leq c \cdot 17n^4 \) is true for all values of \( n \) when \( c = 2 \). Therefore, we know that \( 33n^3 + \sqrt{n} - 6 \leq c \cdot 17n^4 \) is true for our chosen value of \( c \) and for all \( n \geq n_0 \).

(c) \( f(n) = 17 \log(n), g(n) = 32n + 2n \log(n) \)

**Solution:**

**scratch work:** There are a lot of ways to do this one. Normally we would compare to the highest order term in \( g \), but because the constant is larger on the \( n \) term, it will be easier to compare to that.
\[ 17 \log(n) \leq 17n, \quad \text{since } \log(n) < n \text{ whenever } n \geq 2. \]
Then we can compare immediately to \( g \):
\[ 17 \log(n) \leq 17n \leq 32n \leq 32n + 2n \log(n) \leq c(32n + 2n \log(n)) \]
where it’s good enough to set \( c \) to 1.

**proof:** One possible solution is \( c = 1 \) and \( n_0 = 2 \). We can convince ourselves this is true by examining our inequalities: \( 17 \log(n) \leq 17n \leq 1 \cdot 32n \) for \( n \geq n_0 \).
Since \( 2n \log(n) \) is always positive, \( 17 \log(n) \leq c \cdot (32n + 2n \log(n)) \) holds for our chosen values of \( c \) and \( n_0 \), so we know that \( f(n) \in O(g(n)) \).

6. Using our definitions

Most of the time in the real world, we don’t write formal big-\( O \) proofs. The point of having these definitions is not to use them every single time we think about big-\( O \). Instead, we use the formal definitions when a question is particularly tricky, or we want to make a very general statement. Here are some particularly tricky or general statements that are easier to justify with the formal definitions than with just your intuition.

(a) We almost never say a function is \( O(5n) \), we always say it is \( O(n) \) instead. Show that this transformation is ok, i.e. that if \( f(n) \) is \( O(5n) \) then it is \( O(n) \) as well.

**Solution:** Let \( f(n) \) be the running time of the function. Since \( f(n) \) is \( O(5n) \), there exist positive constants \( c, n_0 \) such that \( f(n) \leq c \cdot 5n \) for all \( n \geq n_0 \). We need to find positive constants \( c', n'_0 \) such that \( f(n) \leq c' \cdot n \) for all \( n \geq n'_0 \). If we look at the inequality we have, it seems like a good idea to take \( c' = 5c \) and \( n'_0 = n_0 \). Then plugging in we have: \( f(n) \leq c \cdot 5n = c' \cdot n \) for all \( n \geq n_0 = n'_0 \), which is what we needed to show \( f(n) \) is \( O(n) \).

(b) When we decide on the big-O running time of a function, we like to say that whatever happens on small n doesn’t matter. Let’s see why with an actual proof. You write two functions to solve the same problem: method1 and method2. method1 takes \( O(n^2) \) time and method2 takes \( O(n) \) time. What is the big-O running time of the following function:

```java
public void combined(n){
    if(n < 10000)
        method1(n);
    else
        method2(n);
}
```

Solution: Let’s denote the number of operations needed to run method2 by \( g(n) \). What does it mean that method2 runs in \( O(n) \) time? It means that there exist numbers \( c, n_0 \) such that for all \( n \geq n_0 \), \( g(n) \leq cn \). Let’s try to argue about combined. When \( n \geq 10000 \), how many operations does it do? Something like \( 2 + g(n) \). But we already know that \( g(n) \leq cn \) whenever \( n \geq n_0 \). Now let’s try to find a \( c', n_0' \) for combined. If \( n < 10000 \) we won’t be using method2, so we want to take \( n \geq 10{,}000 \). We also want to take \( n \geq n_0 \), so that we will be able to bound \( g(n) \). Set \( n_0' = \max\{10000, n_0\} \). For \( n \geq n_0' \), the number of operations is \( 2 + g(n) \). We have \( 2 + g(n) \leq 2g(n) \leq 2cn \) for all \( n \geq n_0' \) (assuming \( g(n) \geq 2 \) for such \( n \)). So if we take \( c' = 2c \), we have exactly the definition of \( O(n) \), so the running time of combined is \( O(n) \).

7. Memory analysis

For each of the following functions, construct a mathematical function modeling the amount of memory used by the algorithm in terms of \( n \). Then, give a tight big-O bound of your model.

(a)
```java
List<Integer> list = new LinkedList<Integer>();
for (int i = 0; i < n * n; i++) {
    list.insert(i);
}
Iterator<Integer> it = list.iterator();
while (it.hasNext()) {
    System.out.println(it.next());
}
```

Solution: We insert \( n^2 \) items into our linked list. Each inserted item will create a new node, which uses up a constant amount of memory. The iterator itself will only view the underlying data, without making a copy.
So, the overall memory usage can be modeled as:
\[ M(n) = \sum_{i=0}^{n^2-1} e \]
...where \( e \) is the amount of memory used per node. This is in \( O(n^2) \).

(b)
```java
int[] arr = {0, 0, 0};
for (int i = 0; i < n; i++) {
    arr[i]++;
}
```

Solution: While we iterate $n$ times, this algorithm only uses up a constant amount of memory. So, the overall memory usage can be modeled as roughly $M(n) = 3c$, where $c$ is the amount of memory used by each int in the array. This is in $\mathcal{O}(1)$.

(c)
```java
ArrayDictionary<Integer, String> dict = new ArrayDictionary<>();
for (int i = 0; i < n; i++) {
    String curr = "";
    for (int j = 0; j < i; j++) {
        for (int k = 0; k < j; k++) {
            curr += "?";
        }
    }
    dict.put(i, curr);
}
```

Note 1: For simplicity, assume the dictionary has an internal capacity of exactly $n$.
Note 2: The amount of memory used by a single character ($c$) and the amount of memory used by a single int ($x$) are both constant.
Note 3: An ArrayDictionary stores its key-value pairs contiguously, and performs scans through (potentially) the entire data structure when performing an insert() or a find().

Solution: This problem is best solved intuitively first, then rigorously after we know what to look for. We know that the loops run (from outer-most to inner-most) from $i = 0$ to $n$, $j = 0$ to $i$, and $k = 0$ to $j$. If we consider just $i$ and $j$, we can imagine a sort of “triangle” of values, where $i$ always iterates to $n$ overall but $j$ iterates to 0, then 1, then 2, on and on until $j$ iterates to $n$. This approximates to $M(n) = \frac{1}{2}n^2$, but because we don’t care about constants so much, this really approximates $O(n^2)$. The same logic follows if we include $k$, such that overall, this section of code approximates $O(n^3)$.

Now that we know what to expect, the rigorous derivation comes next. Note that the two nested loops ultimately construct a string of length $\sum_{j=0}^{i-1} \sum_{k=0}^{j-1} 1$, using $c$ memory per character. The code will then insert each string along with the int into the internal array. If we let $x$ represent the amount of memory used per int, we can model the overall memory usage as:
$$M(n) = \sum_{i=0}^{n-1} \left( x + \sum_{j=0}^{i-1} \sum_{k=0}^{j-1} c \right)$$
If we apply summation rules, we can simplify this into:
$$\sum_{i=0}^{n-1} \left( x + c \sum_{j=0}^{i-1} j \right) = \sum_{i=0}^{n-1} \left( x + c \, \frac{i(i-1)}{2} \right) = xn + \frac{c}{6}(n-2)(n-1)n$$
This is in $O(n^3)$.
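As a quick empirical sanity check of the model above (a sketch, not part of the original handout; Python used for brevity), the following script counts the characters built by the triple loop in 7(c) and compares the count against the closed form $\frac{(n-2)(n-1)n}{6}$ derived above:

```python
# Count the '?' characters appended by the triple loop in 7(c) and compare
# with the closed-form n(n-1)(n-2)/6 derived from the summation.
def chars_built(n):
    total = 0
    for i in range(n):
        for j in range(i):
            for k in range(j):
                total += 1  # one '?' appended per inner iteration
    return total

for n in (10, 20, 40):
    print(n, chars_built(n), (n - 2) * (n - 1) * n // 6)
# The two counts match exactly (e.g. n=10 gives 120 for both),
# confirming the cubic growth of the string memory.
```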
{"Source-Url": "https://courses.cs.washington.edu/courses/cse373/21sp/sections/section02-solutions.pdf", "len_cl100k_base": 5644, "olmocr-version": "0.1.49", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 33215, "total-output-tokens": 6492, "length": "2e12", "weborganizer": {"__label__adult": 0.0003528594970703125, "__label__art_design": 0.00021314620971679688, "__label__crime_law": 0.0003840923309326172, "__label__education_jobs": 0.0009365081787109376, "__label__entertainment": 6.54458999633789e-05, "__label__fashion_beauty": 0.0001494884490966797, "__label__finance_business": 0.00020778179168701172, "__label__food_dining": 0.0005106925964355469, "__label__games": 0.0009050369262695312, "__label__hardware": 0.0012493133544921875, "__label__health": 0.0006012916564941406, "__label__history": 0.00022971630096435547, "__label__home_hobbies": 0.0001443624496459961, "__label__industrial": 0.0004363059997558594, "__label__literature": 0.0002536773681640625, "__label__politics": 0.00022852420806884768, "__label__religion": 0.0005168914794921875, "__label__science_tech": 0.01495361328125, "__label__social_life": 8.285045623779297e-05, "__label__software": 0.003192901611328125, "__label__software_dev": 0.97314453125, "__label__sports_fitness": 0.0003628730773925781, "__label__transportation": 0.0005822181701660156, "__label__travel": 0.00022935867309570312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16918, 0.03734]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16918, 0.54876]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16918, 0.83723]], "google_gemma-3-12b-it_contains_pii": [[0, 1654, false], [1654, 3534, null], [3534, 4674, null], [4674, 6352, null], [6352, 8115, null], [8115, 10007, null], [10007, 12222, null], [12222, 14453, null], [14453, 14799, null], [14799, 16918, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1654, true], [1654, 3534, null], [3534, 4674, null], [4674, 6352, null], [6352, 8115, null], [8115, 10007, null], [10007, 12222, null], [12222, 14453, null], [14453, 14799, null], [14799, 16918, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16918, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16918, null]], "pdf_page_numbers": [[0, 1654, 1], [1654, 3534, 2], [3534, 4674, 3], [4674, 6352, 4], [6352, 8115, 5], [8115, 10007, 6], [10007, 12222, 7], [12222, 14453, 8], [14453, 14799, 9], [14799, 16918, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16918, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
eaac69492730fa586638856e18bdf79acce1318c
AI-guided Model-Driven Embedded Software Engineering

Padma Iyenghar1,2, Friedrich Otte1 and Elke Pulvermueller1
1Software Engineering Research Group, University of Osnabrueck, Germany
2innotec GmbH, Erlenweg 12, 49324 Melle, Germany

Keywords: Artificial Intelligence, Model-Driven Engineering (MDE), Embedded Software Engineering (ESE), Unified Modeling Language (UML), Software Development, MDE Tool.

Abstract: In this paper, a use case of Artificial Intelligence (AI) empowered Model Driven Engineering (MDE) in the field of Embedded Software Engineering (ESE) is introduced. In this context, we propose to qualify MDE tools for ESE with an AI assistant or a chatbot. The requirements for the first version of such an assistant and the design concepts involved are discussed. A prototype of such an assistant, developed using an open source conversational AI framework and used in tandem with an MDE tool for ESE, is presented. Empowering MDE tools with such AI assistants would aid novices in MDE, or even non-programmers, to learn and adopt model-driven ESE with a not-so-steep learning curve.

1 MOTIVATION

Artificial Intelligence (AI) is a sub-discipline of computer science which is used to supplement technical systems with the ability to process tasks independently and efficiently. With the help of learning algorithms, AI systems can continue learning during ongoing operations, through which the trained models are optimized and the data- and knowledge-bases extended. Recent studies on modelling the impact of AI on the world economy (Bughin et al., 2018) claim that AI has large potential to contribute to global economic activity. For instance, simulation studies in (Bughin et al., 2018) show that around 70 percent of companies may adopt at least one type of AI technology by 2030 and AI could potentially deliver additional economic output of around $13 trillion by 2030. AI is also starting to impact all aspects of the system and software development lifecycle, from their up-front specification to their design, testing, deployment and maintenance, with the main goal of helping engineers produce systems and software faster and with better quality while being able to handle ever more complex systems. Thus, AI is envisaged to help deal with the increasing complexity of systems and software.

In this context, the Model Driven Engineering (MDE) paradigm has been introduced in the recent decade with the goal of easing the developmental complexity of software and systems. In MDE, models are set at the center of every engineering process. Its target is to guarantee a significant rise in productivity, maintenance and interoperability. It is increasingly used in industry sectors such as Cyber Physical Systems (CPS), automotive and aviation, to name a few. Thus, MDE has been a means to tame, until now, a part of the aforementioned complexity of software and systems. However, its adoption by industry still relies on its capacity to manage the underlying methodological changes, and also the adoption of and training with new tools, with significant cost and time overhead. In the recently concluded workshop on AI and MDE (MDEIntelligence, 2021), it was identified that there is a clear need for AI-empowered MDE, which will push the limits of classic MDE and provide the right techniques to develop the next generation of highly complex model-based systems.
1.1 Collaboration of AI and MDE

The convergence of two separate fields in computer science such as MDE and AI can give rise to collaboration in two main ways: (a) AI-guided MDE and (b) MDE for AI. In the following, let us briefly touch upon the opportunities and challenges arising from the integration of AI and MDE for both (a) and (b).

For (a) AI-guided MDE, MDE can benefit from integrating AI concepts and ideas to increase its power, flexibility, user experience and quality. Some opportunities in this direction can be:
- AI planning applied to (meta-) modeling
- Using machine learning of models, meta-models and model transformations through search-based approaches
- Self-adapting code generators
- AI-based assistants such as bots or conversational agents and virtual assistants for MDE tools. AI-assistants for human-in-the-loop modeling, e.g. dialog-based optimization and support for modeling tasks, answering FAQs and tutorials using text-to-text and voice-to-text processing, etc.
- Natural language processing applied to modelling
- Semantic reasoning platforms over domain-specific models
- AI techniques for data, process and model mining and categorization

On the other hand, some challenges for (a) lie in the choice, evaluation and adaptation of AI techniques to MDE, such that they provide a compelling improvement to current system and software modeling and generation processes. AI-powered MDE should significantly increase the benefits and reduce the costs of adopting MDE. Furthermore, this step should also enable ease-of-use of MDE tools such that, for instance, they approach the ease of use and popularity of low-code platforms in the IT sector.

In the case of (b), using MDE for AI, AI can primarily benefit from MDE by integrating concepts and ideas from MDE such as:
- Model-driven processes for AI systems development
- Automatic code generation for AI libraries
- Model-based testing of AI artifacts
- Domain-specific modeling approaches for machine learning

Rather than significant challenges, it is expected that experts in MDE can make comprehensive inroads in the AI domain with their rich background and experience in applying MDE across various sectors.

1.2 MDE in ESE

Software development for embedded systems typically involves coding in the programming language C (or C++) for a specific microcontroller. However, in the recent decade, to cope with the growing complexity of software-intensive embedded systems, MDE has become an essential part of the analysis, design, implementation and testing of these systems. In the state-of-the-art, a large variety of software modeling practices are used in the domain of Embedded Software Engineering (ESE). A majority of them employ the Unified Modeling Language (UML) (OMG, 2021) as a first choice of graphical formal modeling language. A survey in (Akdur et al., 2018) claims that the top motivations for adopting MDE in ESE (e.g. for CPS development) are cost savings, achieving shorter development time, reusability and quality improvement. Several state-of-the-practice UML-based MDE tools are available in the ESE domain. While most of these tools claim to help build models quickly, edit programs graphically, generate source code automatically and design systems across platforms, they are not necessarily intuitive to immediately put to use in real-life projects after installing them (Sundharam et al., 2021).
This makes the learning curve steep and may introduce high cost and time overhead. A typical use case of AI-guided MDE can be foreseen for the aforesaid scenario. For instance, to gain higher acceptance of such MDE tools and bring them a step closer to, for example, a typical embedded software developer venturing into MDE, or even to a non-programmer/beginner learning model-driven ESE, AI-based assistants can be developed and employed together with these tools.

1.3 Novelties

Addressing the aforesaid aspect, we present the following novelties in this short paper:
- Introduce the concept of an AI-assistant or a chatbot for an MDE tool in the context of ESE.
- Define requirements for a first version of such a chatbot and elaborate on the design concepts of one requirement (step-by-step tutorial).
- Present a prototype of the AI assistant developed using RASA (Rasa: Open Source Conversational AI, 2021) for a state-of-the-practice MDE tool in ESE, namely SiSy (Simple System, 2021).

Following this introduction, related work is presented in Section 2. The requirements and design concepts are discussed in Section 3. A prototype is presented in Section 4 and Section 5 concludes this paper.

2 RELATED WORK

2.1 AI-based Assistants

A chatbot is an AI-based program or an assistant, designed to simulate conversation with human users. It uses Natural Language Processing (NLP) to communicate in human language by text or oral speech with humans or other chatbots. Chatbots offer users comfortable and efficient assistance; they provide engaging answers, directly responding to users' problems (Adamopoulou and Moussiades, 2020). A literature review in (Adamopoulou and Moussiades, 2020) presents the history, technology and applications of natural dialog systems or so-called chatbots. There are two main approaches to developing a chatbot, depending on the algorithms and techniques adopted, namely pattern matching and machine learning approaches (Adamopoulou and Moussiades, 2020), (Ramesh et al., 2017). A chatbot can be developed using programming languages like Java and Python or a chatbot development platform that may be commercial or open-source (Nayyar, D.A., 2019). Open-source platforms make their code available, and the developer can have full control of the implementation. Although commercial platforms do not give full control to developers, they may still benefit from data for efficient training of the chatbots. Open source platforms include Rasa (Rasa: Open Source Conversational AI, 2021), Botkit (Botkit: Building blocks for building bots, 2021), Chatterbot (Chatterbot python library, 2021), Pandorabots (Pandorabots: Chatbots with character, 2021) and Botlytics (Botlytics: Analytics for your bot, 2021). Commercial platforms include Botsify (Botsify - A Fully Managed Chatbot Platform To Build AI-Chatbot, 2021), Chatfuel (Chatfuel Chatbot solution, 2021) and Manychat (Manychat: Chat Marketing Made Easy with ManyChat, 2021). Designing highly functional NLUs requires expert knowledge in machine learning and natural language processing. For this reason, several vendors exist for NLU solutions that make it easier for developers to create programs with NLU. The currently most used and evaluated NLUs are (Abdellatif et al., 2021): Dialogflow from Google (Dialogflow, 2021), LUIS from Microsoft (LUIS-Language Understanding, 2021), Watson from IBM (Watson Assistant, 2021) and Rasa, which is open source (Rasa: Open Source Conversational AI, 2021).
These do not only consist of NLUs but also of components for building dialog managers, which makes them full-fledged chatbot frameworks. Until now, however, there are only scientific evaluations for the NLU part, because it is of greater importance and not every application that needs an NLU also needs a dialog manager. The results from several evaluation studies show that the performance of the NLU varies greatly depending on the content domain (Canonico and Russis, 2018), (Angara, 2018), (Shawar and Atwell, 2007). Therefore, it is important to determine the suitability of an NLU in the relevant domain. Rasa was specifically evaluated in the context of software engineering by using technical questions asked on Stack Overflow (Abdellatif et al., 2021). The performance of the NLUs varied from one aspect to another. There was no NLU that outperformed the others on every aspect. In an overall ranking, Watson was placed first, Rasa second, Dialogflow third and LUIS fourth. Watson performed best in intent classification and entity extraction, while Rasa performed best in confidence score. This means that an intent with a high confidence value was more often correct for Rasa than with the other NLUs. This makes Rasa very robust for different confidence thresholds and allows for effective fallback routines. Among the several open source conversational AI frameworks, we found Rasa to be comprehensive and easy-to-use, supported by its elaborate documentation. Hence, we chose the Rasa conversational AI framework to build the AI assistant (hereafter referred to as chatbot) for AI-guided MDE in our work. An introduction to the Rasa framework is not provided here due to space constraints. Interested readers are referred to (Rasa: Open Source Conversational AI, 2021), (Rasa architecture, 2021).

2.2 MDE in ESE: State-of-the-Practice

In the last decade, Model-Driven Architecture (MDA), introduced by the Object Management Group (OMG) (OMG, 2021), has been considered the next paradigm shift in software and systems engineering. Model-driven approaches aim to shift the development focus from programming language code to models expressed in proper domain-specific modelling languages. Thus, models can be understood, automatically manipulated by automated processes, or transformed into other artifacts. For instance, in the direction of adoption of a model-driven approach and the use of simulation-based techniques, significant effort has been spent in the last decade on easing the development and simulation of complex systems using UML/SysML models, in works such as (Bucciarelli et al., 2013), (Bucciarelli et al., 2019), (Sporer, 2015), (Mhenni et al., 2018), (Mhenni et al., 2014) and (Andrianarison and Piques, 2010), to mention a few. Some of these works are also joint efforts from industry and academia. However, one must admit that the shift from model-based (models used as mere diagrams) to a completely model-driven methodology (models used as central artifacts) in real-life projects in the industry has not yet taken place, especially in the ESE domain. For example, in a recent survey (van der Sanden et al., 2021), a position paper on model-driven Systems Performance Engineering (SysPE) for Cyber Physical Systems (CPS) is presented. The paper concludes that the state-of-practice is model-based and a transition to model-driven SysPE is needed to cope with the ever increasing complexity of today's CPS.
Some state-of-the-practice UML-based MDE tools in the embedded software domain include proprietary tools such as Rhapsody Developer (IBM Software, 2021), Enterprise Architect (Enterprise Architect tool, 2021) and SiSy (Simple System, 2021), and a free UML tool, Visual Paradigm (Visual Paradigm, 2021). In the non-UML domain, Matlab/Simulink (Mathworks Products, 2021) is among the most popular MDE tools employed in the embedded software domain. While most of these tools claim to help build models quickly, edit programs graphically, generate source code automatically and design systems across platforms, they are not necessarily intuitive to put to use in real-life projects immediately after installing them. This gap motivates the use case of AI-guided MDE introduced at the beginning of this paper: AI-based assistants developed and employed together with these tools can raise their acceptance and bring them a step closer to, for example, a typical embedded software developer venturing into MDE or a non-programmer/beginner learning model-driven ESE.

3 AI ASSISTANT FOR MDE TOOL

As mentioned in Section 1.1, MDE can benefit from integrating AI concepts to increase its power, flexibility, user experience and quality. For instance, a starting point can be the development of an AI assistant such as a chatbot, which can serve as a conversational virtual assistant for answering FAQs, guiding the user step by step through the tutorials available in the MDE tool, supporting modeling tasks and so on. Such a use case of AI-guided MDE would help make MDE tools easier to use (e.g., lessen the steep learning curve) and perhaps also help reduce the costs of adopting MDE.

3.1 Requirements

In the following, requirements for the envisaged AI assistant for an MDE tool applied in the context of ESE are outlined. The overall concepts and ideas discussed here could, however, be applied to an AI-based assistant for any MDE tool. Please note that, within the scope of this paper, we make use of the MDE tool SiSy (Simple System, 2021), which aims specifically at MDE for embedded software systems, and envisage the usage of the AI assistant with this tool.

3.1.1 R1: Step-by-Step Guided Tutorial

To ease the use of the MDE tool, the AI-based assistant can offer step-by-step guidance through the tutorials available in the tool.
- **R1.1-Piece-wise Tutorial**: The tutorial should be provided in a piece-wise manner, with step-by-step interactive instructions for the user, based on their comfort level.
- **R1.2-Questions at Any Time**: During such a conversation-based tutorial, the AI assistant should be able to answer questions at any time.
- **R1.3-Manipulate Tutorial State**: The user should be able to request any tutorial, and should also be able to stop or restart the running tutorial or switch to another tutorial. Restarting the tutorial can be useful if the user accidentally gave false information or skipped a tutorial step.
- **R1.4-Context Specific Tutorial**: The chatbot should provide content depending on the context.

3.1.2 R2: Frequently Asked Questions (FAQs)

The chatbot should be able to answer a list of FAQs. These are typically single-turn interactions, which means that the user asks a question and the chatbot answers in one turn, without additional context information or further questions.

3.1.3 R3: Ease of Use

The chatbot must be easy and intuitive to use. It must be clear to the user how to request tutorials and FAQs.
This also implies high robustness of the language understanding.

3.1.4 R4: Scalability

The chatbot is envisaged to accompany the MDE tool in the long run. Hence, it is important that it can be adapted easily, i.e., that content can be changed, added or removed in a simple and fast way.

3.1.5 R5: Integration

The AI assistant should be easily accessible while using the MDE tool in question. This can be achieved, for instance, by integrating it within the MDE tool or by making it accessible via the web.

3.1.6 R6: Continuous Improvement

Once deployed, the chatbot should continue to collect data and improve its language understanding and dialog management.

3.2 Design Challenges and Decision

This section discusses the design challenges for requirement 3.1.1 only (due to space constraints), to arrive at a design solution. A chatbot usually comprises two main components, namely the NLU and the dialog manager. In line with this idea, the proposed design and architecture of the chatbot introduced in this paper is shown in Figure 1.

Figure 1: Proposed design employing the message handling process of the Rasa framework.

The NLU is responsible for understanding the unstructured user input (text information), and the dialog manager controls the state and flow of the conversation. As seen in Figure 1, the NLU component takes care of intent classification and entity extraction. The dialog management component handles conversation tracking, ambiguity and error handling, and response generation. The user interface component receives the user input and communicates it to the NLU unit. Based on the extracted intents and entities, the dialog management component provides a response to the user interface component.

3.2.1 Handling Step-by-Step Tutorial

To provide a tutorial step by step as discussed in Section 3.1.1, the conversations have to be defined over multiple turns. In contrast, providing the tutorial in one block would be straightforward, because it reduces to a simple question/response pattern. The challenge of providing a step-by-step tutorial is increased by sub-requirement R1.2 in Section 3.1.1, which requires that questions from the end-user be answered at any time. This prevents a rigid sequential process where step one is followed by step two, then step three and so on. It implies that a dialog manager (cf. Figure 1) designed for multi-turn dialogues needs to be used, and that it should be possible to use slots to save information over multiple turns. Rasa's dialog management is designed for multi-turn dialogues insofar as former intents can be taken into account when the next action is decided, and it allows slots to save information over multiple turns. With enough training data, the dialog manager could learn to provide the tutorial steps in the correct order and flexibly react to other questions at the same time. But this approach would be very labor-intensive and there would be no guarantee of success. Hence, other approaches had to be designed: the first uses forms to manage these multi-turn dialogues, the second uses slots and custom actions. Both approaches have been designed and evaluated as part of the concept study phase and are elaborated below.

**Option 1: Form Approach.** One approach would be to use forms, since they already are a specific implementation of multi-turn dialogues.
Forms are a special type of action that allow defining slots that need to be filled. A form can thus be active over several utterances. As long as there is an empty slot within the form, the agent will ask a predefined question for that slot until it is filled, and then move to the next slot. If the user gives an utterance that cannot be used to fill the slot, the message will be handled by the dialog manager as if there were no active form. This means that the user can ask the same questions he or she normally could, as long as the message is not confused with valid slot input. Therefore, requirement R1.2 would be fulfilled. Forms are usually used to collect user information in a structured fashion. After the bot gives a response, it will return to the form and repeat the question for the currently empty slot. Figure 2 shows how a form could be used to implement step-by-step tutorials by using slots in a slightly different way.

Figure 2: Flowchart diagram of the form approach option to implement step-by-step tutorials.

The form starts after the NLU component detects the intent to start a tutorial. Then, the first step of the tutorial is provided, and the user is asked to fill the first slot with something like "Did you succeed?" or "Do you want to continue to the next step?". Only if the user input is interpreted as an affirmative intent, by giving a message like "yes" or "please continue", will the form continue to the next step. If no affirmative intent is detected, the message will be handled by the dialog manager. This is repeated until all tutorial steps have been provided. If necessary, the first slots can be used to collect context information like the controller type, to fulfill requirement R1.4. A prototype evaluation of this approach has shown that it does fulfill requirements R1.1, R1.2 and R3, but it has also revealed some disadvantages: for each tutorial, a form has to be created, and for each tutorial step, an individual slot has to be created. Furthermore, for each form, a FormValidationAction that includes the validation logic for each slot has to be defined. This makes adding content very time-consuming and conflicts with requirement R4. The effort could probably be mitigated by a script that adds all these slots and the validation logic automatically, but such a script would have to be updated every time Rasa changes the domain syntax or the form structure.
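To illustrate the per-slot effort involved, the following is a minimal sketch of such a validation action using rasa_sdk; the form, slot and intent names are hypothetical assumptions for illustration and are not taken from the actual prototype.

```python
# Hedged sketch of a FormValidationAction for one tutorial form (Option 1).
# Form, slot and intent names are illustrative assumptions.
from typing import Any, Dict, Text

from rasa_sdk import FormValidationAction, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ValidateLedTutorialForm(FormValidationAction):
    def name(self) -> Text:
        return "validate_led_tutorial_form"

    def validate_step_1_done(
        self,
        slot_value: Any,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> Dict[Text, Any]:
        # Accept the slot only on an affirmative answer; rejecting it
        # (returning None) lets the dialog manager handle the message,
        # e.g. to answer an interposed question (requirement R1.2).
        if tracker.latest_message.get("intent", {}).get("name") == "affirm":
            return {"step_1_done": True}
        return {"step_1_done": None}

    # One such validate_<slot> method would be needed for every tutorial
    # step, which is why this approach scales poorly (requirement R4).
```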
**Option 2: Custom Actions Approach.** Another solution could be to use custom actions to provide the next tutorial step and slots to save the current state of the tutorial. The tutorial state could be modeled by the current tutorial name and the current tutorial step. When a tutorial is requested, a tutorial-specific custom action is executed. This allows handling tutorial-specific business logic, like different content depending on the controller type. A specially created "next" intent is used to trigger a custom action "Tutorial dispatcher" that evaluates the current tutorial slot and calls the respective custom action. This approach is shown in Figure 3. As in Option 1, the process begins after the NLU module detects the intent to start a tutorial. The custom action increments the tutorial step counter slot and sets the slot that defines the current tutorial. If context information is needed to provide the tutorial, a form is run and all the necessary information is collected. Otherwise, the agent provides the first step of the tutorial directly. When the user says something like "next", the "Tutorial Dispatcher" is called. The purpose of this action is to read the "current tutorial" slot and dispatch to the tutorial-specific action; it could also be used to handle tutorial switches and edge cases. Calling the tutorial-specific action increments the tutorial step counter again and provides the next step. If the user has a question, the bot answers it and the slots remain untouched. The tutorial dispatcher decides which custom action to start, depending on the current tutorial slot. If a tutorial needs to ask for user information, forms can be used: they can be invoked by the custom action before the first step, so the collected information can be used in the subsequent steps.

**Decision.** As seen above, two approaches were evaluated for requirement R1 in Section 3.1.1. Although the form approach utilizes an existing Rasa mechanism, it comes with considerable disadvantages: the process of adding new tutorials would be very complicated, and there is no obvious way to remedy this. The second approach, using custom actions, has a similar problem, because a custom action has to be added for each tutorial. But here the implementation effort can be reduced by making use of object inheritance.
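To make the inheritance idea concrete, the following minimal sketch (again assuming rasa_sdk; the slot, action and response names are illustrative, not the actual prototype code) shows how per-tutorial actions could share one base class:

```python
# Sketch of the custom-actions approach (Option 2) with object inheritance.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


class TutorialAction(Action):
    """Base class: subclasses only declare a tutorial name and step count."""

    tutorial = "generic"
    max_steps = 0

    def name(self) -> Text:
        return f"action_{self.tutorial}_tutorial"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        step = int(tracker.get_slot("tutorial_step") or 0) + 1
        if step > self.max_steps:
            dispatcher.utter_message(response="utter_tutorial_done")
            return [SlotSet("current_tutorial", None),
                    SlotSet("tutorial_step", 0)]
        # Assumed response naming scheme: utter_<tutorial>_step_<n>.
        dispatcher.utter_message(response=f"utter_{self.tutorial}_step_{step}")
        return [SlotSet("current_tutorial", self.tutorial),
                SlotSet("tutorial_step", step)]


class LedTutorialAction(TutorialAction):
    tutorial = "led"   # adding a tutorial is just a small subclass
    max_steps = 4


class ActionDispatchTutorials(Action):
    """Triggered by the 'next' intent; forwards to the active tutorial."""

    def name(self) -> Text:
        return "action_dispatch_tutorials"

    def run(self, dispatcher, tracker, domain):
        handlers = {"led": LedTutorialAction()}
        current = tracker.get_slot("current_tutorial")
        if current in handlers:
            return handlers[current].run(dispatcher, tracker, domain)
        dispatcher.utter_message(response="utter_no_active_tutorial")
        return []
```

With a structure like this, registering a new tutorial would amount to one small subclass plus its utter_* responses.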
4 PROTOTYPE

The design alternatives and corresponding design decisions for the requirements mentioned in Sections 3.1.1-3.1.6 have been implemented in the chatbot prototype. A first version of the prototype can assist the MDE tool (SiSy) user with a set of tutorials and answer a set of FAQs, as seen in Figure 4. Due to space limitations, only the implementation specifics of requirement R1 in Section 3.1.1 are presented in this section.

4.1 Step-by-Step Tutorial

The SiSy tool provides various tutorials to learn model-driven implementation of embedded software. In the first version of the prototype, three tutorials are provided (Figure 4). This section presents the design and implementation of a message handling mechanism in which a multi-step tutorial can be followed while the chatbot is able to answer questions at any time. Figure 5 shows the basic structure of the tutorial implementation, using the LED tutorial (i.e., toggling LEDs on the embedded target) as an example, and is described below. Further, the series of steps of the step-by-step LED tutorial provided piece-wise by the chatbot for the MDE tool SiSy (as a conversation-based tutorial) is shown in Figure 6 and Figure 7.
- A TutorialHandler class has been implemented to reduce the implementation effort involved in adding new tutorials (i.e., for scalability, extensibility and continuous improvement). Adding a tutorial always requires a new custom action, and these actions are very similar in their attributes and functions (cf. the sketch above).
- To add a new tutorial, one has to create a new subclass of the TutorialHandler class and override the name and init functions. Note that the tutorials are split into multiple text messages and presented to the user based on the user input during the conversation with the chatbot.
- For instance, for the LED tutorial, a custom action handle_led_tutorial has been created (see actions.py in Figure 5). The function of this custom action is to provide the next step of the LED tutorial. The action gets the current step of the LED tutorial by querying the tracker for a specific slot. If the action is called for the first time, it first starts a form.
- The form defines all information that is needed from the user in order to provide the correct tutorial steps. Here the user is asked which controller type he has. Based on the user input (and if the current step is greater than zero and smaller than the maximum number of steps), the next tutorial step is uttered. The name of the utterance for each step follows the pattern `utter_<tutorial name>_step_<number>`.
- Based on the information extracted from the user input, the form could be used here to provide fine-grained instructions tailored to the user. If the total number of steps of a tutorial is reached, a congratulation message is uttered.
- In addition, two rules have been added for the dialog manager (see rules.yml in Figure 5). The first rule: whenever there is a 'request led form' intent, the LED form handler is called; this is how the tutorial is started. The second rule: whenever there is a 'next' intent, the 'dispatch tutorials' action is called; it checks which tutorial is active and dispatches to the correct action.

The architecture described above is flexible, since it allows the user to switch between multiple tutorials: whenever the user wants to change the current tutorial, he or she can type the name of the desired tutorial and it will start.

5 CONCLUSION

In this paper, a use case of AI-guided MDE is presented. A proposal to qualify MDE tools for ESE with an AI assistant or chatbot is outlined. The requirements of a conversational chatbot which uses text (for input and output) and images (output only) as the conversational medium were elaborated. The design alternatives and design choices made to fulfil one of the requirements (step-by-step tutorial) are presented in this short paper. In the prototype, the design of a basic structure for the message handling mechanism of the chatbot is described. It was showcased how a multi-step tutorial can be followed by the user and how the chatbot is able to answer questions at any step. This is only the tip of the iceberg; the bot can be improved further in numerous ways. One possibility is to make use of training data for the dialog manager, with the so-called stories, so that the bot can help the user at a specific tutorial step even if the user does not ask for specific information. For example, an 'unspecific problem' intent could be added with example messages like 'It does not work', 'nothing works', 'how does this work?', 'I have a problem' and so on. In a next step, the stories could be revised, and the responses the chatbot should give in the exact situation (depending on the current tutorial, the current step or even slot values) could be added to the stories. After enough examples, the dialog manager learns to provide the necessary information even if the user does not ask for it specifically, thereby achieving a truly AI-guided model-driven development experience.

Figure 6: Steps (a), left, and (b), right, of the step-by-step LED tutorial provided piece-wise by the chatbot for the MDE tool SiSy.

Figure 7: Steps (c), left, and (d), right, of the step-by-step LED tutorial provided piece-wise by the chatbot for the SiSy tool.

REFERENCES
{"Source-Url": "https://www.scitepress.org/Papers/2022/110062/110062.pdf", "len_cl100k_base": 6646, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 28505, "total-output-tokens": 8910, "length": "2e12", "weborganizer": {"__label__adult": 0.0003690719604492187, "__label__art_design": 0.0005726814270019531, "__label__crime_law": 0.0002872943878173828, "__label__education_jobs": 0.0009307861328125, "__label__entertainment": 8.58306884765625e-05, "__label__fashion_beauty": 0.00017368793487548828, "__label__finance_business": 0.00024771690368652344, "__label__food_dining": 0.00031495094299316406, "__label__games": 0.0007452964782714844, "__label__hardware": 0.0017871856689453125, "__label__health": 0.0004248619079589844, "__label__history": 0.0002598762512207031, "__label__home_hobbies": 9.72747802734375e-05, "__label__industrial": 0.0005679130554199219, "__label__literature": 0.00023734569549560547, "__label__politics": 0.0002243518829345703, "__label__religion": 0.0005097389221191406, "__label__science_tech": 0.051788330078125, "__label__social_life": 7.557868957519531e-05, "__label__software": 0.00696563720703125, "__label__software_dev": 0.93212890625, "__label__sports_fitness": 0.00026869773864746094, "__label__transportation": 0.0006418228149414062, "__label__travel": 0.00017976760864257812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36450, 0.0383]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36450, 0.58132]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36450, 0.89236]], "google_gemma-3-12b-it_contains_pii": [[0, 3937, false], [3937, 8400, null], [8400, 13632, null], [13632, 17941, null], [17941, 22181, null], [22181, 25788, null], [25788, 27901, null], [27901, 30888, null], [30888, 31149, null], [31149, 36450, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3937, true], [3937, 8400, null], [8400, 13632, null], [13632, 17941, null], [17941, 22181, null], [22181, 25788, null], [25788, 27901, null], [27901, 30888, null], [30888, 31149, null], [31149, 36450, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36450, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36450, null]], "pdf_page_numbers": [[0, 3937, 1], [3937, 8400, 2], [8400, 13632, 3], [13632, 17941, 4], [17941, 22181, 5], [22181, 25788, 6], [25788, 27901, 7], [27901, 30888, 8], [30888, 31149, 9], [31149, 36450, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36450, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
31185b1381db552b73a4d20627df3870267d658c
EFFICIENT MULTICORE SCHEDULING OF DATAFLOW PROCESS NETWORKS

Hervé Yviquel, Emmanuel Casseau (IRISA, University of Rennes 1, 6 rue de Keramont, 22300 Lannion, France; {firstname.lastname}@irisa.fr)
Matthieu Wipliez, Mickaël Raulet (IETR, INSA of Rennes, 20 avenue des buttes de Coësmes, 35200 Rennes, France; {firstname.lastname}@insa-rennes.fr)

ABSTRACT

Although multi-core processors are now available everywhere, few applications are able to truly exploit their multiprocessing capabilities. Dataflow programming attempts to solve this problem by expressing explicit parallelism within an application. In this paper, we describe two scheduling strategies for executing a dataflow program on a single-core processor. We also describe an extension of these strategies to multi-core architectures using distributed schedulers and lock-free communications. We show the efficiency of these scheduling strategies on MPEG-4 Simple Profile and MPEG-4 Advanced Video Coding decoders.

Index Terms— Dataflow computing, Multicore processing, Scheduling algorithm, Distributed algorithm, Lock-free multithreading

1. INTRODUCTION

Since processor frequency is bounded by physical constraints like power dissipation, multi-core architectures have become the solution that allows performance to keep growing as described by Moore's law. These architectures present an interesting challenge: producing applications which fully exploit the parallelism provided by these processors. Several programming languages, extensions and models, like Occam [1], the Message Passing Interface [2] or Algorithmic Skeletons [3], allow parallelism to be expressed in applications. Most of them assume a specific underlying hardware architecture and are generally inefficient on other platforms. Dataflow programming is an attractive candidate for designing parallel applications in an architecture-agnostic way. A dataflow program is composed of atomic processing blocks that communicate with each other through communication channels. Such a representation explicitly describes task-level parallelism within the application. The behavior of this kind of program is governed by a Model of Computation (MoC) which specifies a set of rules regarding the execution of the program. Several dataflow MoCs exist, with different purposes and degrees of expressiveness, such as Synchronous Dataflow (SDF), Kahn Process Networks (KPN) [9] and Dataflow Process Networks (DPN) [8]. Dynamic dataflow models like KPN and DPN are very useful to describe the behavior of streaming applications. Contrary to SDF and similar models that consume and produce data in a static way, KPN and DPN models may have a data-dependent behavior, i.e. the quantity of data consumed and produced by the processes may depend on the values of the data.
This paper proposes several scheduling techniques to efficiently execute dynamic dataflow programs on single-core and multi-core processors. Indeed, determining a schedule of a dynamic dataflow program is not possible at compile time (it is equivalent to the halting problem, see [4]). This paper makes the following contributions:
• We give a formal definition of a round-robin policy for scheduling dataflow process networks on single-core architectures that has been successfully used in practice, see [5] and [6] (Section 3.1).
• We present an efficient strategy to dynamically schedule the actors of dataflow process networks on single-core architectures (Section 3.2). A clever scheduling strategy is key to ensuring that a complex application with many processing blocks can be executed efficiently.
• We propose a distributed and lock-free extension of these two dynamic scheduling strategies to multi-core architectures (Section 4), based on Lamport's work [7] concerning lock-free communication channels between distributed processes.

This paper presents results obtained with our scheduling strategies for the execution of two video decoders (MPEG-4 Simple Profile and MPEG-4 Advanced Video Coding) on a multi-core processor.

2. BACKGROUND

This section presents the dynamic dataflow model called dataflow process network and how applications described by this model can be scheduled.

2.1. Dataflow Process Networks

A Dataflow Process Network (DPN) is a model of computation [8] which can be described as follows: a set of processes called actors communicate through unidirectional and unbounded FIFO channels, called data-fifos, connected to ports of the actors. A data-fifo has only one source port but may have several target ports. In the dataflow approach, the communication between actors corresponds to a stream of data composed of a list of tokens. Figure 1 presents a network of five actors linked by data-fifos. DPNs can be considered a generalization of the well-known Kahn Process Networks (KPNs) [9]. The execution of an actor corresponds to the mapping of input tokens to output tokens, applied repeatedly and sequentially on one or more data streams. This mapping is composed of three ordered steps: data reading, then the computational procedure, and finally data writing. These repeated mappings are called actor firings and are guarded by a set of firing rules which specify when an actor can be fired. The firing rules specify precisely the number and the values of tokens that must be available on the input ports to fire the actor. This is why DPNs can describe nondeterministic algorithms, which is not possible with the KPN model.
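As a point of reference for the scheduling sketches further below, the following minimal Python rendering models an actor with bounded data-fifos; the interfaces (fire, compute, source, targets) are illustrative assumptions, not the API of an actual DPN runtime.

```python
# Illustrative model of a DPN actor with a simple firing rule.
class DataFifo:
    def __init__(self, capacity):
        self.tokens, self.capacity = [], capacity
        self.source = None     # the single producing actor
        self.targets = []      # possibly several consuming actors

    def is_empty(self):
        return not self.tokens

    def is_full(self):
        return len(self.tokens) >= self.capacity


class Actor:
    def __init__(self, inputs, outputs):
        self.inputs, self.outputs = inputs, outputs

    def firing_rule(self):
        # Real DPN firing rules may also inspect token *values*,
        # which is what makes the schedule data-dependent.
        return (all(not f.is_empty() for f in self.inputs)
                and all(not f.is_full() for f in self.outputs))

    def compute(self, data):
        raise NotImplementedError  # actor-specific mapping of tokens

    def fire(self):
        """One firing: read, compute, write. Returns False if blocked."""
        if not self.firing_rule():
            return False
        data = [f.tokens.pop(0) for f in self.inputs]
        for f, token in zip(self.outputs, self.compute(data)):
            f.tokens.append(token)
        return True
```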
2.2. Scheduling of Dataflow Process Networks

DPNs are more suitable than KPNs for being scheduled on processors when there are more processes than processor cores, because no context has to be saved between actor executions. Indeed, the execution of an actor is described by a sequence of actor firings, and actor switching can only happen between two firings, so only state variables need to be saved; in particular, it is not necessary to save the execution stack and other contextual information like registers. Consequently, it is possible to reduce the overhead of scheduling a dynamic dataflow program by using a user-level scheduler rather than relying on threads scheduled by the operating system kernel. In [6], Von Platen shows an impressive acceleration from 3.4 to 105 frames per second (FPS) for a video decoder after switching to a user-level scheduler that uses a single thread to schedule the actors. Scheduling an actor with this method is much more efficient than using threads, because the kernel does not need to perform a context switch each time an actor is scheduled; a function call suffices. The data-fifos used throughout this paper are statically bounded so that they can be implemented in finite memory and avoid the additional overhead of dynamic memory allocation.

3. SINGLE-CORE SCHEDULING STRATEGIES

A dynamic dataflow program cannot be fully scheduled by static methods, so dedicated strategies need to be developed to schedule the actors during execution. The following strategies are designed to execute a DPN-based application on a single-core architecture, which handles the execution of only one actor at a time.

3.1. Round-robin scheduling strategy

Round-robin is a simple scheduling strategy that continuously goes over the list of actors: the scheduler evaluates the firing rules of each actor, executes the actor if a rule is met, and continues to execute the same actor until no firing rule is met. This scheduling policy guarantees each actor an equal chance of being executed, and avoids deadlock and starvation. Contrary to classical round-robin scheduling, there is no notion of time slice: an actor is executed until it cannot fire anymore, in order to minimize the number of actor switches and consequently the scheduling overhead. The actor eventually becomes unfireable because the data-fifos, having bounded sizes, end up full or empty. Figure 2 shows an application of this round-robin scheduling to the example presented in Fig. 1: the scheduler executes the actors in a circular order, i.e. the five actors A1, A2, A3, A4 and A5 are successively executed, then the scheduler starts again from A1, and so on.

3.2. Data-driven / demand-driven scheduling strategy

The data-driven / demand-driven strategy is a more advanced strategy for scheduling dynamic dataflow programs. Indeed, the round-robin strategy schedules actors unconditionally, i.e. the firing rules of an actor are checked even if they are all invalid, in which case no computation is performed. As a result, the round-robin strategy becomes inefficient for complex applications containing many actors and a lot of control communication. The data-driven / demand-driven scheduling strategy is based on the well-known data-driven and demand-driven principles [10]. On the one hand, the data-driven policy executes an actor when its input data have to be consumed to unblock the execution of the preceding actor. On the other hand, the demand-driven policy executes an actor when its output is needed by another actor. Two types of events can block an actor's execution, each implying a different scheduling decision:
- When an actor is blocked because an input data-fifo is empty, the demand-driven policy is applied and the scheduler executes the predecessor of this data-fifo.
- When an actor is blocked because an output data-fifo is full, the data-driven policy is applied and the scheduler executes the successors of this data-fifo; indeed, a data-fifo can be connected to several target ports (see Section 2.1).

Contrary to the round-robin algorithm, a dynamic list of the next schedulable actors is needed. The behavior of this schedulable list is illustrated in Fig. 3. When an actor is blocked during its execution, the empty or full data-fifos are identified and their associated predecessors or successors are added to the schedulable list. The actor to be executed next corresponds to the next entry in the schedulable list.
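A single-core rendering of this strategy, reusing the Actor/DataFifo sketch from Section 2, could look as follows; seeding the list with every actor once and the method names are illustrative assumptions, not Orcc's actual runtime code.

```python
# Illustrative single-core data-driven / demand-driven scheduling loop.
from collections import deque


def data_demand_driven(actors):
    schedulable = deque(actors)        # seed the list with every actor once
    while schedulable:
        actor = schedulable.popleft()
        while actor.fire():            # fire until no firing rule is met
            pass
        for fifo in actor.inputs:
            if fifo.is_empty() and fifo.source is not None:
                schedulable.append(fifo.source)      # demand-driven
        for fifo in actor.outputs:
            if fifo.is_full():
                schedulable.extend(fifo.targets)     # data-driven
```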
4. DISTRIBUTED AND LOCK-FREE MULTI-CORE SCHEDULING

This section describes a distributed and lock-free multi-core scheduling technique to execute dynamic dataflow programs using the round-robin and data-driven / demand-driven strategies.

4.1. Distributed scheduler

A distributed scheduler is designed to execute applications on a multi-core architecture. Several local schedulers are executed concurrently, one on each processor core. This design avoids the use of a dedicated thread to manage the scheduling of the application. A static partitioning of the actors onto the processor cores is needed to run our multi-core scheduler. On the one hand, the round-robin strategy goes over a static list of actors, so its multi-core extension needs this static mapping of the actors onto the cores. On the other hand, the data-driven / demand-driven strategy could work with a dynamically computed mapping, but (1) a static mapping allowed us to develop the multi-core extension by tackling one problem at a time, and (2) dynamic mapping is considered future work. Figure 4 presents a possible mapping of five actors onto a dual-core processor.

Fig. 4: Mapping example of a network on processor cores

To form the distributed scheduler, one thread is created for each available processor core and pinned to that core. Each round-robin scheduler executes the subset of actors that are mapped onto its associated core. Figure 5 shows an example of the distributed scheduler with the round-robin strategy.

Fig. 5: Distributed multi-core scheduler using round-robin

The multi-core version of the data-driven / demand-driven strategy is realized in the same way as the round-robin strategy. The difference is that, with our static mapping, the predecessors and successors of a given actor can be mapped to a different core than the actor itself. This requires communication between the different threads to schedule all the actors. Figure 4 illustrates this: if A1 is blocked during its execution because the data-fifo f2 is full, then the scheduler has to add A3 to the list of schedulable actors. However, A3 is managed by another scheduler, so an inter-scheduler communication is needed. In a multi-core context, we use a combined version of the round-robin and data-driven / demand-driven strategies to avoid starvation in our distributed algorithm. Indeed, contrary to the single-core version, the data-driven / demand-driven strategy cannot guarantee at all times that each local schedulable list is non-empty. The scheduler applies the data-driven / demand-driven policy until its schedulable list is empty, and then the round-robin policy is used until the schedulable list again contains at least one actor. The algorithm is presented in Fig. 6.

```
CombinedScheduling()
begin
  while true do
    if isEmpty(schedulable) then
      actor = getNext(RoundRobin);
    else
      actor = getNext(DataDemandDriven);
    fi;
    fire(actor);
    if ¬RoundRobin ∨ ||firings|| > 0 then
      if isEmpty(actor.inputs) then
        addPredecessors(actor);
      else
        addSuccessors(actor);
      fi;
    fi;
  od;
end
```

Fig. 6: Combined scheduling algorithm
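In terms of the earlier sketches, one core's combined loop could be rendered as below; the RoundRobin test of Fig. 6 is condensed into a flag, the predecessor/successor bookkeeping follows the per-fifo description of Section 3.2, and the actor interfaces remain illustrative assumptions.

```python
# Illustrative per-core combined scheduling loop (cf. Fig. 6).
from collections import deque
from itertools import cycle


def combined_scheduling(local_actors):
    round_robin = cycle(local_actors)   # static list of this core's actors
    schedulable = deque()
    while True:
        from_round_robin = not schedulable
        actor = next(round_robin) if from_round_robin else schedulable.popleft()
        firings = 0
        while actor.fire():
            firings += 1
        # Mirrors "¬RoundRobin ∨ ||firings|| > 0": follow the blocked fifos
        # unless the actor came from round-robin and did not fire at all.
        if not from_round_robin or firings > 0:
            for fifo in actor.inputs:
                if fifo.is_empty() and fifo.source is not None:
                    schedulable.append(fifo.source)      # demand-driven
            for fifo in actor.outputs:
                if fifo.is_full():
                    schedulable.extend(fifo.targets)     # data-driven
    # In the distributed version, actors mapped to another core would be
    # sent through a scheduling-fifo instead (cf. Section 4.2).
```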
4.2. Lock-free inter-core communications

Lock-free communication between the distributed schedulers is used to avoid thread synchronization. Indeed, the fine granularity of the actors makes actor scheduling a critical part of the execution, so even the smallest overhead can have disastrous consequences on performance. Moreover, informing a remote scheduler that it must add a schedulable actor to its schedulable list is essential; otherwise a deadlock could occur during the execution. In fact, if this scheduling information is not communicated, a self-contained cycle can appear and deadlock the application. For example, in Fig. 4, if the data-fifos f4-7 are full, the actors A3, A4 and A5 form a self-contained cycle that never stops until A2 consumes the tokens contained in f4; but this may never happen if the other scheduler loops on a self-contained cycle too. Another kind of FIFO channel, called scheduling-fifos, is used to communicate between schedulers without synchronization. Lamport proved that locks are not necessary in the case of single-producer, single-consumer FIFOs [7]. It is important, however, not to confuse the two types of FIFOs, which work with the same mechanism but have two distinct uses: the data-fifo channels carry the application stream, while the scheduling-fifo channels share scheduling information (in our case, a set of next schedulable actors). Figure 7 shows the inter-core communication mechanism: when an actor's execution is blocked, the scheduler adds the predecessor or the successors of the blocking data-fifo to its schedulable list; if such an actor is not executed by the current scheduler, it is sent to its associated scheduler through a scheduling-fifo channel.
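The essence of Lamport's result is that a bounded FIFO with exactly one producer and one consumer needs no lock, because each side only ever writes its own index. The following is a hedged Python sketch of the idea; a real C implementation would additionally have to enforce memory ordering between the buffer write and the index update.

```python
# Single-producer / single-consumer ring buffer in the spirit of [7].
class SpscFifo:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # written only by the producer
        self.tail = 0  # written only by the consumer

    def push(self, item):               # producer side only
        nxt = (self.head + 1) % self.capacity
        if nxt == self.tail:            # full (one slot is kept free)
            return False
        self.buf[self.head] = item
        self.head = nxt                 # publish the item last
        return True

    def pop(self):                      # consumer side only
        if self.tail == self.head:      # empty
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.capacity
        return item
```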
We propose two kinds of communication network topology (Fig. 8): mesh and ring. The mesh topology uses a bidirectional communication channel between each pair of schedulers: the distributed schedulers communicate directly, but the number of scheduling-fifos grows quadratically with the number of cores. The ring topology makes it possible to use the distributed scheduler with a limited number of scheduling-fifos: on an N-core processor, N scheduling-fifos are needed. However, a communication may cross N-2 schedulers in the worst case before the targeted scheduler receives it. For example (see Fig. 8(b)), if the scheduler on core 1 wants to communicate with the one mapped on core 4, the schedulers on cores 2 and 3 are used as intermediaries.

Fig. 8: Possible topologies of communications

5. RESULTS

In this section, we present several experimental results to demonstrate the efficiency of our multi-core scheduler on real-world video applications. We also compare our approach with another runtime included in the OpenDF framework on the same applications [6], [11].

5.1. Benchmarks

The Reconfigurable Video Coding (RVC) framework was created by MPEG to increase the reusability and portability of the code of video decoders [5]. The description language of RVC, called RVC-CAL and based on the Dataflow Process Network model, was used to implement the applications used in our experiments. We have implemented the round-robin and combined strategies in the C runtime library of the Open RVC-CAL Compiler (Orcc). These two scheduling strategies have been tested on dataflow descriptions of MPEG-4 Simple Profile and MPEG-4 Advanced Video Coding with video sequences of different sizes. We benchmarked these decoders on an Intel Xeon with four cores at 2.33 GHz. During all the experiments, the data-fifos were bounded to 4096 elements and the scheduling-fifos to 200 elements. The test video sequences are: for MPEG-4 SP, hit001 (CIF) from ISO/IEC 14496-4:2004 and old_town_cross (720p) encoded at 6 Mbps with the Xvid encoder from a YUV file available on [12]; for MPEG-4 AVC, LS_SVA_D (QCIF) and HCBP2_HHI_A (CIF), available on [13]. The results for various configurations are presented in Table 1 for the MPEG-4 SP decoder and in Table 2 for the MPEG-4 AVC decoder. We also benchmarked the MPEG-4 SP decoder on the 720p sequence using the other runtime library included in the OpenDF framework: we obtain 10.1 fps on one core and 15.6 fps on two cores, i.e. a speedup of about 1.54.

| Strategy | Cores | CIF | 720p | Speedup |
|---|---|---|---|---|
| Round-robin | 1 | 144 | 15.6 | 1 |
| Round-robin | 2 | 265 | 26.6 | 1.78 |
| Round-robin | 4 | 494 | 51.4 | 3.36 |
| Combined | 1 | 154 | 16.1 | 1 |
| Combined | 2 | 288 | 27.3 | 1.75 |
| Combined (ring) | 4 | 443 | 49.8 | 2.98 |
| Combined (mesh) | 4 | 516 | 51.9 | 3.28 |

Table 1: Results of MPEG-4 SP for various configurations, in frames per second

| Strategy | Cores | QCIF | CIF | Speedup |
|---|---|---|---|---|
| Round-robin | 1 | 28.4 | 7.1 | 1 |
| Round-robin | 2 | 55.6 | 13.9 | 1.96 |
| Round-robin | 4 | 90.8 | 21.2 | 3.05 |
| Combined | 1 | 169 | 40.6 | 1 |
| Combined (ring) | 2 | 294 | 71.4 | 1.74 |
| Combined (mesh) | 4 | 341 | 75.2 | 1.93 |

Table 2: Results of MPEG-4 AVC for various configurations, in frames per second

5.2. Mapping validation using a genetic algorithm

A genetic algorithm was developed to find efficient static mappings of the actors onto the processor cores, and it was used during these experiments. Most of the time, dynamic dataflow programs can easily be partitioned on a dual-core processor by hand, thanks to the explicit parallelism of dataflow representations. However, this becomes increasingly complex as the number of cores and actors grows. For example, the dataflow description of MPEG-4 SP is composed of 42 actors, and that of MPEG-4 AVC of 131 actors; more than a thousand possible mappings of a dataflow program onto a multi-core processor is quickly reached.

5.3. Discussion

The data-driven / demand-driven strategy shows its efficiency on the MPEG-4 AVC decoder, which contains many actors and a lot of control flow. Moreover, the data-driven / demand-driven strategy, and consequently the combined strategy, is slightly more efficient than the round-robin strategy even on small applications like MPEG-4 SP. Our multi-core extension of these single-core scheduling strategies is validated by the high speedups obtained compared to the maximal theoretical speedups. The results also show that the round-robin strategy is better than the data-driven / demand-driven strategy on four cores for MPEG-4 Simple Profile: most of the time the round-robin scheduler executes a fireable actor, because this application is described with few actors.
When MPEG-4 SP is partitioned over multiple cores, more scheduling operations are required by the data-driven / demand-driven strategy, which leads to a slightly increased overhead. Finally, the speedup obtained with the OpenDF framework is lower than the ones obtained with our two scheduling strategies for the same mapping of actors onto the processor cores.

6. RELATED WORK

In [10] and [14], Parks and Haid et al. deal with the implementation and scheduling of Kahn Process Networks. Contrary to Dataflow Process Networks, the context switches of process suspension and resumption cannot be avoided, which leads to an inevitable overhead. Haid et al. chose a lightweight, stackless thread implementation to minimize this overhead, whereas Parks presents a combined demand-driven and data-driven strategy; unfortunately, he gives no results we can use for comparison. In [15], Aldinucci et al. present a low-level programming framework based on lock-free queues dedicated to multiprocessor streaming applications. Like us, they use lock-free communication channels to avoid synchronization between threads. In their model, all channels with multiple writers and readers are built by assembling a set of single-reader, single-writer channels with an external thread that manages the data copies between these channels. This approach avoids cache invalidation, but many data transfers are needed to hide the overhead of switching between threads. In [16], Boutellier et al. present a methodology to map and schedule actors on multiprocessors. They begin by transforming the RVC-CAL network into a set of homogeneous synchronous dataflow graphs. Unfortunately, these graphs cannot be generated when an actor execution depends on the input token values. Moreover, the small MPEG-4 SP decoder was their only test case, and the complexity of such static techniques increases exponentially with the application size.

7. CONCLUSION

This paper proposes a new approach to efficiently schedule dynamic dataflow programs with a lock-free and distributed algorithm on multi-core architectures, based on the two single-core scheduling strategies presented. The results of the experiments show that our multi-core scheduler scales well up to four cores. In future work, we will focus on stream communications between cores to improve the application speedup, and we will extend our multi-core scheduling algorithm to dynamically map the actors onto the processor cores.

8. REFERENCES
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-00687750/file/yviquel.pdf", "len_cl100k_base": 5174, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 21704, "total-output-tokens": 6367, "length": "2e12", "weborganizer": {"__label__adult": 0.0005044937133789062, "__label__art_design": 0.0007452964782714844, "__label__crime_law": 0.0005793571472167969, "__label__education_jobs": 0.0005922317504882812, "__label__entertainment": 0.00022411346435546875, "__label__fashion_beauty": 0.00022268295288085935, "__label__finance_business": 0.0002639293670654297, "__label__food_dining": 0.000537872314453125, "__label__games": 0.0010967254638671875, "__label__hardware": 0.0094757080078125, "__label__health": 0.0008802413940429688, "__label__history": 0.0005125999450683594, "__label__home_hobbies": 0.0001512765884399414, "__label__industrial": 0.0012044906616210938, "__label__literature": 0.00026035308837890625, "__label__politics": 0.0005030632019042969, "__label__religion": 0.00087738037109375, "__label__science_tech": 0.467529296875, "__label__social_life": 9.310245513916016e-05, "__label__software": 0.010589599609375, "__label__software_dev": 0.50146484375, "__label__sports_fitness": 0.0004317760467529297, "__label__transportation": 0.0010786056518554688, "__label__travel": 0.0002906322479248047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25967, 0.04568]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25967, 0.28267]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25967, 0.8906]], "google_gemma-3-12b-it_contains_pii": [[0, 1058, false], [1058, 5175, null], [5175, 9095, null], [9095, 12788, null], [12788, 16704, null], [16704, 21247, null], [21247, 25967, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1058, true], [1058, 5175, null], [5175, 9095, null], [9095, 12788, null], [12788, 16704, null], [16704, 21247, null], [21247, 25967, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25967, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25967, null]], "pdf_page_numbers": [[0, 1058, 1], [1058, 5175, 2], [5175, 9095, 3], [9095, 12788, 4], [12788, 16704, 5], [16704, 21247, 6], [21247, 25967, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25967, 0.12319]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
daf4b42ca9c582bffc8ace8edc8869d09f89ae34
Career in Open Source? Relevant Competencies for Successful Open Source Developers

Nicole Kimmelmann*, Friedrich-Alexander-University Erlangen-Nürnberg
* Correspondence author: nicole.kimmelmann@wiso.uni-erlangen.de

Summary

Open Source (OS) offers new career paths for software developers. The article describes relevant competencies in a systematic structure along characteristic principles and challenges in Open Source projects. The results are based on a Grounded Theory content analysis of interviews with Open Source software developers, their project managers and human resource managers in Open Source software companies. Implications for future human resource management in software companies are presented as an outlook.

Keywords

ACM CCS → Software and its engineering → Software creation and management → Collaboration in software development → Open source model; ACM CCS → Social and professional topics → Professional topics → Computing education

1 Introduction

Previous analyses of OS software communities have addressed status dynamics within OS communities [32], contribution patterns [21] and how communities create informal and formal social structures to manage membership and joining processes [38]. But there is a lack of knowledge about which competencies are necessary to reach a specific status in OS projects. As a result, OS career paths have so far seemed quite unpredictable to developers. Which competencies are relevant for working as an OS developer, in comparison to a developer job in the proprietary software sector? How should and could these competencies be supported by human resource management strategies in OS software companies? This article tries to satisfy the need for empirical data to answer these questions from a human resource perspective. Relevant competencies are summarized systematically in a table. The needs and strategies of a holistic human resource management in software companies to support developers in their competency profile are presented, structured in a model, at the end of the article. The answers are of interest to software developers deciding whether to contribute to OS. But the results should also be relevant for OS and closed-source software companies, insofar as the status and success of their developers in OS projects have positive economic effects on the companies' own product planning and distribution. Human resource management for software developers has so far not addressed these new challenges in a systematic way (at least as reflected in the literature). The implications at the end of the article might be an interesting starting point for thinking about the career development of OS software developers in a more comprehensive way.

2 Open Source as a Field of Career for Software Developers

2.1 Career Objectives of Developers Participating in OS Projects

Since OS software gained importance in the late 1990s, there has been a lot of research from different disciplines on the motivation of software developers to contribute to OS [1; 3; 9–11; 16]. In particular, the apparently altruistic character of developers, as demonstrated by freely contributing to open software projects, has been studied by researchers [11; 13]. Two main ranges of explanation can be defined [4]. Following the anthropological line of argumentation, the OS engagement of participants is determined by the motive of improving the software to meet their own needs [e.g., 10] and their willingness to invest time because of fun [15; 35].
The idea of "intrinsic motivation" [8] is connected to this ideal of a gift-based OS community linked to reciprocity and kinship. More economically oriented studies showed that the individual rationality of the developers, aimed at maximizing "profits" such as improving their own programming skills [e.g., 11; 16; 36] or earning reputation and credit [6; 15; 27; 30; 39], must be seen as an equivalent inducement for OS engagement. For example, at least 61% of respondents in the investigation of Bitzer et al. [3] believed that OS activities benefit their career. Consistent with these results, the scientific literature has noticed a growing phenomenon of developers creating OS software as a component of their paid employment [1; 23; 26]. This includes the strategic integration of OS into the developer's own professional career profile. Developers who perform successfully in OS projects might be able to offset their opportunity costs by signaling marketable skills to future employers. This can lead to higher job versatility [19] and higher wages [18]. Despite those theoretical deliberations on the (monetary) effects of OS activities, there is hardly any empirical research about the real careers of OS developers. Existing studies only show that OS engagement by itself does not guarantee better career chances. Actually, it is the committer status earned in those OS projects that really correlates with higher wages, at least when it comes to merit-based Foundation projects [9]. That emphasizes the importance of a particular community career path which developers need to follow in order to achieve positive effects from their OS involvement.

2.2 Relevant Principles of Career in OS Projects

Unlike other occupational communities, OS communities and their developers are not necessarily associated with a single employer or workplace. Instead, members of the community are likely to work toward collective goals outside of their employment [27]. Rules and routines within the OS community are established by the community itself. It is therefore interesting to have a closer look at characteristic principles of career in OS communities. OS is ideal-typically characterized by an egalitarian process of collaboration and contribution, a tradition of public discussion and a merit-based decision-making process [14; 17; 22], in order to attract high-quality contributions from voluntary members [25]. Égalité does not mean that every developer contributes in the same way and to the same amount; rather, people working in the OS community find their own processes and best-fitting projects in a self-organizing way [29]. This openness allows "the results of creativity to be used, developed, and tested by anyone so that everyone can learn from one another" [12, p. 140]. Developers are able to take on different roles in this process and to develop their own competencies quite freely, in contrast to other disciplines. But the community "must integrate the individual contributions into a common pool, which can heighten interdependencies and the need for coordination mechanisms (e.g. Thompson, 1967)" [25, p. 1081]. Successful OS projects often have strong leaders [21; 22]. These distinctive principles influence careers in OS projects as well: in contrast to closed software development, positions in the OS community are not simply assigned but need to be earned in the opinion of the community. The community members have a big influence in promoting someone implicitly or explicitly.
That means successful developers must be willing and able to fulfill the philosophy and criteria of the community. Merit is rewarded with greater status, responsibility, or the opportunity to enhance the development of the OS software [32; 38]. The following model of a typical career path within OS projects illustrates these principles on a more specific level. It is based on the three main positions identified in different OS projects: user, contributor and committer. Of course, the presented model can only be interpreted in its simplified form, as OS projects differ in their complexity, hierarchy and forms of interaction between their members [25]. The outlined career path should be seen as exemplary, integrating typical roles and activities confirmed in different, more complex models [e.g. 20]. As Fig. 1 shows, reaching the highest level, called "committer", can only be achieved through a multi-stage process from user to contributor up to committer. This career requires, first of all, being an active part of the OS project community. This communal process reduces the risk of a wrong decision [29]. The first career position, as a contributor, seems to be achievable quite easily for gifted and encouraged developers, as there are no entry barriers for users to the OS market. Keeping in mind the never-ending need for new features to improve the quality of OS software, developers can select their favourite project to work on and develop their expertise. Contributors differ from normal users insofar as they are recognized and accepted by the community. The contributor status enables them to submit patches, fix bugs or provide small features. Besides, the status does not bring a sustainable advantage, yet takes a lot of time and effort. The second career step, to the level of committer, is connected with an explicit promotion by the existing committers. That authority is not handed out easily. In other words: entrance for new developers into OS projects at this level is restricted, as the achieved status is usually permanent; this limits the possible number of committers in one project. In contrast to the first career step, more than just technical competencies seem to be relevant to receive this grant. Leading positions require skills in building the organization [25]. Finding out more about these necessary competencies was the main purpose of our research project at the University of Erlangen-Nuremberg.

3 Methodology

3.1 Research Design

The results are based on a grounded theory research design [5] using guided interviews with two OS software developers (interviews 1 and 2), their project managers (interviews 3 and 4) and human resource managers (interviews 5 and 6). Each interview was approximately 90 minutes long and followed a dedicated interview guide for each set of interviewees. In order to gain access to the field, but also to combine the informatics and human resource perspectives, the interviews were led by a team consisting of a professor for OS software (Prof. Dr. Dirk Riehle, University of Erlangen-Nuremberg) and an assistant professor for professional competency development (Prof. Dr. Nicole Kimmelmann, University of Erlangen-Nuremberg). The theory-building process was characterized by a triangulation of data collection and data analysis, which made it possible to integrate interesting aspects of the first set of data into the sampling and data collection of the second company and group of participants.
Participants were asked to describe relevant competencies of OS software developers depending on their stage of career/position in OS projects. Possible connections between the status in the OS community and the position/status in their OS company were of particular interest as well. Relevant competencies for the future were discussed at the end of the interviewing process in order to derive implications for a Human Resource development programme within software companies. The three-perspective format (including the Human Resource managers) made it possible to gather data from the practitioners’ point of view but also to be aware of their “blank spots” when it comes to their own relevant competencies.

3.2 Sampling

The interviewees were chosen following the idea of theoretical sampling [5] from two companies developing OS software based on the Linux kernel. Both companies are well established in the OS software field with their own OS products. The actual selection of the companies was also based on the idea of contrasting a small and a big OS software company; the number of employees ranges from 45 to 350 people. All interviewees were selected by the companies themselves and took part in the research project voluntarily. Both participating software developers had at least 10 years of working experience in the field of OS development and were committers in several OS projects. The project managers have been developing software in OS software companies/projects for more than 15 years; their teams currently comprise 13 and 45 software developers, respectively. The Human Resource managers have both been part of the companies for more than five years and were previously responsible for professional development in similar positions in companies outside the field of informatics.

3.3 Quality of the Results

All interviews were transcribed and analysed using the qualitative content analysis software MAXQDA for a systematic and transparent process. Similar aspects and corresponding concepts were categorized in a multidimensional hierarchy of competencies. The analysis of the interviews was discussed by both researchers in a pair-analysing process in order to increase the internal validity of the data analysis. The presented results are to be seen as confirmed in both samples. Conflicting points shown in the interviews must be analysed in further research steps including a broader sampling and range of interviews.

4 Relevant Competencies of Successful OS Software Developers

The following table summarizes the necessary profile of successful developers on their way to the committer status as it was recorded in the interviews. The structure of the table connects distinctive characteristics of the OS software community and its work with relevant competencies of the developers in order to systematize the competencies. Referring to the often-cited educational competence model of Erpenbeck and Von Rosenstiel [7], the described competency profile distinguishes between technical (T), social (S) and personal (P) competencies. Technical competencies are relevant technical knowledge (like programming in Linux), documented technical experience and corresponding attitudes that are relevant for the successful implementation of OS software. Social competencies are understood as interpersonal skills required to support the software development and distribution process or the person’s own career in an explicit way. Personal competencies include attitudes, values and motivation of the software developer.
They also include strategies and skills for organizing one’s personal life and developing one’s own personality. These factors can support or hinder a successful career by regulating the developer’s professional behaviour. They are usually quite stable and strongly connected to the personality of the developer. Nevertheless, they can be changed by an inner process of self-reflection [7].

Table 1. Relevant Competencies of Successful OS Software Developers (source: own table based on empirical data).

<table>
<thead>
<tr> <th>Distinctive characteristics of OS software development</th> <th>Relevant competencies of OS developers</th> </tr>
</thead>
<tbody>
<tr> <td>Egalitarian process of collaboration/contribution</td> <td>Technical: programming; “architecture competency”; “implementation of new features without disturbing others’ work”</td> </tr>
<tr> <td>Philosophy of “social give and take”</td> <td>Social: “e-mail competency”; capacity for teamwork; “not being arrogant against others”. Personal: altruistic character; “to want to be in on the whole”; motivation to improve software; motivation through acknowledgement</td> </tr>
<tr> <td>Tradition of public discussion</td> <td>Technical: implementation of feedback. Social: active communication skills; “e-mail competency”</td> </tr>
<tr> <td>Self-organized working processes</td> <td>Personal: intrinsic motivation to work in OS; ability to learn; openness to new things and approaches; persistence; time management; ability to adapt to changing situations; self-organisation; demanding high quality of one’s own work; curiosity</td> </tr>
<tr> <td>International community</td> <td>Technical: identification of potentially successful projects; gaining recognition and earning reputation; a high number of quality patches; documentation of work. Social: presentation skills; ability to establish and maintain contact with the community. Personal: motivation for participation in community life; internalisation of the community’s “social give and take” philosophy</td> </tr>
</tbody>
</table>

In the following discussion, important aspects of the results are highlighted and explained further.

- High importance of social competencies
Being a successful member of an OS project is not limited to technical competencies. The interviewees emphasized a “well-balanced competency profile” (Interview 1) for successful developers. Indeed, it is the social competency field that is crucial across all levels of career (Interviews 1–6). The high importance of these competencies refutes the prejudice of the socially incompetent “nerd” in a very comprehensive way.

- High importance of communication skills and teamwork
Communication skills and the ability to work in a team are the most important competencies of all, as developing OS software is mainly organized through e-mail communication and global virtual teams (Interviews 5 and 6). Relevant competencies regarding this kind of communication are summarized in the category “e-mail competency”. This includes skills like answering other users’ questions in a friendly manner (Interviews 1 and 4) and with respect to their (cultural) communication style (Interview 2), or dealing with criticism from other members of the community via mail (Interviews 1, 4, 5 and 6).

- Requirement of architectural competencies
Programming in OS is more than knowledge of a particular piece of software. In order to find missing features or integrate them easily into new projects, to work toward common solutions in teams, and to respect the bilateral feedback culture within the community, developers need an understanding of how systems are built (Interview 4). Otherwise they are not able to “think outside the box” (Interview 4).

- Building an honourable reputation
The rights of committers are connected with trust from the community and its clients. No community is searching for unpopular outsiders (Interview 2). Developers need to gain visibility among OS members by asking relevant questions of maintainers (Interview 1), giving presentations at community conferences (Interviews 1 and 2) or attending working groups in the community (Interview 1). The status can be confirmed by public examination of the applicant’s work: “code talks” (Interview 1). That means not only being competent and doing the right thing, but also living the philosophy of social give and take and showing socially competent behaviour towards other members (Interview 1).

The results illustrate the complexity of the successful developer’s profile and the need for strategic planning of a career in OS. Applicants must take the chance to get into the right project at the right time, and they need the competencies to be recognized and accepted in order to do so. When the interviewees were asked about skills demanded in the future, the relevance of the described competencies was largely confirmed. Technical competencies will remain a stable aspect of a successful developer. But besides code contributions, documentation work is of increasing relevance, as this more passive kind of work is increasingly reflected in reputation. Social competencies are of growing importance for future success in OS projects, as users and developers are becoming more diverse in their cultural/linguistic backgrounds and personal expectations of OS software. OS developers are not part of an inner circle of equals but members of a global community which includes users without technical experience. That means committers need to adapt products to new target groups. This makes English language skills, empathy, communication skills and the ability to provide constructive feedback necessary.
This overall visibility requires a person who is willing to act in the public eye of the community. Corresponding personal competencies are the ability to take feedback from others and compliance with the social rules of the community. Because of some cases of discrimination (e.g. against women) in the history of OS, the community seems to be especially sensitive to this part of the work. Therefore aspiring developers should keep these competencies in mind when planning a career in OS.

5 Implications for Human Resource Management in OS Companies

5.1 The Need for a Comprehensive Competence Development Model in OS Software Companies

Building up a profile of the described competencies should not be left solely to the individual developer’s initiative and time, but should also be part of Human Resource Management in software companies. On the one hand, developers involved in important OS projects as committers can become an economic resource, helping the company reach or maintain its position in the market. The team leaders and Human Resource managers of both companies interviewed emphasized the need for an approach to support their developers’ careers inside the community, as “clients connect the committer status with trust in the company’s competencies” (Interview 1). On the other hand, OS work reveals or fosters competencies of a developer that can be relevant for the company when these competencies are transferred to the workplace. Examples of such competencies connected with OS contribution are: fast induction into new software, involvement in current discussions and changes within the IT market, a sense of responsibility, and being a communicative part of a team (Interviews 1–6). In conclusion, software companies have multiple reasons to promote their developers toward committer status in order to gain influence in relevant OS projects. In order to realize competency development that satisfies both the interests of the company and those of the developer, Human Resource Management of software developers must be planned and implemented in a comprehensive way, including the needs and motivations of the OS community, the developer and the company at the same time. The following section summarizes relevant principles to achieve this goal.

5.2 Elements of a Comprehensive Competence Development Model in OS Software Companies

Arranged around a common Human Resource Management model, the following implications for Human Resource managers in OS software companies can be formulated as an outlook. The model systematizes the relevant aspects of planning and implementing Human Resource strategies, with reference to the above-mentioned competencies, as a process. Successful strategies of Human Resource Management in the companies interviewed are integrated in order to illustrate the abstract recommendations for further research and practice.

- Demand Analysis
Within the demand analysis, qualification requirements and potentials of the developers are identified in comparison with the OS goals of the company: which competencies should be developed to maximize individual and economic success with the committer status? This first step of Human Resource Management can be initiated by the developer or by the company. Crucial for success is the involvement of the developer at a very early stage. The focus of the analysis can be either the deficits or the potential of the developer.

- **Activity plan** The activity plan must consolidate the motivation of the prospective participating developer.
Competence development must be initiated intrinsically (Interview 5). For that, the personal motivation of the developer needs to be considered. A developer’s consideration might be: “Is the time for participation in a Human Resource training equivalent to the effect towards my career?” (Interview 6). As the interviewees rated the intrinsic motivation of OS software developers extremely high, inducements to take part in a Human Resource activity should be connected with the personal goals the developer would like to achieve in the OS community (Interviews 1, 3, 4, 5 and 6). In order to choose the right projects, connections to the community are compulsory (see Fig. 2). Existing committers in projects can be gate-openers, informing the company about upcoming changes in projects (Interview 1). Otherwise, developers need working time to analyse the market in order to find the most appropriate project (Interviews 3 and 6).

- **Action-taking** Human Resource managers must be sensitized to possible and suitable ways of learning outside and within the OS community. Flexible arrangements of competence development, including informal or social ways of learning (e.g. pair programming with colleagues), are suitable with regard to the relevant social competencies mentioned above (Interview 2).

- **Performance Test** Given the public documentation of OS activities, this part of the model can be implemented quite easily, e.g. by reviewing the developer’s new behaviour on mailing lists in open source projects (Interviews 1, 4 and 6). The sustainable success of these strategies is influenced by the superordinate procedures shown in the outer circle around the four steps.

- **Transfer** The competence development model is based on the idea that competences acquired as a committer in OS projects can be transferred to the workplace in the company. Further research will need to show how successful this transfer can be for various kinds of competencies. From the Human Resource perspective, transfer into daily work life is influenced by the possibility to practice the new competencies in an adequate way and time frame. Managers can contribute to this sustainable development through a corresponding organisation of the team structure, in consultation with the project managers. Developers should be encouraged to continue their OS activity in a specially reserved part of the working time. Continuous mentoring programmes with other committers in the company can be a supplementary strategy to support developers on their way to higher status in OS projects (Interviews 5 and 6).

- **Quality Management** Human Resource strategies for developers must not be decoupled from the strategic planning of the company. Therefore, criteria for success and improvement of the developer need to be specified before starting a competence development activity. Both sides – developer and company – should clarify their expectations of the committer status.

- **Performance Improvement** Performance improvement takes into consideration any structural or material support the developer needs to realize a career as committer in a particular OS project. But it also analyses, before starting a corresponding activity, possible aspects that hinder the developer from working at full potential. That is: reasons for deficits in the competence profile of a developer might be caused by the working atmosphere in the company or in the community itself, and these need to be analysed.
- **Evaluation/Educational Controlling** This procedure underlies the whole process and embraces two parts: the satisfaction of the developer (evaluation) and the economic effects for the company (educational controlling). Further research needs to show how both aspects can be realized simultaneously, and in which way the committer status of developers affects the company’s return on investment. OS is an extraordinary working field when it comes to the question of measuring competencies. As the work itself is accessible to everyone in the community, the contributions of everyone are also archived for open review by its members. “The net never forgets” (Interview 4). This public profile is not limited to technical competencies. Social competencies can be observed by following the GitHub repositories and mailing lists as well.

Acknowledgements

I would like to record my gratitude to Prof. Dirk Riehle from the University of Erlangen-Nürnberg for his important part in this research project, which was based on his interest in this topic. He was responsible for the sampling and the contact with cooperating partners. The interviews were led together, and the first coding was done in a pair-coding partnership. He was also a valuable feedback partner. The comments of the reviewers were an excellent help in revising the article as well.

References

Received: March 6, 2013
{"Source-Url": "https://dirkriehle.com/wp-content/uploads/2013/10/itit.2013.1009.pdf", "len_cl100k_base": 6410, "olmocr-version": "0.1.42", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 25226, "total-output-tokens": 8495, "length": "2e12", "weborganizer": {"__label__adult": 0.0008273124694824219, "__label__art_design": 0.0007524490356445312, "__label__crime_law": 0.0008993148803710938, "__label__education_jobs": 0.09356689453125, "__label__entertainment": 0.00016999244689941406, "__label__fashion_beauty": 0.0003707408905029297, "__label__finance_business": 0.0069122314453125, "__label__food_dining": 0.00081634521484375, "__label__games": 0.0013103485107421875, "__label__hardware": 0.0005736351013183594, "__label__health": 0.000904083251953125, "__label__history": 0.0003571510314941406, "__label__home_hobbies": 0.00026798248291015625, "__label__industrial": 0.0006709098815917969, "__label__literature": 0.0007147789001464844, "__label__politics": 0.0008668899536132812, "__label__religion": 0.0007452964782714844, "__label__science_tech": 0.005725860595703125, "__label__social_life": 0.0006799697875976562, "__label__software": 0.01019287109375, "__label__software_dev": 0.87060546875, "__label__sports_fitness": 0.0007734298706054688, "__label__transportation": 0.0008411407470703125, "__label__travel": 0.0004584789276123047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36307, 0.03215]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36307, 0.23831]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36307, 0.89265]], "google_gemma-3-12b-it_contains_pii": [[0, 2400, false], [2400, 8409, null], [8409, 12596, null], [12596, 17258, null], [17258, 22969, null], [22969, 27458, null], [27458, 34094, null], [34094, 35101, null], [35101, 36307, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2400, true], [2400, 8409, null], [8409, 12596, null], [12596, 17258, null], [17258, 22969, null], [22969, 27458, null], [27458, 34094, null], [34094, 35101, null], [35101, 36307, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36307, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36307, null]], "pdf_page_numbers": [[0, 2400, 1], [2400, 8409, 2], [8409, 12596, 3], [12596, 17258, 4], [17258, 22969, 5], [22969, 27458, 6], [27458, 34094, 7], [34094, 35101, 8], [35101, 36307, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36307, 0.22527]]}
olmocr_science_pdfs
2024-11-22
2024-11-22
adfdd76296ad536c344ef805a770d7fc58d5e4a6
Vectorization of the 2D Wavelet Lifting Transform Using SIMD Extensions D. Chaver, C. Tenllado, L. Piñuel, M. Prieto, F. Tirado Departamento de Arquitectura de Computadores y Automática Facultad de C.C. Físicas. Universidad Complutense. Ciudad Universitaria s/n 28040 Madrid. Spain. {dani02,ctenllado,lpinuel,mpmatias,ptirado}@dacya.ucm.es Abstract This paper addresses the vectorization of the lifting-based wavelet transform on general-purpose microprocessors in the context of JPEG2000. Since SIMD exploitation strongly depends on an efficient memory hierarchy usage, this research is based on previous work about cache-conscious DWT implementations [1,2,3]. The experimental platform on which we have chosen to study the benefits of the SIMD extensions is an Intel Pentium-4 (P-4) based PC. However, unlike other authors [4], the vectorization has been performed avoiding assembler language programming in order to improve both code portability and development cost. Index Terms— JPEG2000, lifting, SIMD optimization. 1. Introduction A significant amount of work on the optimization of the lifting-based 2D discrete wavelet transform (DWT) has been performed in recent years in the context of the JPEG2000 [1,2,5]. This interest is caused by the considerable percentage of execution time involved in this component of the standard. According to some authors, it accounts for 40-60% [1,2,5] of the JPEG2000 encoding time. From a performance point of view, one of the main bottlenecks of the DWT is caused by the discrepancies between the memory access patterns of the two principal components of the 2D DWT: the vertical and the horizontal filtering. These differences cause one of these components to exhibit poor data locality in the straightforward implementations of the algorithm. As a consequence, most of the previous work about DWT performance optimization has been focused on memory hierarchy exploitation. A different strategy has been followed in [4]. In this case, the performance of the JPEG2000 DWT has been improved by means of fixed-point arithmetic and Intel’s MMX ISA (Instruction Set Architecture) extensions. The aim of our research is to structure the lifting computations in order to take advantage of both the memory hierarchy and the SIMD parallelism. In fact, as we have shown in previous studies, an efficient exploitation of the SIMD ISA extensions available in modern microprocessors strongly depends on an efficient memory hierarchy usage [3,6]. The experimental platform on which we have chosen to study the benefits of the SIMD extensions is an Intel Pentium-4 (P-4) based PC. Despite using a specific platform we should remark that, unlike other studies [4,7], we have avoided coding at the assembly language level in order to improve portability (it also prevents long development times). The rest of this paper is organized as follows. Some related work and the experimental environment are covered in Sections 2 and 3 respectively. Section 4 describes some details of our DWT implementations and discusses the performance results obtained without vectorization, which is analyzed in detail in Section 5. Finally, the paper ends with some conclusions. 2. Related Work As mentioned above, the performance optimization of the DWT is not a new research issue. The optimization of both the convolution-based and the lifting-based DWTs has already been done for all sorts of computer systems. Focusing on the target of this paper, i.e. 
general-purpose microprocessors, several optimizations aimed at improving the cache performance have been proposed in [1,2,8]. Basically, [1] investigates the benefits of traditional loop-tiling techniques, while [2,8] investigate the use of specific array layouts as an additional means of improving data locality. The thesis of [8] is that row-major or column-major (canonical) layouts are not advisable in many applications, since they favor the processing of data in one direction over the other. For the convolution-based DWT, they studied the benefits of two non-linear layouts, known in the literature as 4D and Morton [8]. In these layouts the original \( m \times n \) image is conceptually viewed as an \([m/\text{tr}] \times [n/\text{tc}]\) array of \( \text{tr} \times \text{tc} \) tiles. Within each tile, a canonical (row-major or column-major) layout is employed. In [2], this study has been extended with an analysis of the lifting-based DWT, although for this approach a slightly different non-linear layout is employed (more details are given in Section 4.1). The approach investigated in [1] is less aggressive. Nevertheless, these authors addressed the memory exploitation problem in the context of the whole JPEG2000 image coding application, which is more tedious to optimize than a wavelet kernel. The solution they investigated, which they dubbed aggregation, is similar to the classical loop-tiling strategy that we have applied to the vertical filtering in [12] (the component that lacks spatial locality if a row-major layout is employed).

3. Experimental platform

Our experimental platform consists of a P-4 (2.4 GHz Model 2) machine running under Linux; its main features are summarized in Table 1 (see also [9]).

Table 1. Pentium-4 system main features (processor, L1 data cache, L2 unified cache, motherboard, memory, operating system, and ICC/GCC compiler switches).

All optimizations have been carried out at the source code level to avoid assembly language programming. In addition, in order to isolate compiler effects, we have employed two different compilers: the GNU GCC 3.2 [10] and the Intel C/C++ 6.0 (ICC) [11]. In both cases, we have used generic optimization switches. Both compilers provide access to the P-4’s SIMD ISA extensions (known as SSE: Streaming SIMD Extensions) by means of the same set of intrinsic functions, which allows C/C++-style coding instead of assembly language [10,11]. Consequently, the same hand-tuned code is employed in both cases. In addition to these intrinsics, most of which map one-to-one to SSE instructions, the ICC provides an automatic vectorizer, which in our case is activated by means of the “–vec –restrict” switches. Nevertheless, for the programs under study, fully automatic vectorization is not possible, and both code modifications and guided compilation are required.

4. Memory hierarchy exploitation

4.1 Implementation details

Two types of DWT are considered in the JPEG2000: the lossless algorithm is based on an integer 5-tap/3-tap filter, whereas the lossy compression uses the popular Daubechies 9-tap/7-tap floating-point filter [4].
In this paper we have only covered the latter, although the proposed optimizations can also be applied to the reversible filter. In particular, our implementation uses single-precision data to represent both the image elements and the wavelet coefficients. Nevertheless, we should also mention that, at the time of writing this paper, we are analyzing the potential benefits of using fixed-point arithmetic. Due to the memory hierarchy bottleneck, an important design decision must be made regarding memory management. As is well known, the lifting scheme allows an inplace computation of the DWT [4], i.e. the transform can be calculated without allocating auxiliary memory (see Figure 1). However, this memory saving comes at the cost of scattering the wavelet coefficients throughout the original matrix, which in the context of the JPEG2000 involves a post-processing step where the coefficients are rearranged. This way, each sub-band is contiguously stored in memory, which simplifies the subsequent quantization stage [2]. In order to avoid this rearrangement overhead, we have also considered two additional strategies. The first one, which we have denoted by mallat, was proposed in [2]. It uses an auxiliary matrix to store the results of the horizontal filtering. In this way, as Figure 2 shows, the horizontal high- and low-frequency components are not interleaved in memory. The vertical filtering reads these components and writes the results into the original matrix following the order expected by the quantization step. In order to improve data locality we have employed a recursive data layout [2] where each sub-band is laid out contiguously in memory. As we will explain below, this approach also allows a better exploitation of the SIMD parallelism. **Figure 1: Inplace strategy (logical view on the top; recursive data layout on the bottom).** Furthermore, we have introduced an additional strategy, which we have denoted by inplace-mallat. It can be considered a trade-off between the inplace and mallat alternatives. It performs the horizontal filtering inplace but uses an auxiliary matrix to store the final wavelet coefficients as soon as they are computed. In this way, at the end of the calculations, the transformed image is stored in the expected order, thus avoiding the post-processing stage. As above, a recursive data layout is employed in order to improve data locality. Figure 3 graphically describes this alternative. Only the low-frequency components in each direction (denoted by LL) are stored in the original matrix (apart from the deepest decomposition level), whereas the other components (denoted by LH, HL and HH) are moved into the auxiliary matrix in their correct final positions. The recursive data layout improves the spatial locality of the memory access pattern. Nevertheless, further data locality improvements are possible by means of loop-tiling (aggregation in [1]). Supposing a column-major layout on every wavelet sub-band (the whole image for the inplace strategy), memory access becomes a bottleneck in the horizontal filtering. In order to reduce this overhead, instead of processing the image rows one after the other, which produces very low data locality, the horizontal filtering is applied column by column so that the spatial locality can be exploited more effectively.
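The tiled layouts discussed in Sections 2 and 4.1 can be summarized as an index mapping. The sketch below is a minimal illustration of the general idea, assuming tr × tc tiles stored contiguously in row-major order with a row-major layout inside each tile; the parameter names and function are our own, not the exact layouts of [2,8].

```
#include <stddef.h>

/* Flat index of element (r, c) in an m x n image stored as an
   (m/tr) x (n/tc) array of contiguous tr x tc tiles.
   Assumes tr divides m and tc divides n. */
static size_t tiled_index(size_t r, size_t c, size_t n,
                          size_t tr, size_t tc)
{
    size_t tile_row = r / tr, tile_col = c / tc;  /* which tile       */
    size_t in_row   = r % tr, in_col   = c % tc;  /* position in tile */
    size_t tiles_per_row = n / tc;
    /* tiles laid out row-major; elements row-major inside each tile */
    return (tile_row * tiles_per_row + tile_col) * (tr * tc)
         + in_row * tc + in_col;
}
```

Because each tile is contiguous in memory, both row-wise and column-wise filtering sweeps touch far fewer cache lines than with a canonical layout.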
**Figure 2: Mallat strategy (logical view on the top; recursive data layout on the bottom).**

4.2 Performance results

Figures 4 and 5 show the experimental results for the different strategies under study using different image sizes. The reported execution times correspond to the processing of a single color component, treating the entire image as a single tile.

Figure 4: Execution time breakdown for 256², 512² and 1024² images using both compilers. I, IM and M denote the inplace, inplace-mallat and mallat strategies respectively. Each bar shows the execution time of each level and the post-processing step (when required), denoted by Post.

As expected, the mallat and inplace-mallat approaches outperform the inplace version for levels 2 and above. The reason for this behavior lies in the improved data locality introduced by the recursive layout. On the other hand, we observe that these approaches also entail a noticeable slowdown for the first decomposition level, due to both a larger working set (remember that they require an auxiliary matrix) and a more complex access pattern. However, this overhead is more than compensated in the inplace-mallat version, which achieves the best global execution time once we take into account the post-processing stage required in the inplace case. In contrast, this does not happen in the mallat approach, which exhibits the poorest performance in most cases.

Figure 5: Execution time breakdown for 2048², 4096² and 8192² images using both compilers.

Finally, focusing on compiler performance, we should note that the native ICC compiler outperforms GCC in the mallat and inplace-mallat approaches. However, and contrary to our expectations, the opposite behavior is observed for the inplace code.

5. Vectorization

5.1 Semi-automatic vectorization

From a programmer’s point of view, the most convenient way to exploit SIMD extensions is automatic vectorization, since it avoids low-level coding techniques, which are platform dependent. Nevertheless, loops must fulfill some requirements in order to be automatically vectorized, and in most practical cases both code modifications and guided compilation are necessary. On our experimental platform, this kind of vectorization is only possible using ICC [11]. In particular, this compiler can only vectorize simple loop structures. Primarily, only inner loops with simple array index manipulation (i.e. unit increment) which iterate over contiguous memory locations are candidates. In addition, global variables must be avoided, since they inhibit vectorization. Finally, if pointers are employed inside the loop, pointer disambiguation is mandatory (this must be done by hand using compiler directives). Considering these restrictions, and supposing column-major layouts, only the horizontal filtering can be automatically vectorized (see Algorithm 1). Furthermore, in the case of the *inplace* version, the vectorization is limited to the first decomposition level, since data are interleaved above this level.

```
/* Column loop */
for (j = 2, k = 1; j <= num_columns - 4; j += 2, k++) {
    /* Vectorizable row loop: every 4 rows of each column
       are operated on in parallel */
    #pragma vector aligned
    for (i = 0; i < num_rows; i++) {
        /* 1st operation */
        col_jp3[i] = col_jp3[i] + alfa * (col_jp4[i] + col_jp2[i]);
        /* 2nd operation */
        col_jp2[i] = col_jp2[i] + beta * (col_jp3[i] + col_jp1[i]);
        /* 3rd operation */
        col_jp1[i] = col_jp1[i] + gama * (col_jp2[i] + col_j[i]);
        /* 4th operation */
        col_j[i] = col_j[i] + delt * (col_jp1[i] + col_jm1[i]);
        /* Last step: scaling */
        detail_k[i] = (col_jp1[i] = col_jp1[i] * phi_inv);
        aprox_k[i]  = (col_j[i]   = col_j[i]   * phi);
    }
}
```

**Algorithm 1.** Automatically vectorizable horizontal filtering (*mallat*), assuming a column-major layout for the image. The variables col_jm1, col_j, col_jp1, col_jp2, col_jp3 and col_jp4 are local pointers to the Matrix1 columns j-1, j, j+1, j+2, j+3 and j+4 respectively, whereas detail_k and aprox_k are local pointers to the destination Matrix2 columns (see Figure 1).
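As a side note on the pointer disambiguation mentioned above, the sketch below shows one standard way of conveying non-aliasing to the compiler, using the C99 `restrict` qualifier; the function and its name are our own illustration, not code from the paper.

```
/* Declaring the column pointers 'restrict' tells the compiler they
   never alias, which enables auto-vectorization of the row loop;
   without such a guarantee ICC must assume possible overlap. */
static void lift_update(float *restrict dst,
                        const float *restrict left,
                        const float *restrict right,
                        float coeff, int num_rows)
{
    #pragma vector aligned      /* ICC-specific alignment assertion */
    for (int i = 0; i < num_rows; i++)
        dst[i] += coeff * (left[i] + right[i]);
}
```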
5.2 Hand-coded vectorization

Obviously, this approach involves more coding effort than the automatic case, since the SIMD parallelism has to be expressed explicitly. Although intrinsics allow more flexibility, it is also convenient in this case to store the wavelet coefficients contiguously in memory. This way, they can be packed directly into vectorial registers. Under column-major layouts, this means that only the horizontal filtering can be effectively vectorized (as above, just on the first decomposition level for the *inplace* version). The vertical filtering could be vectorized as well, but at the expense of an additional data transposition stage [3], which reduces the benefits of the SIMD parallelism. Although we have considered this strategy in the optimization of the convolution-based DWT, it is not covered in this paper due to space limitations. The interested reader can find more information in [3].

Figure 6. Vectorial computation of a single horizontal lifting stage. The white arrows indicate how the image is scanned.

Figure 6 describes how the vectorial computations are performed in one of the lifting stages. The image is scanned in the same order as in the scalar version, but all the calculations are carried out in groups of four. The specific hand-tuned DWT algorithm is described in Algorithm 2.

**Algorithm 2.** Hand-coded vectorized horizontal filtering (*mallat*), assuming a column-major layout for the image and using the Intel C/C++ Compiler intrinsic functions.
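The body of Algorithm 2 is not reproduced here. Purely as an illustration of the intrinsic-based style it refers to, one lifting operation over a column might be hand-vectorized along the following lines; the pointer names are carried over from Algorithm 1, and the code is a sketch under our own assumptions rather than the paper's actual listing.

```
#include <xmmintrin.h>  /* SSE intrinsics */

/* One lifting operation over a whole column, four floats at a time:
   dst += coeff * (left + right). Assumes 16-byte-aligned columns
   and num_rows divisible by 4. */
static void lift_step_sse(float *dst, const float *left,
                          const float *right, float coeff, int num_rows)
{
    __m128 c = _mm_set1_ps(coeff);            /* broadcast coefficient */
    for (int i = 0; i < num_rows; i += 4) {
        __m128 l = _mm_load_ps(&left[i]);     /* packed loads */
        __m128 r = _mm_load_ps(&right[i]);
        __m128 d = _mm_load_ps(&dst[i]);
        d = _mm_add_ps(d, _mm_mul_ps(c, _mm_add_ps(l, r)));
        _mm_store_ps(&dst[i], d);             /* packed store */
    }
}
```

For instance, the first lifting operation of Algorithm 1 would correspond to a call like `lift_step_sse(col_jp3, col_jp4, col_jp2, alfa, num_rows)`.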
5.3 Performance results

Before analyzing the benefits on the whole DWT, it is convenient to isolate the improvements achieved by the vectorization of the horizontal filtering. Figure 7 compares the scalar and vectorial versions of this processing for the different strategies under study. For the sake of simplicity, only the results for a 1024² pixels image are considered. Nevertheless, similar behavior can be observed for the other image sizes.

As can be noticed, the vectorization achieves a significant performance gain. The speedup ranges between 4 and 6 depending on the strategy. The reason for such a high improvement is due not only to the vectorial computations, but also to a considerable reduction in memory accesses (caused by the exploitation of the packed loads/stores provided by the SSE extensions, the reuse of the vectorial registers, etc.). This enhancement of the memory behavior also explains why the speedup is higher in the first decomposition level, given its larger working set.

Figure 7. Execution time breakdown of the horizontal filtering for all the codes under study using a 1024² pixels image. I, IM and M denote the inplace, inplace-mallat and mallat approaches respectively. S, A and H denote the scalar, automatically-vectorized and hand-coded-vectorized versions respectively.

We should also remark that the speedups achieved by the strategies with recursive layouts (i.e. inplace-mallat and mallat) are higher than their inplace counterparts, since the computation in the latter can only be vectorized in the first level. Nevertheless, in the ICC versions, a small reduction in the execution time is observed in this case for the other levels, due to the use of vectorial memory transfers (which are automatically introduced by the compiler when vectorization is enabled).

Focusing on Intel’s ICC, it is interesting to note that both vectorization approaches (i.e. automatic and hand-tuned) produce similar speedups. In fact, both versions generate almost the same assembly code, which highlights the quality of the ICC vectorizer.

Figure 8. Execution time breakdown for all the codes under study, using a 1024² pixels image. I, IM and M denote the inplace, inplace-mallat and mallat approaches respectively. S, A and H denote the scalar, automatically-vectorized and hand-coded-vectorized versions respectively.

Figure 8 shows the global DWT execution time for the same image size. The improvement achieved by the vectorization of the horizontal processing translates into an overall speedup of between 1.5 and 2. The shortest execution time is reached in this case by the ICC mallat version (when using GCC, both recursive-layout strategies obtain similar results). This surprising behavior of the mallat approach (remember that it provides the worst results in the scalar version) is a consequence of its better performance on the vertical filtering. Unlike in the scalar version, this is now the most costly DWT component, and hence the disadvantages of the mallat horizontal filtering (see Figure 7) are compensated (more details can be found in [9]).

Figure 9 compares the speedup of the different vectorial codes over the inplace-mallat approach (the best scalar version) and over the inplace approach. The speedup grows with the image size since, as mentioned above, the vectorization improvements are due not only to the vectorial computations but also to the better memory usage. On average, the speedup is about 1.8 over the inplace-mallat scheme, growing to about 2 when measured over the inplace strategy. Focusing now on the compilers, ICC clearly outperforms GCC by a significant 20-25% for all the image sizes.

Figure 9. Speedup achieved by the different vectorial codes over the inplace-mallat (top chart) and inplace (bottom chart) versions respectively.

Figure 10. Normalized execution time: execution time to image size (secs/pixel) ratio for the hand-coded vectorial versions (ICC compiler).

Figure 10 compares the performance of the different approaches using a normalized execution time (execution_time/image_size) as the performance metric. This figure emphasizes the superior behavior of the vectorial mallat scheme for the ICC since, as can be noticed, it exhibits the best performance scalability (i.e. the execution time per pixel remains almost constant with the image size).

6. Conclusions

In this paper we have studied the optimization of a JPEG2000-aware lifting-based DWT on modern general-purpose microprocessors. The main conclusions can be summarized as follows:

1. Focusing on the scalar version, a novel scheme based on recursive layouts has been introduced.
This scheme, which we have denoted by inplace-mallat, outperforms both a cache-conscious inplace implementation and a recent approach proposed by Chatterjee et al. [2] (denoted by mallat).

2. Based on our previous studies on SIMD exploitation, we have proposed some code modifications that allow the vectorial processing of the lifting algorithm. Two different methodologies have been explored in the case of the ICC compiler: semi-automatic and intrinsic-based vectorization. Both provide similar results.

3. Exceeding our expectations, the speedup achieved in the horizontal filtering is about 4-6 (depending on the code), since vectorization also reduces the pressure on the memory system. This enhancement translates into a significant performance gain for the whole transform (around 2 on average). In addition, this gain increases with image size.

4. In contrast with the scalar version, the vectorial mallat approach outperforms the other schemes. Moreover, it exhibits better scalability, and its benefits grow with image size.

Finally, we should note that most of our insights are compiler-independent, but additional analysis on other computing platforms would improve the generality of this study. In future research we also plan to integrate the proposed optimizations into a reference implementation of the JPEG2000 in order to improve the diffusion and understanding of our results and to facilitate further comparisons.

7. Acknowledgements

This work has been supported by the Spanish research grants TIC 99-0474 and TIC 2002-750. We would also like to thank the anonymous referees for their valuable comments.

8. References
{"Source-Url": "http://www.researchgate.net/profile/Christian_Tenllado/publication/2544756_Vectorization_of_the_2D_Wavelet_Lifting_Transform_Using_SIMD_extensions/links/00463525da13e4b4a6000000.pdf", "len_cl100k_base": 4801, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 26046, "total-output-tokens": 6035, "length": "2e12", "weborganizer": {"__label__adult": 0.0006537437438964844, "__label__art_design": 0.0011224746704101562, "__label__crime_law": 0.0006833076477050781, "__label__education_jobs": 0.0005125999450683594, "__label__entertainment": 0.0001666545867919922, "__label__fashion_beauty": 0.0003273487091064453, "__label__finance_business": 0.0003731250762939453, "__label__food_dining": 0.0005240440368652344, "__label__games": 0.000797271728515625, "__label__hardware": 0.014617919921875, "__label__health": 0.001163482666015625, "__label__history": 0.0005888938903808594, "__label__home_hobbies": 0.0001518726348876953, "__label__industrial": 0.0014400482177734375, "__label__literature": 0.0003426074981689453, "__label__politics": 0.0005555152893066406, "__label__religion": 0.0011091232299804688, "__label__science_tech": 0.373291015625, "__label__social_life": 8.606910705566406e-05, "__label__software": 0.01018524169921875, "__label__software_dev": 0.58935546875, "__label__sports_fitness": 0.0004529953002929687, "__label__transportation": 0.001071929931640625, "__label__travel": 0.0003027915954589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24686, 0.03277]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24686, 0.48965]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24686, 0.8904]], "google_gemma-3-12b-it_contains_pii": [[0, 3901, false], [3901, 8691, null], [8691, 10753, null], [10753, 12302, null], [12302, 16273, null], [16273, 18142, null], [18142, 20171, null], [20171, 24686, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3901, true], [3901, 8691, null], [8691, 10753, null], [10753, 12302, null], [12302, 16273, null], [16273, 18142, null], [18142, 20171, null], [20171, 24686, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24686, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24686, null]], "pdf_page_numbers": [[0, 3901, 1], [3901, 8691, 2], [8691, 10753, 3], [10753, 12302, 4], [12302, 16273, 5], [16273, 18142, 6], [18142, 20171, 7], [20171, 24686, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24686, 0.09924]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
c8fcfa16d1133dd48e2630ba9b03caf45e0d2de4
[REMOVED]
{"Source-Url": "https://www.springer.com/cda/content/document/cda_downloaddocument/9783319672618-c2.pdf?SGWID=0-0-45-1616244-p181122937", "len_cl100k_base": 7868, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 34766, "total-output-tokens": 10017, "length": "2e12", "weborganizer": {"__label__adult": 0.00031280517578125, "__label__art_design": 0.0005421638488769531, "__label__crime_law": 0.0002211332321166992, "__label__education_jobs": 0.0004513263702392578, "__label__entertainment": 6.210803985595703e-05, "__label__fashion_beauty": 0.0001513957977294922, "__label__finance_business": 0.0002321004867553711, "__label__food_dining": 0.000278472900390625, "__label__games": 0.00037169456481933594, "__label__hardware": 0.0007085800170898438, "__label__health": 0.0003437995910644531, "__label__history": 0.0002696514129638672, "__label__home_hobbies": 6.473064422607422e-05, "__label__industrial": 0.0002543926239013672, "__label__literature": 0.000301361083984375, "__label__politics": 0.00021183490753173828, "__label__religion": 0.0003807544708251953, "__label__science_tech": 0.01482391357421875, "__label__social_life": 6.878376007080078e-05, "__label__software": 0.006591796875, "__label__software_dev": 0.97265625, "__label__sports_fitness": 0.0002009868621826172, "__label__transportation": 0.0003991127014160156, "__label__travel": 0.00018668174743652344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40785, 0.0443]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40785, 0.40604]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40785, 0.88071]], "google_gemma-3-12b-it_contains_pii": [[0, 2526, false], [2526, 5958, null], [5958, 7817, null], [7817, 10447, null], [10447, 13309, null], [13309, 16483, null], [16483, 19152, null], [19152, 21113, null], [21113, 22821, null], [22821, 25246, null], [25246, 27923, null], [27923, 30880, null], [30880, 34142, null], [34142, 37087, null], [37087, 40535, null], [40535, 40785, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2526, true], [2526, 5958, null], [5958, 7817, null], [7817, 10447, null], [10447, 13309, null], [13309, 16483, null], [16483, 19152, null], [19152, 21113, null], [21113, 22821, null], [22821, 25246, null], [25246, 27923, null], [27923, 30880, null], [30880, 34142, null], [34142, 37087, null], [37087, 40535, null], [40535, 40785, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40785, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40785, null]], "pdf_page_numbers": [[0, 2526, 1], [2526, 5958, 2], [5958, 7817, 3], [7817, 10447, 4], [10447, 13309, 5], [13309, 16483, 6], [16483, 19152, 
7], [19152, 21113, 8], [21113, 22821, 9], [22821, 25246, 10], [25246, 27923, 11], [27923, 30880, 12], [30880, 34142, 13], [34142, 37087, 14], [37087, 40535, 15], [40535, 40785, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40785, 0.23684]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
10d89c6bcd8d918cbab37881985beeae0cc3c238
Lecture 3 - Functions and Control Structures

Range variables vs. vectors: Continuing our discussion of using range variables vs. using vectors. Recall our general rule: range variables will be used to create and reference vectors, and vectors will be used to store our information for manipulation (equations etc.) and graphing.

Mail Box Analogy: Think of the process of storing data like putting stuff into a series of mailboxes. Not only do we need to keep track of what we put into the mailboxes, but we also need to keep track of where the mailboxes are. Range variables will be used to point at mailbox locations. Vectors will be used to represent the contents of the mailboxes. If I want only the stuff stored in the 5th mailbox, I will point to that mailbox to retrieve its contents. This is done using an index, as in \( x_5 \).

IMPORTANT: An index to an array must be an integer. The index represents a discrete location in memory. There is no 3.4th mailbox on the block, and there is no 3.4th spot in a vector. The contents of that discrete spot can be a non-integer, a word, etc., but its location (the index, the subscript) must be an integer.

Range Variables: Indexing an array (pointing to a spot) can be done one spot at a time (with a constant or scalar variable), or multiple spots at one time (with a range variable).

- Use an index to reference specific locations within an array
- e.g. \( i := 4 \)  \( \text{Grade}_i \) OR JUST \( \text{Grade}_4 \) refers to 89.3 in the vector
- The index can be a number, a scalar variable, or a range variable. In any case the index must be an integer, as it is pointing to one or more discretely numbered locations (mailboxes)

Example: Indexing specific spots within a list of grades:

\[ \text{Grade} := \begin{pmatrix} 56 & 72 & 65 & 98 & 84 & 91 \end{pmatrix} \]

I can use a range variable to refer to only the first 3 spots, or some interior spots:

\[ i := 1..3 \quad \text{Grade}_i = 56, 72, 65 \qquad i := 3..5 \quad \text{Grade}_i = 65, 98, 84 \qquad i := 1,3..5 \quad \text{Grade}_i = 56, 65, 84 \]

Example: Using a range variable to calculate the mean value of the same list of grades (the summation comes from the keystroke $):

\[ i := 1..6 \qquad \text{average} := \frac{\sum_{i} \text{Grade}_i}{\text{length}(\text{Grade})} \qquad \text{average} = 77.667 \]

Example: Using a range variable to calculate the mean value of portions of the list of grades:

\[ i := 1..3 \qquad \text{average} := \frac{\sum_{i} \text{Grade}_i}{3} \qquad \text{average} = 64.333 \qquad\qquad i := 1,3..5 \qquad \text{average} := \frac{\sum_{i} \text{Grade}_i}{3} \qquad \text{average} = 68.333 \]

Example: Take the summation of a complete vector to find an average (a different kind of summation). This summation (Ctrl 4) sums the entire vector without need of a range variable:

\[ \text{average} := \frac{\sum \text{Grade}}{\text{length}(\text{Grade})} \qquad \text{average} = 77.667 \]
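The same range-variable averaging can be expressed in a conventional language with an explicit counted loop. A minimal C sketch (illustrative only; note that C arrays are indexed from 0, while the Mathcad examples above start at 1):

```
#include <stdio.h>

/* Average of grade[first..last] (inclusive), stepping by 'step':
   the C analogue of a Mathcad range variable such as i := 1,3..5. */
static double range_average(const double grade[], int first, int last, int step)
{
    double sum = 0.0;
    int count = 0;
    for (int i = first; i <= last; i += step) {  /* the "range variable" */
        sum += grade[i];
        count++;
    }
    return sum / count;
}

int main(void)
{
    double grade[] = {56, 72, 65, 98, 84, 91};
    printf("%.3f\n", range_average(grade, 0, 5, 1)); /* 77.667: whole vector */
    printf("%.3f\n", range_average(grade, 0, 2, 1)); /* 64.333: i := 1..3    */
    printf("%.3f\n", range_average(grade, 0, 4, 2)); /* 68.333: i := 1,3..5  */
    return 0;
}
```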
\[ b := \begin{pmatrix} 3 \\ 9 \\ 2 \\ 4 \end{pmatrix} \quad h := \begin{pmatrix} 5 \\ 1 \\ 8 \\ 2 \end{pmatrix} \qquad \frac{b \cdot h^3}{12} = 120 \;\; \text{(dot product)} \qquad i := 1..3 \quad I_i := \frac{b_i \cdot (h_i)^3}{12} \quad I = \begin{pmatrix} 31.25 \\ 0.75 \\ 85.333 \end{pmatrix} \]

Functions

We had a brief look at creating functions in the previous lecture. These functions will be a necessary part of learning and using the three basic control structures (decisions, counted loops, conditional loops). The examples in the following sections all use control structures written within functions.

Control Structures

Three ways to control the flow of a program by controlling which commands are executed and how many times:
1) Decision (if-statements)
2) Counted loop (for-loops)
3) Conditional loop (while-loops)

If statement - decision (no loop). The single-line form, with the optional 'otherwise' addition to the structure:
```
do this                 if (condition true)     <- single line to execute
do this other thing     otherwise               <- optional addition to structure
```
The multi-statement form:
```
if "condition is true"
    "statement 1"
    "statement 2"
```
For loop - counted loop (repeat statements a pre-determined number of times):
```
for variable ∈ start, next .. end
    "statement 1"
    "statement 2"
```
While loop - conditional loop (repeat statements as long as condition remains true):
```
while "condition is true"
    "statement 1"
    "statement 2"
```
More details and examples: IF-Statements

(Venn diagram of branching omitted.)

Example #1: assigning letter grades

grade ≥ 90: 'A'
90 > grade ≥ 80: 'B'
80 > grade ≥ 70: 'C'
70 > grade ≥ 60: 'D'
grade < 60: 'F'

Write Mathcad code to express the above grade classifications. This structure uses 5 separate if-statements with upper and lower limits; these if-statements-in-series do not interfere with each other. Note the local assignment (←) used within the function:

```
grade(x) := out ← "you get an A"   if (x ≥ 90)
            out ← "you get a B"    if (x ≥ 80) ∧ (x < 90)
            out ← "you get a C"    if (x ≥ 70) ∧ (x < 80)
            out ← "you get a D"    if (x ≥ 60) ∧ (x < 70)
            out ← "you fail"       if (x < 60)
            out
```

Note that the lines that use the function appear beneath the first line of the function:

grade(50) = "you fail"
grade(61) = "you get a D"
grade(75) = "you get a C"
grade(88) = "you get a B"
grade(93) = "you get an A"

- each decision structure is evaluated separately
- if one condition is found true, all other conditions are still checked

Example #2: grades: find the logic error in the structure below

```
grade(x) := out ← "you get an A"   if x ≥ 90
            out ← "you get a B"    if x ≥ 80
            out ← "you get a C"    if x ≥ 70
            out ← "you get a D"    if x ≥ 60
            out ← "you fail"       if x < 60
            out
```

grade(50) = "you fail"
grade(61) = "you get a D"
grade(75) = "you get a D"
grade(88) = "you get a D"
grade(93) = "you get a D"

- several conditions overlap
- the decision structure executes **sequentially**, just like any other command
- a condition found true assigns out, but all later conditions are still evaluated and can overwrite it
- as long as the grade is ≥ 60, the letter grade will always end up 'D'

(Venn diagram for the previous example omitted.)

CONCLUSION: Using a string of if-statements in series must be done cautiously.

Example #3: grades: a working version of example #2. Now we will NEST the if-statements instead of using them in series. In this case, the next if-statement will only be evaluated if the previous one is true (a Python rendering of both versions follows).
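A hedged Python rendering (all names mine) of the buggy series from example #2 next to the nested structure described in example #3 makes the control flow concrete:

```python
# Illustrative Python sketch (not Mathcad; names are mine).
def grade_series(x):
    # example #2: all five ifs run in sequence; later true tests overwrite out
    out = ""
    if x >= 90: out = "you get an A"
    if x >= 80: out = "you get a B"
    if x >= 70: out = "you get a C"
    if x >= 60: out = "you get a D"    # every passing grade lands here last
    if x < 60:  out = "you fail"
    return out

def grade_nested(x):
    # example #3: each inner test runs only if the outer one was true
    if x >= 60:
        out = "you get a D"
        if x >= 70:
            out = "you get a C"
            if x >= 80:
                out = "you get a B"
                if x >= 90:
                    out = "you get an A"
    else:
        out = "you fail"               # the 'otherwise' action
    return out

print(grade_series(93))   # "you get a D"  <- the logic error
print(grade_nested(93))   # "you get an A"
```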
In this way, we won’t overlap assignments.
- each condition is evaluated based on the result of the previous condition
- so if x < 60, none of the IFs are seen, and the ‘otherwise’ action is triggered

Example #4: grades: another working version of example #2. Finally, we can just use the series if-statement approach if we stack the decisions in the reverse order of example #2 (test x ≥ 60 first and x ≥ 90 last, so the last true condition wins).

Quick quiz: when will the following statement be true? (\(\lor\) is the logical symbol for ‘or’)

\[ \text{if } (x \geq 70) \lor (x \leq 80) \]

The choice of ‘and’ / ‘or’ completely changes the meaning of the condition; test your logic before using them.

IF-Function

We’ve just seen some examples of the if-statement. Another version of ‘if’ is the IF-function:

\[ \text{if}(\text{cond}, x, y) \quad \text{returns } x \text{ if logical condition cond is true (non-zero), or } y \text{ otherwise.} \]

It's really a short-cut version of the if-statement we just saw above. You can find out a bit about this through help -> index -> IF function -> Quick Sheet Example.

Here we’ll see that we can use either the if-function or the if-statement to do the same stuff. Following the example, start by defining a range variable from -2 to 2, and creating a function to work with (I’ll show you how to plot in class):

\[ x := -2, -1.9 .. 2 \qquad f(x) := x^2 - 1 \]

Let’s use the two different kinds of decision-if to alter the basic function. We’ll make g(x) and g2(x) do the same thing. Note that this example differs from the others that follow in that we are making decisions based on f(x) instead of x alone (based on the y-axis value rather than the x-axis value).

g(x) is f(x) when f(x) > 0, and 0 otherwise (the two forms are equivalent):

\[ g(x) := \text{if}(f(x) > 0, f(x), 0) \qquad g2(x) := \begin{array}{l} \text{out} \leftarrow f(x) \text{ if } f(x) > 0 \\ \text{out} \leftarrow 0 \text{ otherwise} \end{array} \]

Another example. We’ll make h(x) and h2(x) do the same thing. h(x) is f(x) when x ≥ 1, and −f(x) otherwise:

\[ h(x) := \text{if}(x \geq 1, f(x), -f(x)) \qquad h2(x) := \begin{array}{l} \text{out} \leftarrow f(x) \text{ if } x \geq 1 \\ \text{out} \leftarrow -f(x) \text{ otherwise} \end{array} \]

Another example. We’ll make k(x) and k2(x) do the same thing. k(x) is f(x) when −1 < x < 1, and −f(x) otherwise:

\[ k(x) := \text{if}((x > -1) \land (x < 1), f(x), -f(x)) \qquad k2(x) := \begin{array}{l} \text{out} \leftarrow f(x) \text{ if } (x > -1) \land (x < 1) \\ \text{out} \leftarrow -f(x) \text{ otherwise} \end{array} \]

Another example. We’ll make l(x) and l2(x) do the same thing. l(x) is f(x) when 1 < x or x < −1, and −f(x) otherwise:

\[ l(x) := \text{if}((x < -1) \lor (x > 1), f(x), -f(x)) \qquad l2(x) := \begin{array}{l} \text{out} \leftarrow f(x) \text{ if } (x < -1) \lor (x > 1) \\ \text{out} \leftarrow -f(x) \text{ otherwise} \end{array} \]
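In Python terms (a hedged sketch, names mine), Mathcad's if-function is just a conditional expression:

```python
# Illustrative Python sketch: if(cond, x, y) behaves like "x if cond else y",
# here applied to f(x) = x^2 - 1 as in the lecture's examples.
def f(x):
    return x ** 2 - 1

def g(x):   # f(x) where f(x) > 0, else 0
    return f(x) if f(x) > 0 else 0.0

def k(x):   # f(x) on -1 < x < 1, else -f(x)  -- note the 'and' of two tests
    return f(x) if (x > -1) and (x < 1) else -f(x)

def l(x):   # f(x) when x < -1 or x > 1, else -f(x)
    return f(x) if (x < -1) or (x > 1) else -f(x)

for x in (-2.0, 0.0, 2.0):
    print(x, g(x), k(x), l(x))
```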
Another example. We’ll make m(x) and m2(x) do the same thing. m(x) is f(x) when x < −1, is −f(x) when x > 1, and 0 otherwise:

\[ m(x) := \text{if}(x < -1, f(x), \text{if}(x > 1, -f(x), 0)) \qquad m2(x) := \begin{array}{l} \text{out} \leftarrow f(x) \text{ if } x < -1 \\ \text{out} \leftarrow -f(x) \text{ if } x > 1 \\ \text{out} \leftarrow 0 \text{ otherwise} \end{array} \]

For loop Example #1: Use a for-loop to create a function that generates vectors of numbers.

\[ \text{Create\_Vec}(\text{start}, \text{stop}, \text{inc}) := \begin{array}{l} \text{num} \leftarrow \frac{\text{stop} - \text{start}}{\text{inc}} + 1 \\ \text{for } i \in 1 .. \text{num} \\ \quad \text{out}_i \leftarrow \text{start} + (i - 1) \cdot \text{inc} \\ \text{out} \end{array} \]

\[ \text{Create\_Vec}(0, 2, .5) = \begin{pmatrix} 0 \\ 0.5 \\ 1 \\ 1.5 \\ 2 \end{pmatrix} \qquad \text{Create\_Vec}(-2, 6, 2) = \begin{pmatrix} -2 \\ 0 \\ 2 \\ 4 \\ 6 \end{pmatrix} \]

- The indented line is inside the for-loop. The index ‘i’ increases by one each time through until ‘i’ takes a value greater than ‘num’, at which point the loop stops.

For loop Example #2: Create a function that sums up the values in a vector that is passed in (length is a built-in function that operates on vectors):

\[ \text{sum\_vec}(\text{vec}) := \begin{array}{l} \text{sum} \leftarrow 0 \\ \text{for } i \in 1 .. \text{length}(\text{vec}) \\ \quad \text{sum} \leftarrow \text{sum} + \text{vec}_i \\ \text{sum} \end{array} \]

\[ \text{Grades} := \begin{pmatrix} 98 \\ 84 \\ 71 \\ 88 \\ 56 \end{pmatrix} \qquad \text{summation} := \text{sum\_vec}(\text{Grades}) \qquad \text{summation} = 397 \]

We are keeping a running sum by adding sum to itself plus the next grade on the list each time through the list.

For loop Example #3: Create a function that finds the mean value of all numbers in a list that are greater than 60. In this case, we’ll have to make a decision about each number before we add it to a running sum to calculate the average value. We also have to keep track of how many values go into the sum (how many are greater than 60).

\[ \text{Calc\_Ave}(\text{vec}) := \begin{array}{l} \text{sum} \leftarrow 0 \\ \text{total} \leftarrow 0 \\ \text{for } i \in 1 .. \text{length}(\text{vec}) \\ \quad \text{if } \text{vec}_i > 60 \\ \quad\quad \text{sum} \leftarrow \text{sum} + \text{vec}_i \\ \quad\quad \text{total} \leftarrow \text{total} + 1 \\ \text{average} \leftarrow \frac{\text{sum}}{\text{total}} \end{array} \]

\[ \text{Grades} := \begin{pmatrix} 98 \\ 84 \\ 71 \\ 88 \\ 56 \end{pmatrix} \qquad \text{Calc\_Ave}(\text{Grades}) = 85.25 \]
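A hedged Python sketch of these three for-loop examples (names mine):

```python
# Illustrative Python sketch (not Mathcad) of for-loop examples #1 - #3.
def create_vec(start, stop, inc):
    out, x = [], start
    while x <= stop + 1e-9:      # build [start, start+inc, ..., stop]
        out.append(x)
        x += inc
    return out

def sum_vec(vec):
    total = 0
    for v in vec:                # running sum, like sum <- sum + vec_i
        total += v
    return total

def calc_ave(vec):
    s, n = 0, 0
    for v in vec:
        if v > 60:               # only grades above 60 enter the average
            s += v
            n += 1
    return s / n

grades = [98, 84, 71, 88, 56]
print(create_vec(0, 2, 0.5))     # [0, 0.5, 1.0, 1.5, 2.0]
print(sum_vec(grades))           # 397
print(calc_ave(grades))          # 85.25
```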
For loop Example #4: Now we’ll take the previous example up a notch. Write a program that processes a vector of grades to count how many students pass and how many fail, and calculate the average overall score.

Solution process:
Input: individual student grades
Output: number of failing students, number of passing students, average of all grades

Pseudocode:
enter the vector of student grades
start a loop that executes once for each student
  decide if the grade is passing or failing
    if passing, add one to the passing variable
    if failing, add one to the failing variable
  add the grade to a running total regardless of pass or fail so we can calculate the average
  go back to the top of the loop to get the next grade
average grade is the running total divided by the number of grades entered
output the results (# pass, # fail, average)

\[ \text{Grades} := \begin{pmatrix} 75 \\ 84 \\ 23 \\ 96 \\ 43 \end{pmatrix} \]

\[ \text{class}(x) := \begin{array}{l} \text{sum} \leftarrow 0 \\ \text{pass} \leftarrow 0 \\ \text{fail} \leftarrow 0 \\ \text{for } i \in 1 .. \text{length}(x) \\ \quad \text{pass} \leftarrow \text{pass} + 1 \text{ if } x_i \geq 60 \\ \quad \text{fail} \leftarrow \text{fail} + 1 \text{ otherwise} \\ \quad \text{sum} \leftarrow \text{sum} + x_i \\ \text{ave} \leftarrow \frac{\text{sum}}{\text{length}(x)} \\ \text{out} \leftarrow \begin{pmatrix} \text{fail} \\ \text{pass} \\ \text{ave} \end{pmatrix} \end{array} \]

Why are we setting these three to zero? Because they are running counts, and they must start from zero. A function can only send out one ‘thing’, so we make the output contain all three things we need:

\[ \text{result} := \text{class}(\text{Grades}) \qquad \text{failures} := \text{result}_1 \qquad \text{passes} := \text{result}_2 \qquad \text{average} := \text{result}_3 \]

failures = 2, passes = 3, average = 64.2

For loop Example #5: Consider the cantilevered beam illustrated to the right with a variable valued point load \(P\) at the end. An equation is provided which describes the beam deflection at any point \(x\) along the beam:

\[ \text{defl} = \frac{-P}{6 E I} (3 L x^2 - x^3) \]

The program below uses a for loop to: a) create a vector of x-locations along the length of the beam, b) create a vector with the corresponding deflections, c) add one to a scalar being used to index the vectors being created. NP is the number of points at which we want to calculate the deflection between 0 and the total length \(L\).

```
L := 20    I := 600    E := 29000    P := 20    NP := 10

BeamDefl(L, NP, E, I, P) :=  inc ← L/(NP − 1)
                             i ← 1
                             for x ∈ 0, inc .. L
                                 xaxis_i ← x
                                 defl_i ← (−P/(6·E·I))·(3·L·x² − x³)
                                 i ← i + 1
                             (xaxis defl)ᵀ

results := BeamDefl(20, 10, E, I, P)
```

We've set up the output from the function to contain the two vectors xaxis and defl. They are placed as individual elements in a 2x1 vector. Note that when we display the contents of 'results', it tells us that each of the two elements is a 10x1 vector. Thus 'results' is called a 'data structure'. The difference is that each element in a data structure can be a vector or matrix, not just a scalar. These elements are indexed just like a vector (with a subscript). In order to display and use the contents of the data structure, we need to assign a name to each of the elements in the data structure:

\[ \text{xaxis} := \text{results}_1 \qquad \text{defl} := \text{results}_2 \]

Assigning names to elements in the data structure is done just above the graph.
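A hedged Python sketch (names mine) of BeamDefl returning its "data structure" — here simply a tuple of two lists:

```python
# Illustrative Python sketch: x-locations and cantilever deflections,
# packed into one return value like Mathcad's 2x1 data structure.
def beam_defl(L, NP, E, I, P):
    inc = L / (NP - 1)                    # spacing between sample points
    xaxis, defl = [], []
    x = 0.0
    for _ in range(NP):                   # like: for x in 0, inc .. L
        xaxis.append(x)
        defl.append(-P / (6 * E * I) * (3 * L * x**2 - x**3))
        x += inc
    return xaxis, defl                    # two vectors in one result

xaxis, defl = beam_defl(L=20, NP=10, E=29000, I=600, P=20)
print(xaxis[-1], defl[-1])                # tip location and tip deflection
```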
While loop - conditional loop (decision and loop):
```
while (condition)
    statements inside conditional loop
```
This loop continues to repeat as long as the condition(s) are true. The statements inside the loop must be able to change the variable(s) used in the condition.

While loop Example #1: We have a long vector of numbers, and there is only one value less than zero in that vector. We want to find where in that list the negative value is.

\[ \text{Find\_Negative}(\text{vec}) := \begin{array}{l} \text{stop} \leftarrow 1 \\ \text{location} \leftarrow 1 \\ \text{while } \text{stop} \,\mathbf{=}\, 1 \\ \quad \text{if } \text{vec}_{\text{location}} < 0 \\ \quad\quad \text{out\_location} \leftarrow \text{location} \\ \quad\quad \text{stop} \leftarrow 0 \\ \quad \text{location} \leftarrow \text{location} + 1 \\ \text{out\_location} \end{array} \]

Something New: notice the condition in the while statement above. We are asking if stop is equal to 1 each time through the loop. If that condition is true, the loop is executed again. We are not assigning 1 to the variable stop; we are comparing the existing variable stop to a value. That equal sign is in bold, which means it's a comparison. We get that one by using Ctrl =.

‘location’ is what we are using to index the vector. It has to be integer valued, and is set up to be 1 the first time through the loop, then 2, etc., so that in the if statement we are sequentially comparing the numbers in the vector vs. 0 one at a time. When we find the negative value, the value of ‘location’ is pointing to the place where the negative value resides in the vector, so we save it into ‘out_location’, which is listed at the bottom of the function as the variable that is output from the function. We also re-assign ‘stop’ to 0, which will cause the while loop to stop executing (it only continues while stop = 1). Thus our algorithm will keep looking until it finds a negative value, save the location of that negative value, then exit the loop, and we’re done.
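A hedged Python sketch (names mine) of Find_Negative:

```python
# Illustrative Python sketch: scan until the single negative entry is found,
# then flip the flag variable to make the while loop stop.
def find_negative(vec):
    stop = 1
    location = 0                   # Python indexes from 0; Mathcad from 1
    out_location = None
    while stop == 1:               # a comparison (Ctrl = in Mathcad), not assignment
        if vec[location] < 0:
            out_location = location
            stop = 0               # causes the loop to exit
        location += 1
    return out_location

print(find_negative([5, 12, 7, -3, 9]))   # 3 (0-based; Mathcad would say spot 4)
```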
While loop Example #2: We’ll change the previous example of calculating the deflection of a cantilevered beam so that it is now a design problem. Let’s say that the following parameters are fixed (non-negotiable values): I, NP, P, E. The length of the beam L is to be designed such that the maximum deflection is close to 0.06 inches. Obviously, the longer the beam, the greater the tip deflection. We will iteratively increase the length of the beam until the deflection is greater than 0.06 inches. What we will do is calculate the tip (maximum) deflection starting with a length of 20 inches. If the tip deflection is less than 0.06 inches, we will increase the length of the beam by 5% of its current value. This will be done as many times as necessary until the tip deflection exceeds 0.06 inches.

Pseudocode:
receive the input (length of beam, number of points along the beam)
calculate the maximum deflection at the tip of the beam
if the deflection magnitude is less than 0.06 inches, increase length by 5%
recalculate the tip deflection and repeat using a while loop
now that a length has been chosen so that tip deflection exceeds 0.06 in, calculate the increment
start a loop to calculate the x-axis values and the deflection at these locations
place the output into a data structure

\[ \text{kips} := 1000\,\text{lbf} \qquad \text{ksi} := \frac{\text{kips}}{\text{in}^2} \]
\[ E := 29000\,\text{ksi} \quad P := -20\,\text{kips} \quad I := 50\,\text{in}^4 \quad L := 20\,\text{in} \quad NP := 10 \]

\[ \text{BeamDefl}(L, NP, P, E, I) := \begin{array}{l} \text{maxdefl} \leftarrow \frac{-P \cdot L^3}{3 \cdot E \cdot I} \\ \text{while } |\text{maxdefl}| < 0.06\,\text{in} \\ \quad L \leftarrow 1.05 \cdot L \\ \quad \text{maxdefl} \leftarrow \frac{-P \cdot L^3}{3 \cdot E \cdot I} \\ \text{inc} \leftarrow \frac{L}{NP - 1} \\ i \leftarrow 1 \\ \text{for } x \in 0, \text{inc} .. L \\ \quad \text{xaxis}_i \leftarrow x \\ \quad \text{defl}_i \leftarrow \frac{-P}{6 E I} (3 L x^2 - x^3) \\ \quad i \leftarrow i + 1 \\ \begin{pmatrix} \text{xaxis} \\ \text{defl} \end{pmatrix} \end{array} \]

The while structure adjusts \(L\) until it suits the given constraint (tip deflection just over 0.06 in). The two output vectors are then copied and pasted into the variable names (highlighted in yellow in the worksheet). To plot vectors that have units on a scale with the final units you want, divide the vector name by the unit you want; to the right, both the x-axis and the y-axis are displayed in inches.
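A hedged Python sketch (names mine) of the design iteration, dropping the units bookkeeping:

```python
# Illustrative Python sketch: grow the beam length by 5% until the tip
# deflection of the cantilever exceeds the 0.06 in target.
E = 29000e3        # psi (29000 ksi)
P = -20e3          # lbf (-20 kips)
I = 50.0           # in^4
L = 20.0           # in, starting guess

def tip_deflection(L):
    return -P * L**3 / (3 * E * I)     # the deflection formula evaluated at x = L

while abs(tip_deflection(L)) < 0.06:
    L *= 1.05                          # increase length by 5% and try again

print(L, tip_deflection(L))            # first length whose tip deflection exceeds 0.06 in
```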
Another Example: finding the maximum value in a vector (list of numbers). Given a vector of any size with random numbers inside, write a program that will identify two things:
1) The maximum value in the vector
2) The location (index) of that maximum value in the vector

This is a task that can be accomplished using built-in Mathcad functions. Here we will do it the hard way to learn the logic and learn to use simple control structures.

Solution Procedure: It helps to create a simple example to work with. Let’s work with a vector that contains only six integers:

\[ \text{Grade} := (56 \;\; 72 \;\; 65 \;\; 98 \;\; 84 \;\; 91) \]

Goal: Have Mathcad identify 98 as the maximum value, and find its index location to be 4. We will need the following tools:
a) decisions
b) a loop
c) vector indexing
d) some scalar variables to keep track of decisions

Pseudo-code:
1) input the student grades
2) find out how many grades there are
3) look at each grade and compare it with the other grades
   a) pick the first two, save the bigger of the two as the current max, and save its location
   b) pick the #3 spot and compare it with the current max; if #3 is bigger, save it as the new current max, and save the new location
   c) pick #4 and compare it to the current max, repeat b)
   d) repeat for each remaining grade, updating the current max if a new max is found

Note that c) and d) are repeating the instructions in b) as we iterate the location being compared (loop). Note that b) makes a decision (if current bigger than previous, then...).

Now let’s refine the Pseudocode to reflect these observations:
1) Enter grades and determine the total number
2) Assume the first grade is largest (assign max := grade_1, maxptr := 1)
3) Loop from i = 2 to the final grade:
   compare grade_i with max
   if grade_i > max, the new value for max is grade_i and the new value for maxptr is i
   if grade_i is not > max, move to the next value to compare

Mathcad Code: Now that we have a good pseudocode, we translate it into Mathcad:

\[ \text{findmax}(\text{in}) := \begin{array}{l} \text{maximum} \leftarrow \text{in}_1 \\ \text{maxptr} \leftarrow 1 \\ \text{for } i \in 2 .. \text{length}(\text{in}) \\ \quad \text{if } \text{in}_i > \text{maximum} \\ \quad\quad \text{maximum} \leftarrow \text{in}_i \\ \quad\quad \text{maxptr} \leftarrow i \\ \begin{pmatrix} \text{maximum} \\ \text{maxptr} \end{pmatrix} \end{array} \]

\[ \text{grades} := \begin{pmatrix} 56 \\ 72 \\ 65 \\ 98 \\ 84 \\ 91 \end{pmatrix} \qquad \begin{pmatrix} \text{biggest} \\ \text{location} \end{pmatrix} := \text{findmax}(\text{grades}) \qquad \text{biggest} = 98 \qquad \text{location} = 4 \]

Isn’t saving the location of the maximum, and the max itself, a little redundant? We only need to save the location (index) of the number. Why not just save the location, and retrieve the value when needed? We can, by using the idea of a pointer.

**Pointer**
- A variable whose purpose is to keep track of where a value is within an array, not the value itself
- Must be an INTEGER since it is used as an index to a vector

Let’s re-work the maximum algorithm, but only save the location of the current maximum, not the max itself... using a pointer. Finding a maximum value in a vector using the pointer concept:

\[ \text{findmax}(\text{in}) := \begin{array}{l} \text{maxptr} \leftarrow 1 \\ \text{for } i \in 2 .. \text{length}(\text{in}) \\ \quad \text{maxptr} \leftarrow i \text{ if } \text{in}_i > \text{in}_{\text{maxptr}} \\ \text{maxptr} \end{array} \]

First I find the pointer that points to the location of the biggest value:

\[ \text{pointer\_to\_max} := \text{findmax}(\text{grades}) \]

Then I use the pointer as an index to the array:

\[ \text{biggest} := \text{grades}_{\text{pointer\_to\_max}} \]

This is a little more efficient (fewer steps) and reduces the number of variables needed (‘maximum’ is not used this time).

For the sake of showing off Mathcad’s nice features, let’s use some more advanced built-in functions to solve this problem. ‘max’ is a built-in function that finds the maximum value of a vector:

\[ \text{max}(\text{grades}) = 98 \]

But note that this function does not return the location, just the value. How is this a drawback?
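A hedged Python sketch (names mine) of the pointer version, which also shows why the built-in max is not quite enough:

```python
# Illustrative Python sketch: track only the index of the current maximum,
# never the value itself -- the pointer concept.
def findmax(vec):
    maxptr = 0                         # assume the first entry is largest
    for i in range(1, len(vec)):
        if vec[i] > vec[maxptr]:       # compare through the pointer
            maxptr = i
    return maxptr

grades = [56, 72, 65, 98, 84, 91]
ptr = findmax(grades)
print(ptr, grades[ptr])                # 3 98  (0-based index 3 = Mathcad's spot 4)
print(max(grades))                     # built-in: returns the value only, not its location
```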
{"Source-Url": "https://www.essie.ufl.edu/~kgurl/Classes/Lect3421/L3_Controls_s02.pdf", "len_cl100k_base": 7196, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 41485, "total-output-tokens": 8433, "length": "2e12", "weborganizer": {"__label__adult": 0.000499725341796875, "__label__art_design": 0.0018520355224609375, "__label__crime_law": 0.0005974769592285156, "__label__education_jobs": 0.0562744140625, "__label__entertainment": 0.00022685527801513672, "__label__fashion_beauty": 0.0002982616424560547, "__label__finance_business": 0.0005278587341308594, "__label__food_dining": 0.000759124755859375, "__label__games": 0.0015697479248046875, "__label__hardware": 0.002925872802734375, "__label__health": 0.0008630752563476562, "__label__history": 0.0007610321044921875, "__label__home_hobbies": 0.0006108283996582031, "__label__industrial": 0.0015516281127929688, "__label__literature": 0.0007495880126953125, "__label__politics": 0.0004830360412597656, "__label__religion": 0.0008854866027832031, "__label__science_tech": 0.1851806640625, "__label__social_life": 0.00038814544677734375, "__label__software": 0.0308990478515625, "__label__software_dev": 0.7099609375, "__label__sports_fitness": 0.00046539306640625, "__label__transportation": 0.0011148452758789062, "__label__travel": 0.0003781318664550781}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23866, 0.04402]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23866, 0.66495]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23866, 0.77213]], "google_gemma-3-12b-it_contains_pii": [[0, 2213, false], [2213, 3671, null], [3671, 4750, null], [4750, 4955, null], [4955, 6552, null], [6552, 7376, null], [7376, 8696, null], [8696, 9678, null], [9678, 10732, null], [10732, 12217, null], [12217, 13983, null], [13983, 15748, null], [15748, 17507, null], [17507, 18817, null], [18817, 19739, null], [19739, 21234, null], [21234, 22894, null], [22894, 23866, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2213, true], [2213, 3671, null], [3671, 4750, null], [4750, 4955, null], [4955, 6552, null], [6552, 7376, null], [7376, 8696, null], [8696, 9678, null], [9678, 10732, null], [10732, 12217, null], [12217, 13983, null], [13983, 15748, null], [15748, 17507, null], [17507, 18817, null], [18817, 19739, null], [19739, 21234, null], [21234, 22894, null], [22894, 23866, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23866, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23866, null]], "pdf_page_numbers": [[0, 2213, 1], [2213, 3671, 2], [3671, 4750, 3], [4750, 4955, 4], [4955, 6552, 5], [6552, 7376, 6], [7376, 
8696, 7], [8696, 9678, 8], [9678, 10732, 9], [10732, 12217, 10], [12217, 13983, 11], [13983, 15748, 12], [15748, 17507, 13], [17507, 18817, 14], [18817, 19739, 15], [19739, 21234, 16], [21234, 22894, 17], [22894, 23866, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23866, 0.01111]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
d9f124cd40a5419aa5c6dd934280d373e84085e2
TECHNICAL REPORT

HyperVoice: A Phone-Based CSCW Platform
Paul Resnick
CCS TR # 133, Sloan School WP # 3463-92
August, 1992
CENTER FOR COORDINATION SCIENCE

Acknowledgments
This research was supported by Digital Equipment Corporation, the National Science Foundation (Grant No. IRI-8903034), Matsushita Electric Industrial Co., Boeing, Information Resources, Inc., Electronic Data Systems, Apple Computer Company, and the corporate members of the MIT International Financial Services Research Center.

HyperVoice: A Phone-Based CSCW Platform
Paul Resnick
MIT Center for Coordination Science
1 Amherst Street, Room E40-181
Cambridge, MA 02139
(617) 253-8694
presnick@mit.edu

ABSTRACT
A major shift is underway in how we think about telephones. For decades, they were used solely for one-to-one, synchronous communication. The increasing use of answering machines and voice messaging, however, is shifting the public perception of telephones, thus opening a space for more innovative applications. Five years from now, some of the most interesting and popular cooperative work applications will probably use telephones as the primary means of access. This paper presents evidence that there are practical phone-based cooperative work applications and describes a set of software tools that facilitate the development of such applications.

INTRODUCTION
Telephones are the most ubiquitous, best-networked, and simplest computer terminals available today. This makes telephones an attractive platform for cooperative work applications such as event calendars, issue discussions, task tracking, and question and answer gathering applications, especially those in which local-area network connection of all users cannot be assumed. The limitation of telephones is that they provide only sound for output and only sound and twelve buttons for input. Many skeptics who are familiar with existing telephone interfaces believe that this limitation is so strong as to eliminate the possibility of creating practical phone-based cooperative work applications. A new telephone interface style called Skip and Scan [15], however, opens up the possibility of more complex telephone-based applications, including cooperative work applications. Field trials of several applications, totaling more than 7000 calls, demonstrate that it is possible to build usable and useful phone-based cooperative work applications. The field trials also highlight some of the factors that will influence the success or failure of applications. These factors include the value of the expressiveness of voice, the need for anonymity, the need to remember large chunks of information, and the distribution of costs and benefits among users.

HyperVoice is an application generator for phone-based cooperative work applications. Several features distinguish HyperVoice from other software tools for building telephone applications:

1) The specification language primitives are at a high level of abstraction. An interpreter automatically determines the details of dialogue sequencing and the text of prompts.

2) Messages have internal structure. Using telephone forms, callers can add new information objects (messages) that consist of several fields. The fields can contain recorded voice, typed-in dates, phone numbers, and quantities, or even links to other information objects. Sorting and filtering operations can act on the non-voice fields.
3) Presentation formats are separate from information objects. Multiple presentation formats can present the same information objects in different ways, depending on the context. In addition, callers can add new information objects without specifying details of how to present the information, since existing presentation formats can be reused.

SAMPLE APPLICATION: EVENT CALENDAR
Consider the following scenario for a phone call to an event calendar application. The computer answers the phone and prompts the caller to select one of six event categories. The caller selects the lectures and seminars category. The system tells the caller that there are twelve announcements in this category. The announcements are arranged in the order of the event dates, and the caller then uses two buttons, 9 and 7, to move forward and back through the announcements. Each announcement begins with a headline, so that the caller can quickly decide whether to listen to the rest of the announcement. If the caller chooses to keep listening beyond the headline, several other "fields" are played back, including the date, the time, the location, a contact phone number, and details. The caller can also press #, the "smart" fast-forward button. Unlike an ordinary fast-forward button, which advances a fixed amount of time, the "smart" fast-forward skips to the next logical segment in the recording, in this case the next field of the announcement.

After listening to some of the announcements, the caller presses 3, which initiates the entry of a new announcement. The caller then fills out a "telephone form." Each field of the event announcement becomes one entry blank in the form. The caller can press 9 and 7 to skip back and forth between the entry blanks. In some entry blanks, the caller speaks information. In other entry blanks, such as the one for the date field, the caller presses buttons to enter data, such as 082192 for August 21, 1992. For typed-in data, the system runs validity checks, ensuring, for example, that the caller enters a date in the next 90 days. The caller can review and replace the contents of any of the entry blanks before deciding to save the form. Once saved, the new announcement is added to the lectures and seminars category.

Now consider the scenario of a moderator calling up to make sure that no one has added an inappropriate announcement. Instead of selecting a category, the moderator enters a special code and the system goes to a list of all the event announcements. In this case, however, the announcements are sorted in descending order by the date on which they were entered, so that the newest announcements are at the beginning of the list. When listening to an announcement, the moderator hears all the same fields that regular callers do, plus the 'date added' and 'category' fields. The system automatically had added those fields to each announcement that was posted to the system. If the moderator finds an inappropriate announcement, entry of a special code removes that announcement from the system.

This scenario illustrates a number of important features of applications developed using HyperVoice.

1) The information in the system can come from many sources, since anyone with a touch-tone phone can add a new announcement.

2) The event announcements consist of several pieces. During recording, this allows the system to remind callers of important information to record, and allows callers to re-record smaller pieces when they make mistakes in recording. During playback, it permits callers to use a smart fast-forward button to skip between fields.

3) Some of the fields of event announcements are symbolic rather than recorded, including the date, date added, and category fields. This allows the system to provide validity checks during data entry (sketched after this list) and to sort announcements during playback.

4) There are multiple presentation formats for the same information objects. For example, a moderator hears the announcements sorted in a different order, and with additional fields played back.
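To make the date-entry validity check concrete, here is a hypothetical sketch — not from the paper, which specified such checks as parameters rather than code; all names below are mine:

```python
# Hypothetical sketch of validating a touch-tone date entry such as
# "082192" -> August 21, 1992, accepted only if within the next 90 days.
from datetime import date, timedelta

def parse_touchtone_date(digits: str, today: date) -> date:
    # MMDDYY keypad entry; the century is assumed to match the trial era
    mm, dd, yy = int(digits[0:2]), int(digits[2:4]), int(digits[4:6])
    entered = date(1900 + yy, mm, dd)
    if not (today <= entered <= today + timedelta(days=90)):
        raise ValueError("date must fall within the next 90 days")
    return entered

print(parse_touchtone_date("082192", today=date(1992, 8, 1)))  # 1992-08-21
```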
**HYPERVOICE**

HyperVoice is an application generator for telephone bulletin-board applications. To create an application, a programmer specifies a login procedure, a collection of linked information objects, how to filter and sort the information, and how callers can add new information objects. A pre-processor automatically generates the text of prompts that need to be recorded. At run-time, an interpreter generates state-machines that determine the details of dialogue sequencing.

Existing telephone application toolkits require programmers to specify state-machine representations directly [5, 13]. These tools provide programming environments analogous to HyperCard. By contrast, HyperVoice provides a specification language that abstracts away from many of the details of dialogue sequencing and navigation prompts. It is analogous to recent research on generating screen-based interfaces from higher-level abstractions [6, 17, 18]. This section briefly describes the HyperVoice application generator and presents part of the specification of the event calendar application. For more details, see [14].

**The Language**

A programmer begins by specifying object types. In the event calendar application, for example, there is an event announcement object type that has eight fields: headline, date, time, location, contact number, details, date added, and category. Lists are ordered sequences of objects. The same object can appear in more than one list. In the event calendar, for example, there is one list of events for each category, plus a master list that contains all the announcements.

**Login**

Some applications require login procedures to restrict access and to determine the initial list of information objects to present to different callers. The HyperVoice Login primitive includes two parameters for whether callers need to enter ids and passwords to access the system. If no registration is required, two other parameters specify the initial privileges callers should get and the initial list to present. If registration is required, a parameter specifies a list of User objects. The id and password that a caller enters pick out a User object, which contains fields that specify the initial privileges and list to present.

**Presenting a List**

A List Format specifies how to present a list of objects. The List Format has a number of parameters, as summarized in Table 1. The filter selects a subset of the list to present. The Sort Formats determine the order in which to present the selected objects. The Item Formats specify which fields of the objects to play back, in what order, and whether to play the field names before the contents of the fields. One Item Format specifier is included for each of the object types that can appear in the list. A menu is a special case of presenting a list of objects, as determined by the two parameters, 'how to advance' and 'how to select'.
Callers can advance to the next item either by waiting until the end of the current item (WAIT) or by pressing a button (SKIP), or both. With numeric selection, callers press 1 to select the first object, 2 to select the second and so on. With positional selection, a single button selects the current object. For a discussion of the relative merits of the menu styles that these two parameters can generate, see [14]. The 'Name for Objects' parameter is used in generating the text of prompts that will tell the caller how to navigate through the list. For example, if the parameter has the word "announcement" as its value, a number of prompts will be generated, including, "For the next announcement, press 9," and, "For the previous announcement, press 7." The 'Response List?' parameter determines what recordings will be played back to introduce the list. Each list that is not a response list has its own recording to introduce the contents of the list. Response lists, however, are generated automatically when information objects are added, so that the description of those lists needs to be generated automatically as well. HyperVoice plays the phrase "Responses to," and then the contents of the specified field of the object that the list contains responses to. **Adding New Information by Phone** A List Action specifies how a caller can add new objects. It determines the privileges required to add a new object and the locations in the list from which the addition can be initiated. The List Action also includes an Extension Format specification (Table 2). The Extension Format determines what kind of object to add, which lists to add it to, and where in those lists to add it. Finally, an Extension Format determines initial values for the fields, which fields will appear in the form, maximum recording lengths, and validity checks on typed in data, such as accepting dates only in the next 30 days. <table> <thead> <tr> <th>Parameter Name</th> <th>Alternative Values</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Filter</td> <td></td> <td>Selects a subset of the list to present.</td> </tr> <tr> <td>Sort Formats</td> <td></td> <td>The order in which to present the items.</td> </tr> <tr> <td>Item Formats</td> <td></td> <td>How to play back each item in the list.</td> </tr> <tr> <td>How to Advance</td> <td>SKIP, WAIT, BOTH</td> <td>Whether the caller presses a button to advance to the next item, or just waits.</td> </tr> <tr> <td>How to Select</td> <td>NONE, NUMERIC, POSITIONAL</td> <td>The selection mechanism for menus.</td> </tr> <tr> <td>List Action</td> <td></td> <td>Action that callers can take to add a new information object.</td> </tr> <tr> <td>Name For Objects</td> <td></td> <td>Used in generating prompts.</td> </tr> <tr> <td>Response List?</td> <td>NO, Field Name</td> <td>Is this a list of responses to some other object? If so, the field of the other object to play in the list header.</td> </tr> </tbody> </table> Table 1: The parameters of a List Format specification. 
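As a reading aid, the List Format parameters of Table 1 might be pictured as a record type. This is a hypothetical sketch only — HyperVoice itself was specified through OVAL screen forms, not code, and every name and type below is mine:

```python
# Hypothetical sketch of a List Format record, mirroring Table 1.
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Optional

class Advance(Enum):
    SKIP = "skip"      # caller presses a button to advance
    WAIT = "wait"      # caller waits for the end of the current item
    BOTH = "both"

class Select(Enum):
    NONE = "none"
    NUMERIC = "numeric"        # press 1 for the first item, 2 for the second, ...
    POSITIONAL = "positional"  # a single button selects the current item

@dataclass
class ListFormat:
    filter: Optional[Callable] = None            # selects a subset of the list
    sort_formats: list = field(default_factory=list)
    item_formats: list = field(default_factory=list)  # one per object type
    how_to_advance: Advance = Advance.BOTH
    how_to_select: Select = Select.NONE
    list_action: Optional[object] = None         # how callers add new objects
    name_for_objects: str = "item"               # used to generate prompts
    response_list_field: Optional[str] = None    # None = not a response list

# e.g. navigation prompts generated from 'Name for Objects':
fmt = ListFormat(name_for_objects="announcement")
print(f"For the next {fmt.name_for_objects}, press 9.")
print(f"For the previous {fmt.name_for_objects}, press 7.")
```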
<table> <thead> <tr> <th>Parameter Name</th> <th>Alternative Values</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Add to Current List?</td> <td>YES, NO</td> <td>Whether the new object will be added to the list currently being presented to the caller.</td> </tr> <tr> <td>Other Lists to Add To</td> <td></td> <td>Other lists to which the new object will be added.</td> </tr> <tr> <td>Location to Add</td> <td>END, BEGINNING</td> <td>Add the new object at the end or the beginning of the list(s).</td> </tr> <tr> <td>Type of Object to Add</td> <td></td> <td>The type of the new object.</td> </tr> <tr> <td>Edit Format</td> <td></td> <td>A collection of Field Edit Formats that specify which fields will appear in the form, initial values, validity checks, etc.</td> </tr> </tbody> </table>

Table 2: The parameters of an Extension Format specification.

Example: The Event List Format

Now consider how to specify the presentation of a category of event announcements, as shown in Table 3. It specifies that the announcements be sorted in ascending order by the date of the announcement. The first six fields of each event announcement will be played back, and all except the headline will have the field name played back before the contents. There is a List Action (details not shown) that specifies how a caller can add a new announcement. It specifies that the new announcement will be added both to the current list and to the master list that the moderator uses, as described in the scenario above.

A different List Format (not shown) specifies the presentation of the master list of announcements to the moderator. It specifies that the announcements be sorted in descending order by the 'date added' field, so that the moderator can hear the most recent announcements first. The Item Format in the List Format includes two additional fields, 'category' and 'date added'. Finally, the List Format does not include a List Action, since the moderator will not be adding announcements directly to the master list.

Implementation

Programmers specify the primitives described above by filling in screen-based forms in the OVAL system [8], which runs on a Macintosh. These primitives are then written to a database file and transferred to an IBM-compatible PC. A C language program on the PC reads the application specification from the database file and updates the database file whenever callers add new information objects. A commercial add-in card, Watson, handles speech digitization and touch-tone detection.

SUCCESS FACTORS FOR APPLICATIONS

Previous research on CSCW applications and analysis of the differences between text and voice suggest a number of context variables that will influence the value of HyperVoice applications. Below I discuss both positive and negative factors.

Green Flags

Time-critical information: Once entered over the phone, information is immediately available to other callers. For applications with time-critical information, HyperVoice will be more useful than communication systems such as mass mailings that have longer delays to publication.

Need for access from home or while traveling: One of the great advantages of the telephone is its widespread accessibility. Applications that benefit from entry or retrieval of information from remote sites will be more likely to succeed, since competing technologies cannot match the telephone's accessibility.

Need for expressiveness of voice: Voice is more expressive than text; through tone, pitch, and speed, a speaker can convey much information not conveyed by a text transcription.
Studies of teleconferencing indicate that voice is the single most important channel to include for collaborative tasks [12] and that voice is a better annotation medium for the more complex, controversial, and social aspects of collaborative tasks [4]. Users have weak composition, keyboarding, or reading skills Unlike text-based systems, phone-based systems do not require these skills. Opportunity to create a 'honeymoon period' Most communication systems have increasing returns to adoption: the utility of the system to each user increases as the number of other users increases [2, 10]. To achieve critical mass, these systems need an initial honeymoon period in which users adopt based on expectations of future value, when others have adopted, rather than on the current utility of the system. Any opportunity to create a honeymoon period, through sponsorship by a powerful person or through a big splash introduction of the system, will improve its chances of success. Red Flags Well-entrenched communication patterns Changes from existing patterns are disruptive and people may not perceive the opportunity for better communication patterns. Poor distribution of costs and benefits The distribution of incentives is a well-known problem for CSCW systems [7]. While the benefits to the group as a whole may outweigh the costs, the benefits may accrue to some individuals and the costs to others, who may then refuse to participate. Need for anonymity A person's voice is more easily identified than a person's writing style. Research indicates that the anonymity of bulletin boards can improve participation from shy people [3] and that anonymous suggestions can enhance brainstorming sessions [11]. Using digital signal processing algorithms, it is possible to disguise voices, but only at the cost of losing the expressiveness and some of the intelligibility. Need to scan large information chunks If there is no way to break information chunks into small pieces, then it will take longer to listen to a message than for a good reader to read or scan a written version of it. If, on the other hand, the information divides naturally into small, meaningful segments, then a consistent set of telephone buttons may allow callers to accomplish the fast changes of attention that eye gaze shifts accomplish in visual scanning. This is the idea behind the Skip and Scan interface style [15]. Need to remember large information chunks The presentation of information by phone is ephemeral. If callers need to take information with them, they will have to engage in the tedious process of transcription. FIELD TRIALS HyperVoice has generated working prototypes of many different applications, including a joke collector, several versions of an issue discussion application, event calendars, a question and answer line for teachers, and a task tracking application. These prototype applications demonstrate the utility of the HyperVoice primitives as a specification language for a diverse set of applications. The design process for several of the prototypes also demonstrated the utility of HyperVoice's high-level primitives for discussion of alternative designs with non-programmers. Table 4 summarizes field trials of several of the prototypes. The field trials demonstrate the possibility of usable and useful telephone-based cooperative work applications. Observations from both the successful and unsuccessful field trials are consistent with the success factors described above. 
Issue Discussions In an issue discussion application, callers can not only listen to what others have recorded, but also record responses. Similar to selecting a category of events, a caller to an opinion forum navigates through one or more menus to select a topic. A topic is a list of comments, each having headline and contents fields. Each time a caller adds a comment, an empty list of responses is automatically created. Then, a future caller who listens to a recorded comment presses a button to go to the responses to that comment. The first issue discussion was used as an adjunct to a class discussion. Despite some technical and user interface difficulties, a majority of the class called and a number of them recorded comments, including responses to others' comments. The second issue discussion, on the topic of intellectual property rights on software user interfaces, was demonstrated at the Interactive Experience at CHI '90. Experts on the topic recorded comments in advance. Most conference-goers, however, were unwilling to publicly record an opinion on such a controversial topic since their voices might be recognized. <table> <thead> <tr> <th>Application Name</th> <th>Message Types</th> <th>Participatory</th> <th># of calls</th> <th># of messages added</th> <th>duration of trial</th> </tr> </thead> <tbody> <tr> <td>Issue Discussion 1: class discussion</td> <td>Comment</td> <td>No</td> <td>~20</td> <td>~10</td> <td>4 days</td> </tr> <tr> <td>Issue Discussion 2: Intellectual Property</td> <td>Comment</td> <td>No</td> <td>~150</td> <td>3</td> <td>3 days</td> </tr> <tr> <td>Issue Discussion 3: U-TALK</td> <td>Comment</td> <td>No</td> <td>1030+</td> <td>152+</td> <td>2 months +</td> </tr> <tr> <td>Mandela Task Tracking</td> <td>Status Report</td> <td>Yes</td> <td>1</td> <td>0</td> <td>7 days</td> </tr> <tr> <td>Mandela Event calendar and Volunteer signup</td> <td>Message (unstructured)</td> <td>Yes</td> <td>1378</td> <td>more than 200</td> <td>10 days</td> </tr> <tr> <td>Peace and Justice Events Hotline</td> <td>Event Announcement</td> <td>No</td> <td>4578+</td> <td>more than 300</td> <td>1 year +</td> </tr> <tr> <td>Curriculum Questions and Answers</td> <td>Lesson Plan, Question, Response, Success Story, Meeting Announcement, Comment</td> <td>Yes</td> <td>72 by head teacher; 66 by others</td> <td>57 by head teacher; 10 by others</td> <td>6 weeks</td> </tr> </tbody> </table> Table 4: Summary of field trials. Those marked + are ongoing. The final issue discussion, U-TALK, (pronounced "You talk") was open to the entire MIT community. The letters spell out the internal MIT phone number for the system. I initially hoped for serious discussion of issues such as academic honesty. While some people recorded serious comments, and even one poem, others took advantage of the expressiveness of voice to record music and other entertaining sounds. For example, in response to a question about their worst experience at MIT, two people shouted in unison, "Everything!" To preserve anonymity, some people tried to disguise their voices. **Task Tracking** The people coordinating Nelson Mandela's visit to Boston in June 1990 had the opportunity to use a HyperVoice application to help track the status of plans for events. One or two people recruited from local corporations and non-profit organizations were responsible for each of the approximately ten events. In addition, there were several overall coordinators, including a publicity person and an overall operations coordinator. 
Because they were an ad hoc group, the only technologies that they all had in common were telephones and fax machines. Interestingly, the HyperVoice specification language provided a convenient language for discussing alternative designs with the head of the operations committee. We considered several designs, varying the fields of a status report, who would be allowed to add new status reports, and who would be allowed to listen to them. We decided on one list of status reports for each event, with open access, both reading and writing, to all members of the operations committee. The committee members, however, did not use the phone application. Follow-up telephone calls to them indicate several reasons. First, the system was ready too late in the process. Communication and coordination structures, however inadequate, had already been established. Second, the operations coordinator introduced the system briefly in the middle of a lengthy meeting. Several people could not remember the system being introduced at all. Thus, the introduction process did not create enough excitement to spawn a honeymoon period. Third, there was an incentive distribution problem. The work required to post status reports would have fallen on the event coordinators, while most of them perceived that the benefits would all accrue to the overall operations coordinator. Finally, some of the event coordinators wanted to retain as much control as possible over their events. As a result, they did not want to open their plans to others' scrutiny. **Event Calendars** A more successful HyperVoice application publicized the schedule of events for Nelson Mandela's visit to Boston. In this case, the general public was permitted to add new event announcements. The event organizers, however, added and removed announcements by phone, and this proved to be quite useful for handling last minute changes to the schedule. Printed flyers and newspaper and radio spots advertised the phone number. Besides listening to the schedule of events, callers could also listen to recorded requests for volunteers and leave (unstructured) messages to sign up. A second event calendar, called the peace and justice event hotline, allowed the general public to post announcements using telephone forms. While a few people had trouble with the structured input (they recorded their entire announcements in each entry blank of the telephone form), most had no trouble. Leaflets and announcements at political events around Boston publicized the phone number. During the Gulf War, the emcee at a city-wide rally described it as the best source of up-to-date information about events. While transmission of information about event announcements does not require the expressiveness of voice, a number of people remarked that they liked hearing many different voices on the system. More than a year later, the system is still in active use. **Curriculum Questions and Answers** A group of 38 elementary school math teachers used a HyperVoice application to communicate about a new math curriculum that they were using. The HyperVoice specification language again turned out to be useful in discussing alternative designs with a non-programmer, the head teacher who developed the curriculum. The key decisions again were the structure of information objects and who would be able to listen to and add them. We decided to have publicly accessible lists of success stories, meeting announcements, and general comments, and a list of lesson plans that only the head teacher could add to. 
Each lesson plan had an attached list of questions that any teacher could add to, and each question had an attached list of responses that they could add to. The motivation for this was that the head teacher routinely handled the same question from more than one teacher, causing her to be on the phone for several hours each night. By making the questions and answers public, she hoped not to handle the same question repeatedly. Thus, this application embodies some of the ideas in the Answer Garden system [1]. The question and answer system made use of the ability to provide multiple presentation formats for information objects. The head teacher promised to call in each night and respond to any questions that other teachers had recorded. There were as many as forty different lesson plans on the system at any one time. Without a special method of access, she would have had to check all forty question lists for new questions. Instead, each new question was also added to a master list that she checked. In addition, each new question had a field that contained a pointer to its associated lesson plan. That way, the head teacher could listen to the new question and hear what the associated lesson plan was.

The actual usage differed quite a bit from the planned usage. Almost half the teachers never called. Those who did call listened to the lesson plans that the head teacher posted and listened to the meeting announcements. A few of them recorded success stories or congratulations to other teachers on wedding plans, but only two recorded questions. This application, then, gave the head teacher the additional task of recording lesson plans, without reducing the number of repeat questions that she handled. After six weeks, she stopped recording new lesson plans. A number of plausible explanations can be made of the usage patterns. First, the system was introduced six weeks into the school year, after communication patterns were already set. Second, while the 38 teachers were distributed among many schools, there were usually two or three in each school, and the whole group met for one day a month, so that many teachers may not have felt the need for greater communication. Third, teachers generally do not get much feedback from peers, so that asking publicly about teaching puts them in an unfamiliar and vulnerable position. Thus, the lack of anonymity of recorded voice appeared to be an important factor. Fourth, at approximately ten minutes each, the recorded lesson plans were too long. To absorb all the detail in a recorded lesson plan, teachers would have had to take extensive written notes. Despite that, some teachers did use lesson plans from the phone system in their classrooms.

RELATED WORK

HyperVoice builds on previous research in two areas, telephone-based interfaces and screen-based information-sharing tools. Overall, this project generalizes the notion of semi-structured information objects and presentation formats from text-based systems to a new medium, and develops a telephone interface style that is suited to the entry and presentation of semi-structured objects. One previous project investigated the collection of semi-structured information objects over the telephone [16]. The PhoneSlave conducted conversations with callers to elicit structured answering machine messages. The system asked each caller a series of questions ("Who's calling please?"; "What is this in reference to?"; "At what number can he reach you?"; etc.)
After playing a question, it recorded whatever the caller said, until it detected a long pause, then went on to the next question. The system automatically filled in the date and time of the call. The structured messages were retrieved using graphical tools on a computer screen. HyperVoice generalizes this dialogue to include entry of non-voice data such as dates and even links to other objects. HyperVoice also generates interfaces that take advantage of message structure in presenting information objects for playback by telephone.

A number of asynchronous text-based systems help coordinate the activities of a set of people. Applications have included conversation management [19], task tracking, and scheduling. Malone et al. [9] showed that most such applications can be constructed from the following features: 1) sharing of semi-structured information objects; 2) filters and, more generally, agents, that automatically select some objects for presentation to particular users at particular times; 3) views that specify how to present visually single objects or collections of objects in ways that are helpful to particular users at particular times. HyperVoice generalizes the notion of semi-structured objects in these systems to include voice as a data type. It also generalizes the concept of views, a visual notion, to that of presentation formats, and provides presentation formats that are particularly suited to telephones.

FUTURE RESEARCH

Future plans call for the development and evaluation of additional applications and the integration of two other widely accessible technologies, fax and email. The planned applications include an aid to scheduling meetings and task allocation applications (e.g., a distributed sign-up sheet). Some of these will require extensions to the HyperVoice specification language. To integrate fax and email with the telephone, I am developing an architecture for sharing semi-structured information objects that will allow users to participate with any combination of these technologies, without requiring any individual user to have all three. For example, a user could enter a semi-structured object using a telephone form, but could send in a fax to be the contents of one of the fields, and an email message to be the contents of another. An extension of this idea would be to make the telephone an alternative mode of access to Lotus Notes or Oval [8] databases, for both entry and retrieval.

CONCLUSION

The main message of this paper is that it is possible and worthwhile to use the telephone as a platform for input and retrieval of semi-structured information objects. HyperVoice provides a language that is useful both in considering alternative designs and in specifying applications. The field trials of applications demonstrate that even some prototype-quality systems were usable and useful. The telephone is a worthy platform for further research on CSCW applications and for the development of commercial products.

ACKNOWLEDGMENTS

Tom Malone provided invaluable guidance and assistance throughout this project. Charlie Welch maintains the Peace and Justice event hotline. Wendy Lee did the fundraising and publicity for U-TALK. Victoria Bill, Leslie Clark, Mary Leer, and Lauren Resnick made the teachers' question and answer application possible, and Jolene Galegher and Mark Ackerman helped analyze it. Kevin Crowston, Rob Fichman, Chris Kemerer, Jintae Lee, Wanda Orlikowski, Mike Plusch, Chris Schmandt, Bob Virzi, and Joanne Yates also contributed to the content of this paper.

REFERENCES
{"Source-Url": "http://dspace.mit.edu/bitstream/handle/1721.1/48061/hypervoicephoneb00resn.pdf?sequence=1", "len_cl100k_base": 6965, "olmocr-version": "0.1.50", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 57973, "total-output-tokens": 8781, "length": "2e12", "weborganizer": {"__label__adult": 0.00039076805114746094, "__label__art_design": 0.0009794235229492188, "__label__crime_law": 0.0002989768981933594, "__label__education_jobs": 0.006359100341796875, "__label__entertainment": 0.0001989603042602539, "__label__fashion_beauty": 0.00020623207092285156, "__label__finance_business": 0.0005154609680175781, "__label__food_dining": 0.0003800392150878906, "__label__games": 0.0004911422729492188, "__label__hardware": 0.00879669189453125, "__label__health": 0.0004372596740722656, "__label__history": 0.000530242919921875, "__label__home_hobbies": 0.00018215179443359375, "__label__industrial": 0.0005812644958496094, "__label__literature": 0.0005846023559570312, "__label__politics": 0.0002353191375732422, "__label__religion": 0.0004935264587402344, "__label__science_tech": 0.184814453125, "__label__social_life": 0.00019359588623046875, "__label__software": 0.06695556640625, "__label__software_dev": 0.72509765625, "__label__sports_fitness": 0.00016963481903076172, "__label__transportation": 0.0008630752563476562, "__label__travel": 0.00022232532501220703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39015, 0.02473]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39015, 0.38877]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39015, 0.9189]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 0, null], [0, 0, null], [0, 0, null], [0, 157, false], [157, 157, null], [157, 609, null], [609, 609, null], [609, 4277, null], [4277, 9836, null], [9836, 14820, null], [14820, 18901, null], [18901, 23661, null], [23661, 29247, null], [29247, 34790, null], [34790, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 0, null], [0, 0, null], [0, 0, null], [0, 157, true], [157, 157, null], [157, 609, null], [609, 609, null], [609, 4277, null], [4277, 9836, null], [9836, 14820, null], [14820, 18901, null], [18901, 23661, null], [23661, 29247, null], [29247, 34790, null], [34790, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null], [39015, 39015, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39015, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39015, 
null]], "pdf_page_numbers": [[0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 0, 4], [0, 157, 5], [157, 157, 6], [157, 609, 7], [609, 609, 8], [609, 4277, 9], [4277, 9836, 10], [9836, 14820, 11], [14820, 18901, 12], [18901, 23661, 13], [23661, 29247, 14], [29247, 34790, 15], [34790, 39015, 16], [39015, 39015, 17], [39015, 39015, 18], [39015, 39015, 19], [39015, 39015, 20], [39015, 39015, 21], [39015, 39015, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39015, 0.15951]]}
A Practical Architecture for User Modeling in a Hypermedia-Based Information System

Julita Vassileva
Universität der Bundeswehr München, Institut für Technische Informatik, 85577 Neubiberg, Germany
E-mail: jiv@informatik.unibw-muenchen.de

Abstract

User Modeling is a field of increasing importance for industrial applications, especially for information retrieval from large databases using browsing as a search strategy. Most of the research in this field, however, has been theoretical. We have implemented a new architecture for user modeling based on an analysis of the tasks performed by the users. It allows adaptive browsing support for users with different levels of experience, data protection, and a degree of adaptability according to the preferences of individual users. This architecture was applied in building a user modeling component for a hypermedia-based hospital information system which is now being evaluated experimentally.

Introduction

Browsing is a useful technique for retrieving documents from databases (Thompson & Croft, 1989). It has been widely applied recently as hypertext and hypermedia systems have become increasingly popular (Begoray, 1990). The main cognitive advantage of this technique is that users in general are better able to recognise the information they want than to characterise it in advance. The disadvantages of browsing are that it is easy to get lost in a complex network of nodes representing documents and concepts, and that there is no guarantee that a browsing search will be as effective as a more conventional search. If it offers a rich set of links, the system is responsible for helping the users understand what the links mean, how they might be used, and where they are in the network formed by them. Without this kind of help, browsing can take on the aspect of the user finding his way in a maze, where he can become hopelessly "lost in (hyper)space" (Conklin, 1987). User modeling can help in supporting the user's navigation (Kobsa, 1993).

Our work stems from an industrial project for creating a large hypermedia information system for hospitals. After a long phase of interviews and observations we came up with an architecture of a User Model (UM) which:
• supports novice users by ensuring a smaller browsing space, while providing to experienced users a larger browsing space and possibilities for direct access to the required information;
• combines user modeling with data protection;
• provides adaptiveness and self-improvement;
• provides adaptation at discrete points in time, after the user has been informed;
• provides tools for the users to adapt their user models;
• supports collaborative work.

Application Domain

HYNECOS is an information system for hospital information developed by Siemens AG in co-operation with the University Clinic of Orthopaedics in Heidelberg (Hertwig, 1993). It contains multimedia data about patients, personnel, hospital stations (room plans, beds, and occupancy) and medical concepts. All this information is organised logically following the Hypertext Development Methodology (HDM) for creating hypertext from relational databases (Garzotto et al., 1991). The system is implemented in ToolBook on an IBM PC 486. HYNECOS is still at a prototype stage; it contains only the necessary minimum of information to demonstrate the abilities of the system.
Even during initial testing of the prototype, it became clear that user modeling would be of major importance for the success of the system, since the group of potential users of HYNECOS was very broad and heterogeneous and the amount of information to be offered (the browsing space) was too big. After some initial discussions with users we conceived the idea that the UM should be set in the context of the user's tasks, so that these could be used as a basis for "filtering" information. We created a general scheme of a task hierarchy. With this empty scheme, we interviewed one user, who filled it with the tasks which he typically performs and the information he needs for them. With this example scheme in hand, we interviewed several other users whom we considered typical. They easily interpreted the empty scheme according to their specific tasks. In this way we obtained several schemes reflecting different tasks and information needs. We used these schemes to refine our idea of the architecture of the system and the UM. The process of interviewing helped us draw some conclusions about the restrictions and requirements for user modeling in hypertext browsing for practical applications.

**Specific Problems of User Modeling in Browsing Information Retrieval**

**Acquiring data about the user**

Most of the methods for acquiring data about the user's interests reported in theoretical research papers cannot be used in our application. In principle, the task of finding out the plan or goal of the user in browsing is more difficult than in query-based information retrieval (Kok, 1991): the user's browsing activities can be chaotic and non-sequential and they do not necessarily reflect his goal or task. That is why all known adaptive interfaces for browsing use as evidence other aspects of the user's behaviour that might reflect his interests. For example, (Kaplan et al., 1993) assume that the more time is spent on a unit, the more interesting it is. In other approaches (Thompson & Croft, 1989), (Kok, 1991) the user is asked to estimate the "interestingness" of every unit that is retrieved. In our case, as will be explained later, neither of these methods is realistic. Many other restrictions are posed by a practical application. For example, almost all theoretical approaches treat the hypertext system as modifiable on the basis of the information in the UM, if this is required (Kaplan et al., 1993). However, in our case the hypermedia system is a "given" and it can only be "masked" or "viewed".

**User group identification**

Different approaches for representing a UM for browsing exist: some take a symbolic, logic-based perspective (Kok, 1989), others use connectionist schemes, like associative networks (Belew, 1986), (Kaplan et al., 1993). However, in our case, where there are clearly identifiable categories of users, it is worth taking advantage of this fact. There is no point in using techniques that are appropriate mainly when nothing is known about the interests of the user and only a strongly individual user model can be helpful, as in (Belew, 1986) and (Kaplan et al., 1993). The presence of user groups reduces the need to find evidence about the interests of individual users. This suggests the use of some kind of stereotype model (Rich, 1989), (Chin, 1989). Stereotype approaches, however, are not so widely used for modeling the user's preferences in information retrieval.
The main difficulty in our case is that it is hard to give one systematic classification of users, because the factors influencing the information needs are many — for example, the task performed, the place, the profession and the rank of the user — and combining these factors requires a weighting scheme, i.e., giving higher priorities to some of the factors. The only general solution is to represent explicitly all factors and their possible values in order to ensure coverage of all possible combinations.

**Task-based context for user modeling**

In our domain, as in many other application domains, different user groups have typical tasks and goals for their data retrieval. For example, (Kaplan et al., 1993) also assume a fixed set of user goals; they don't infer the goals from the user's browsing activities. Every goal (task) has specific information needs that provide a context in which the information (topic) needed is known in advance (Tyler & Treu, 1989). In our case, however, the tasks appeared to be decomposable, i.e. hierarchically organised, and their information needs are mutually dependent. That is an important difference from the approach of Hyperflex (Kaplan et al., 1993), where the goals are represented as independent nodes in the hypertext structure, at the same level as the nodes corresponding to the hypertext topics (see figure 1). This means that every new goal introduced in the system is considered to be semantically independent of all other goals, i.e., the weights of the links from this goal to all hypertext topics have to be given explicitly (or learned by the system). In our approach, a new task will be considered first with respect to the other tasks in the hierarchy so that its place can be found, and certain information needs can be ascribed to the task (at least the information needs of its children nodes). This means that comparatively less knowledge engineering effort is needed for assigning weights to the goal-topic links (we therefore do not provide the machine learning capability of Hyperflex).

**Figure 1:** The goal (task) representation: two approaches.

**Direct access vs. browsing**

In some previous studies — e.g., (Thompson & Croft, 1989), (Kaplan et al., 1993) — it has been assumed that the user doesn't know exactly what he is searching for, because it is new information, e.g., news or advice. In our domain, by contrast, the user normally knows what document he needs. The question for him is only how to obtain it conveniently. In principle, for such an application, query-based information retrieval ought to be ideal. But the experience of the clinic with the same documentation represented in a relational database showed that the users experienced serious difficulties when formulating queries; the hypermedia-based prototype had a far higher degree of acceptance. For the real application, however, the chains of documents to be visited became too long and the choices offered were sometimes too confusing. Unlike (Dumais, 1988) and (Furnas, 1986), we decided not to merge the browsing paradigm with a query-based one, but rather to rely solely on user modeling to provide sufficiently direct access to information.

**The user's level of experience with the system**

There was disagreement among users during the interview phase about whether it is better to have a smaller number of directions in which to search at a given time, even if this is restrictive, or always to be able to access the desired information directly.
Users who had not previously seen the system preferred to have a small set of links available at any moment, so that making each choice would be easier. Users who knew the system better felt comfortable with the unrestricted interface provided by the prototype, which had no UM. These two requirements conflict, and a compromise can be found only if we explicitly represent the factor "experience" as a way to determine the degree of direct vs. browsing access to the information. We had to choose whether to make the system adaptive with respect to this factor — i.e. to create means for the system to infer the level of experience of the user from his browsing — or to make it adaptable — i.e. to provide means for the user to change the level of experience in his UM when he wants. We decided to implement both.

**Too much adaptation is not always an advantage**

Users need to have a coherent mental model of the system. A system that is constantly adapting, even if this is supposed to be happening for their own benefit, makes them feel uncomfortable and decreases their confidence. We came to this conclusion when we observed users working on different versions of HYNECOS. We believe the policy usually applied to new software releases (which are announced in advance) is accepted by users much better: they need to know what is going to be changed in the system and why. In the medical domain it is sometimes of vital importance to access data quickly; the users therefore want to be able to have absolute confidence in their system. Another reason not to strive for maximal individualisation is that the system also serves communicative functions. For example, it can be used on the same computer by two or three doctors and several nurses. In order not to confuse and impair the communication between them, the system should look the same for users performing the same tasks. That is why the design of the UM should provide for creation and adaptation by a group of users (not necessarily a homogeneous user class) which is going to work together in a team. In other words, the system should support collaborative work by means of a "group user model".

**Data protection**

With the already existing noncomputerised medical documentation, doctors, nurses and students were free to examine the files with the patient data. However, all of the interviewed users agreed that there must be data protection in the electronic version of the patient data, especially because far more people will have access to it. Data protection is an important issue which so far has not been considered in connection with user modeling in information retrieval. Normally, every information-rich database has different user rights of access. User modeling can help to ensure data protection from unauthorised access.

**An Architecture for User Modeling**

The proposed architecture for user modeling can be described as a three-layer structure to be added on top of the hypermedia system (figure 2). The first layer contains representations of the tasks performed by the users. They define the "views" of the hypermedia and provide the main context in which a specific UM is situated. The second layer contains information about the user classes. It provides specific constraints on the rights of access to information and requirements for the form of presentation. The third layer contains the individual user models. Every individual UM contains additional information not implied by the user class, for example the user's level of experience.
Figure 2: Architecture for User Modeling

**Task hierarchies**

The typical tasks performed by the users that involve work with the information system are represented as hierarchies. Every task implies specific information needs, i.e. *entities* in HDM terminology (topics, nodes). When the user is performing a relatively specific task (lower in the hierarchy), he sees a limited view over the hypermedia, one which is relevant to the task; moving up in the task hierarchy, he gets wider views (cf. figure 3).

**Ways of defining views over the hypermedia**

A task-dependent view can be defined in two ways:
- "Free browsing with an anchor" — i.e., providing links to the entities needed for the task, and allowing the user to browse further following the standard hypermedia links from these entities. In this way, the task serves as a sort of anchor. It provides the starting points in browsing to which users can always come back if they get lost.
- "Restricted browsing" — i.e., allowing the user to browse only within the entities linked to one task. In this way the normal hypermedia links leading outside of the view are disabled ("masked"). If the task is a high-level one, the browsing space includes the browsing spaces of its sub-tasks.

Figure 3: Architecture of the User Model.

We expected that this second way of viewing would be preferred by novice users, as they would prefer to work on one task at a time. The experimental results with a group of novice users showed, however, that about half of them preferred to be able to browse freely, following the logical links of the hypermedia, provided they have an anchor to return to. We offer two possible explanations, which were informally supported in additional interviews and tests: 1) The larger part of the users were not actually executing the task they had chosen, but rather working simultaneously on several tasks. After switching to a task one level higher, they no longer felt restricted in their browsing. This shows that the level of experience had not been set appropriately for them. 2) A small proportion of the users could be seen to be unable to formulate the information needs of the current task exactly. They were searching for information that they didn't actually need in order to complete the task. It was recognised, however, by people from both groups that the restrictive way of defining a view improves task performance, saves unnecessary browsing and generally helps them organise their work better. The question of which way of defining a view is better can be related to the general question of the degree of the system's adaptation to the user versus the user's adaptation to the system. We left this question to be answered by the users, by allowing them to choose the viewing style themselves.

**Task-determined rights to modify information**

The tasks define not only rights to access information, but also rights to modify it. One way to reduce the risk of data corruption and loss is to associate rights to modify data only with the tasks that are expected to change this data. For example, during the task of "Administering therapy", the nurse needs to know the patient's diagnosis, but the diagnosis is only supposed to be changed during performance of the task "Making a diagnosis". Other forms of data protection are implemented at the user-class level, as discussed in the next section.
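To make the task-based view mechanism concrete, here is a minimal sketch in Java of how a task hierarchy could determine a browsing space. All class and method names are hypothetical (they are not taken from HYNECOS), and the hypermedia link structure is reduced to a plain adjacency map.

```java
import java.util.*;

// Hypothetical sketch of task-based views; not the actual HYNECOS code.
enum ViewStyle { FREE_BROWSING_WITH_ANCHOR, RESTRICTED_BROWSING }

class Task {
    String name;
    Set<String> informationNeeds = new HashSet<>(); // entity ids needed by this task
    List<Task> subTasks = new ArrayList<>();

    // A high-level task's browsing space includes the spaces of its sub-tasks.
    Set<String> browsingSpace() {
        Set<String> space = new HashSet<>(informationNeeds);
        for (Task t : subTasks) space.addAll(t.browsingSpace());
        return space;
    }
}

class Hypermedia {
    // entity id -> ids reachable via standard hypermedia links
    Map<String, Set<String>> links = new HashMap<>();

    // Which links are enabled from 'entity' under the given task and view style?
    Set<String> enabledLinks(String entity, Task task, ViewStyle style) {
        Set<String> out = links.getOrDefault(entity, Set.of());
        if (style == ViewStyle.RESTRICTED_BROWSING) {
            // "Masked": only links staying inside the task's view remain enabled.
            Set<String> filtered = new HashSet<>(out);
            filtered.retainAll(task.browsingSpace());
            return filtered;
        }
        // Free browsing: all links enabled; the task's entities serve as anchors.
        return out;
    }
}
```

Under this reading, the "anchor" in free browsing is simply the task's browsing space, to which the interface can always offer a way back.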
**User classes**

The user population can be divided into several overlapping user classes with different information needs, rights of access to information and appropriate forms of presentation. The factors that define the user class in our case are: profession (doctor, nurse, manager, student, or patient), location (ambulance or station), and rank (up to 5 stages depending on the profession). A user class is characterised by a combination of values of these factors. Every user class can be related to a different set of task hierarchies or isolated low-level tasks. Normally, there is inheritance in the task hierarchies of classes with the same profession. For example, a chief doctor at the station has to perform all the tasks of a doctor plus the task of station management. Rights to access and modify information are largely dependent on user class. Higher-ranking users may have two types of special rights: 1) The right to access a larger amount of data which is not included in the task hierarchy of the user class. Such a user can access all data, except where access is explicitly prohibited for his or her user class. 2) The right to modify data even when this is not allowed by the task which is currently being performed. Rights to access or modify data can also be restricted for certain classes of users. For example, patients are restricted to getting only information from their own files and from the medical concept base. Students are not allowed to see personal data or to modify any information. The user class can also imply specific presentation needs. If there are several alternative representations of the same entity, the one is chosen that is considered to fit best the needs of the user class. Such presentation preferences can, however, be changed in the individual UM.

**Individual User Models**

A user class serves as a "kernel" to which many individual user models are related (see figure 2). Every individual UM contains parameters which specify the user's level of experience and his or her requirements with respect to the task hierarchy, the style of "viewing", the form of presentation, and the screen layout. Where some parameters are missing, the corresponding values are inherited from the user class. The individual UM can be changed both by the user and by the system (after consulting the user) when nonoptimal performance is observed.

**Figure 4:** The user's level of experience.

Level of experience is a parameter of the individual UM which determines at what level of every branch of his task hierarchy the user can get access to the hypermedia entities (see figure 4). The specific rights of an individual user to access and modify data — which may differ from those determined by the user class — are determined by a set of parameters. The individual UM also contains parameters corresponding to the user's special presentation preferences and screen layout.

**Basic Features of the System**

**Context**

In HYNECOS, the context of interaction between the user and the system is defined when the browsing space is limited according to the task selected from the task hierarchy. An experienced user can immediately get the view over the entire hypermedia system. A novice will be guided through the task hierarchy until he reaches a level that he has shown an ability to cope with; then he will be given the corresponding smaller view.
By gradually increasing the navigation space as he moves up in the task hierarchy, the user is always interacting with the system in an appropriate context.

**Adaptation**

Our interpretation of the notion of adaptation is slightly different from that of (Croft, 1984). We believe that the system must not only "make available those tools which are relevant to the current task", but also be able to change dynamically according to the user's changing needs (not only with respect to the tasks) in order to continually maintain the appropriate context for interaction. Strongly adaptive systems, however, threaten the user with a loss of control, and their users have difficulties in developing coherent models of them (Fischer, 1992). That is why we decided that the system's adaptation to the user's needs has to be carried out not continuously, but at discrete points in time, and only after the user has given specific permission. Because one of the most important time-dependent factors is the user's experience (Norcio & Stanley, 1989), the system must have means for detecting and reacting to changes in the user's level of experience. The user's navigation actions are recorded, and if patterns are found in them which imply that the user's proficiency has increased, a flag is set indicating that it seems appropriate to increase the user's recorded level of experience. For example, if he goes down the task hierarchy, reaches his current level of experience and, without browsing in the information space provided, immediately clicks on the "task completed" button, this is a sign that he wants to get to the higher level and be allowed a broader view of the hypermedia. When there is evidence that the user's recorded level of experience is too low (or too high), the system changes it, after obtaining the user's consent. Data is also collected about particular types of representation that are retrieved especially often. After a threshold has been exceeded, the user is asked whether he really prefers the type of representation in question. On the basis of his answer, the parameters of his individual preferences are updated. Similarly, the system collects statistics about nonoptimal behaviour of the users in different classes. This information is used for revision of the task hierarchies and the links from the tasks to the information entities. In this way the system can improve its task hierarchies over time.

**Adaptability and Group User Models**

User-adaptable systems support users in modifying systems according to their own needs (Fischer, 1992). Adaptability is a typical feature of systems for adaptive browsing (Thompson & Croft, 1989), (Tyler & Treu, 1989), (Belew, 1989), (Kok, 1991), (Kaplan et al., 1993). Our architecture for user modeling allows the user to adapt his individual model in the following ways:
• Modification of the individual task hierarchy. The user can define a task hierarchy of his own. The user is supported here with a library of *task aggregates* (i.e., parts of task hierarchies and their information needs) from which he can cut and paste to alter his own task hierarchy.
• Creation of new tasks. The user can define his own tasks and add them to the library of task aggregates. For this purpose he is provided with means to select information entities from the hypermedia by browsing and to link them to the task which he wants to create.
During the modification of the task hierarchy and the creation of new tasks, the user's rights of access to information cannot be changed, since the "forbidden" entities are specified in his user class and are therefore invisible to the user. However, he can extend his browsing space and make it more convenient for search.
• Selecting the style of viewing. The user can select the "free browsing with an anchor" style or the task-restricted browsing.
• Changing the recorded level of experience. This can be done by the user explicitly, i.e. without waiting for the adaptation mechanism to suggest a change in the level.
• Changing the presentation preferences. The user can change the values of these parameters directly in his individual model.

An important consequence of the availability of tools for adaptation by the user is that it becomes possible for the users to build group models to support cooperative work. They have to find an agreement about the group task hierarchy, the style of viewing, the level of experience and the type of presentation.

**Conclusions**

User modeling is a field of increasing importance for industrial applications, especially for information retrieval from large databases using browsing as a search strategy. We propose an architecture for user modeling based on an empirical analysis of the tasks performed by the users. It ensures adaptive browsing support for users with different levels of experience, data protection, and a level of adaptability according to the preferences of individual users. This architecture was applied in the development of a user modeling facility for a hypermedia-based hospital information system. Currently we are experimenting with the system with four different classes of users, and the results are very encouraging.

**Acknowledgements**

I am grateful to Ralph Deters, Karin Hertwig, and Dr. Krämer for helpful discussions, to the anonymous reviewers for their comments, and to Anthony Jameson for helping with the English.

References
{"Source-Url": "http://julita.usask.ca/texte/UM94.pdf", "len_cl100k_base": 5288, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18470, "total-output-tokens": 6512, "length": "2e12", "weborganizer": {"__label__adult": 0.0004622936248779297, "__label__art_design": 0.0023517608642578125, "__label__crime_law": 0.0006322860717773438, "__label__education_jobs": 0.004772186279296875, "__label__entertainment": 0.0001589059829711914, "__label__fashion_beauty": 0.0002846717834472656, "__label__finance_business": 0.000579833984375, "__label__food_dining": 0.0005612373352050781, "__label__games": 0.0005931854248046875, "__label__hardware": 0.0019283294677734375, "__label__health": 0.0033664703369140625, "__label__history": 0.0005803108215332031, "__label__home_hobbies": 0.00014925003051757812, "__label__industrial": 0.0010137557983398438, "__label__literature": 0.0006647109985351562, "__label__politics": 0.0002779960632324219, "__label__religion": 0.0006437301635742188, "__label__science_tech": 0.345703125, "__label__social_life": 0.00015223026275634766, "__label__software": 0.07684326171875, "__label__software_dev": 0.55712890625, "__label__sports_fitness": 0.00028705596923828125, "__label__transportation": 0.0007290840148925781, "__label__travel": 0.0002944469451904297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28604, 0.01738]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28604, 0.30376]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28604, 0.93766]], "google_gemma-3-12b-it_contains_pii": [[0, 3862, false], [3862, 8982, null], [8982, 13962, null], [13962, 18843, null], [18843, 23900, null], [23900, 28604, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3862, true], [3862, 8982, null], [8982, 13962, null], [13962, 18843, null], [18843, 23900, null], [23900, 28604, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28604, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28604, null]], "pdf_page_numbers": [[0, 3862, 1], [3862, 8982, 2], [8982, 13962, 3], [13962, 18843, 4], [18843, 23900, 5], [23900, 28604, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28604, 0.0]]}
CSE373: Data Structures & Algorithms
Lecture 9: Priority Queues
Aaron Bauer, Winter 2014

**Midterm**
- On Wednesday, in class
- Closed book
- Closed note
- Closed electronic devices
- Closed classmate
- Covers everything up through priority queues and binary heaps
  - does not include AVL tree delete
  - does not include the proof that an AVL tree has logarithmic height

**Review**
- Priority Queue ADT: `insert` comparable object, `deleteMin`
- Binary heap data structure: complete binary tree where each node has a priority value greater than its parent's
- \( O(\text{height-of-tree}) = O(\log n) \) `insert` and `deleteMin` operations
  - `insert`: put at new last position in tree and percolate up
  - `deleteMin`: remove root, put last element at root and percolate down
- But: tracking the "last position" is painful, and we can do better

**Array Representation of Binary Trees**

From node $i$:
- left child: $i \times 2$
- right child: $i \times 2 + 1$
- parent: $i / 2$

(wasting index 0 is convenient for the index arithmetic)

Implicit (array) implementation:

<table>
<thead>
<tr><th>index</th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th><th>10</th><th>11</th><th>12</th></tr>
</thead>
<tbody>
<tr><td>contents</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td></tr>
</tbody>
</table>

**Pseudocode: insert**

```java
void insert(int val) {
  if (size == arr.length - 1) resize();
  size++;
  i = percolateUp(size, val);
  arr[i] = val;
}
```

```java
int percolateUp(int hole, int val) {
  // Move the hole up while the new value is smaller than the parent.
  while (hole > 1 && val < arr[hole / 2]) {
    arr[hole] = arr[hole / 2];
    hole = hole / 2;
  }
  return hole;
}
```

This pseudocode uses ints. In real use, you will have data nodes with priorities.

**Pseudocode: deleteMin**

```java
int deleteMin() {
  if (isEmpty()) throw...
  ans = arr[1];
  hole = percolateDown(1, arr[size]);
  arr[hole] = arr[size];
  size--;
  return ans;
}
```

```java
int percolateDown(int hole, int val) {
  while (2 * hole <= size) {
    left = 2 * hole;
    right = left + 1;
    // Pick the smaller child as the target.
    if (right > size || arr[left] < arr[right])
      target = left;
    else
      target = right;
    if (arr[target] < val) {
      arr[hole] = arr[target];
      hole = target;
    } else
      break;
  }
  return hole;
}
```

Example heap (index 0 unused):

<table>
<thead>
<tr><th>index</th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th><th>10</th><th>11</th><th>12</th><th>13</th></tr>
</thead>
<tbody>
<tr><td>contents</td><td></td><td>10</td><td>20</td><td>80</td><td>40</td><td>60</td><td>85</td><td>99</td><td>700</td><td>50</td><td></td><td></td><td></td><td></td></tr>
</tbody>
</table>

**Example**
1. insert: 16, 32, 4, 67, 105, 43, 2
2. deleteMin

After inserting 16, 32, 4, 67, 105, 43 the array is:

<table>
<thead>
<tr><th>index</th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th></tr>
</thead>
<tbody>
<tr><td>contents</td><td></td><td>4</td><td>32</td><td>16</td><td>67</td><td>105</td><td>43</td><td></td></tr>
</tbody>
</table>

(In tree form: 4 at the root with children 32 and 16; 32 has children 67 and 105; 16 has child 43.)
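To see the pseudocode above run end to end, here is a small self-contained Java version of the binary min-heap. It is a sketch following the slide pseudocode, not the course's official code; the class name and initial capacity are arbitrary choices. The `main` method runs the example sequence from the slides.

```java
// Minimal runnable min-heap following the slide pseudocode (index 0 unused).
public class MinHeap {
    private int[] arr = new int[8];
    private int size = 0;

    public void insert(int val) {
        if (size == arr.length - 1) {             // full: double the array
            int[] bigger = new int[arr.length * 2];
            System.arraycopy(arr, 0, bigger, 0, arr.length);
            arr = bigger;
        }
        size++;
        int hole = size;
        while (hole > 1 && val < arr[hole / 2]) { // percolate up
            arr[hole] = arr[hole / 2];
            hole = hole / 2;
        }
        arr[hole] = val;
    }

    public int deleteMin() {
        int ans = arr[1];
        int val = arr[size--];                    // last element fills the hole
        int hole = 1;
        while (2 * hole <= size) {                // percolate down
            int target = 2 * hole;                // left child
            if (target + 1 <= size && arr[target + 1] < arr[target])
                target++;                         // right child is smaller
            if (arr[target] < val) {
                arr[hole] = arr[target];
                hole = target;
            } else break;
        }
        arr[hole] = val;
        return ans;
    }

    public static void main(String[] args) {
        MinHeap h = new MinHeap();
        for (int x : new int[]{16, 32, 4, 67, 105, 43, 2}) h.insert(x);
        System.out.println(h.deleteMin());        // prints 2
        System.out.println(h.deleteMin());        // prints 4
    }
}
```

After inserting all seven values the array is [_, 2, 32, 4, 67, 105, 43, 16], so the first `deleteMin` returns 2 and restores the heap shown in the table above.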
**Other operations**
- **decreaseKey**: given a pointer to an object in the priority queue (e.g., its array index), lower its priority value by \( p \)
  - Change the priority and percolate up
- **increaseKey**: given a pointer to an object in the priority queue (e.g., its array index), raise its priority value by \( p \)
  - Change the priority and percolate down
- **remove**: given a pointer to an object in the priority queue (e.g., its array index), remove it from the queue
  - **decreaseKey** with \( p = \infty \), then **deleteMin**

Running time for all these operations?

**Build Heap**
- Suppose you have $n$ items to put in a new (empty) priority queue
  - Call this operation `buildHeap`
- $n$ inserts works
  - Only choice if the ADT doesn't provide `buildHeap` explicitly
  - $O(n \log n)$
- Why would an ADT provide this unnecessary operation?
  - Convenience
  - Efficiency: an $O(n)$ algorithm called Floyd's Method
  - Common issue in ADT design: how many specialized operations

**Floyd's Method**
1. Use the $n$ items to make any complete tree you want
   - That is, put them in array indices 1,...,$n$
2. Treat it as a heap and fix the heap-order property
   - Bottom-up: leaves are already in heap order, work up toward the root one level at a time

```java
void buildHeap() {
  for (i = size/2; i > 0; i--) {
    val = arr[i];
    hole = percolateDown(i, val);
    arr[hole] = val;
  }
}
```

**Example**
- In tree form for readability; purple marks a node not less than its descendants, i.e., a heap-order problem (figures omitted)
- Notice no leaves are purple
- Check/fix each non-leaf bottom-up (6 steps here):
  - Step 1: happens to already be less than its children
  - Step 2: percolate down (notice that this moves 1 up)
  - Step 3: another nothing-to-do step
  - Steps 4a and 4b: percolate down as necessary
  - Steps 5 and 6: likewise

**But is it right?**
- "Seems to work"
  - Let's prove it restores the heap property (correctness)
  - Then let's prove its running time (efficiency)

**Correctness**

**Loop invariant:** for all \(j > i\), \(\text{arr}[j]\) is less than its children
- True initially: if \(j > \text{size}/2\), then \(j\) is a leaf
  - Otherwise its left child would be at a position \(> \text{size}\)
- True after one more iteration: the loop body and `percolateDown` make \(\text{arr}[i]\) less than its children without breaking the property for any descendants

So after the loop finishes, all nodes are less than their children.

**Efficiency**

Easy argument: `buildHeap` is $O(n \log n)$ where $n$ is `size`
- `size/2` loop iterations
- Each iteration does one `percolateDown`, each of which is $O(\log n)$

This is correct, but there is a more precise ("tighter") analysis of the algorithm...

Better argument: `buildHeap` is $O(n)$ where $n$ is `size`
- `size/2` total loop iterations: $O(n)$
- 1/2 of the loop iterations percolate at most 1 step
- 1/4 of the loop iterations percolate at most 2 steps
- 1/8 of the loop iterations percolate at most 3 steps
- ...
- \( \left( \frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \frac{4}{16} + \frac{5}{32} + \cdots \right) < 2 \) (page 4 of Weiss)
- So at most \(2 \cdot (\texttt{size}/2)\) total percolate steps: $O(n)$

**Lessons from buildHeap**
- Without `buildHeap`, our ADT already lets clients implement their own in $O(n \log n)$ worst case
  - The worst case is inserting lower priority values later
- By providing a specialized operation internal to the data structure (with access to the internal data), we can do $O(n)$ worst case
  - Intuition: most data is near a leaf, so it is better to percolate down
- Can analyze this algorithm for:
  - Correctness: a non-trivial inductive proof using a loop invariant
  - Efficiency: the first analysis easily proved it was $O(n \log n)$; a tighter analysis shows the same algorithm is $O(n)$

**Other branching factors**
- $d$-heaps: have $d$ children instead of 2
  - Makes heaps shallower, useful for heaps too big for memory (or cache)
- Homework: implement a 3-heap
  - Just have three children instead of 2
  - Still use an array with all positions from 1...heap-size used

<table>
<thead>
<tr>
<th>Index</th>
<th>Children Indices</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2,3,4</td>
</tr>
<tr>
<td>2</td>
<td>5,6,7</td>
</tr>
<tr>
<td>3</td>
<td>8,9,10</td>
</tr>
<tr>
<td>4</td>
<td>11,12,13</td>
</tr>
<tr>
<td>5</td>
<td>14,15,16</td>
</tr>
<tr>
<td>…</td>
<td>…</td>
</tr>
</tbody>
</table>

**What we are skipping**
- **merge**: given two priority queues, make one priority queue (see the sketch after this list)
  - How might you merge binary heaps:
    - If one heap is much smaller than the other?
    - If both are about the same size?
  - Different pointer-based data structures for priority queues support a logarithmic time `merge` operation (impossible with binary heaps)
    - Leftist heaps, skew heaps, binomial queues
    - Worse constant factors
    - Trade-offs!
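For the same-size case, one plausible answer (a sketch, not taken from the slides) is to concatenate the two element ranges and rebuild with Floyd's method, giving an $O(n_1 + n_2)$ merge; the `buildHeap` loop here is exactly the one shown above.

```java
// Hypothetical merge of two binary heaps of similar size: concatenate the
// element ranges (indices 1..size) and run Floyd's buildHeap, O(n1 + n2).
// If one heap is much smaller, inserting its elements one by one into the
// larger heap (O(n_small * log n_large)) may be cheaper.
static int[] merge(int[] a, int sizeA, int[] b, int sizeB) {
    int size = sizeA + sizeB;
    int[] arr = new int[size + 1];                 // index 0 unused
    System.arraycopy(a, 1, arr, 1, sizeA);
    System.arraycopy(b, 1, arr, sizeA + 1, sizeB);
    for (int i = size / 2; i > 0; i--) {           // Floyd's method
        int val = arr[i], hole = i;
        while (2 * hole <= size) {                 // percolate down
            int target = 2 * hole;
            if (target + 1 <= size && arr[target + 1] < arr[target]) target++;
            if (arr[target] < val) { arr[hole] = arr[target]; hole = target; }
            else break;
        }
        arr[hole] = val;
    }
    return arr;
}
```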
**Amortized**
- Recall our plain-old stack implemented as an array that doubles its size if it runs out of room
  - How can we claim **push** is \( O(1) \) time if resizing is \( O(n) \) time?
  - *We can't*, but we **can** claim it's an \( O(1) \) **amortized operation**
- What does amortized mean?
- When are amortized bounds good enough?
- How can we prove an amortized bound?

Will just do two simple examples
- The text has more sophisticated examples and proof techniques
- The *idea* of how amortized describes average cost is essential

**Amortized Complexity**

If a sequence of $M$ operations takes $O(M \cdot f(n))$ time, we say the amortized runtime is $O(f(n))$.

Amortized bound: a worst-case guarantee over sequences of operations
- Example: if any $n$ operations take $O(n)$, then amortized $O(1)$
- Example: if any $n$ operations take $O(n^3)$, then amortized $O(n^2)$
- The worst-case time per operation can be larger than $f(n)$
  - As long as the worst case is always "rare enough" in any sequence of operations

An amortized guarantee ensures the average time per operation for any sequence is $O(f(n))$.

**"Building Up Credit"**
- Can think of preceding "cheap" operations as building up "credit" that can be used to "pay for" later "expensive" operations
- Because any sequence of operations must be under the bound, enough "cheap" operations must come *first*
  - Else a prefix of the sequence, which is also a sequence, would violate the bound

**Example #1: Resizing stack**

A stack implemented with an array where we double the size of the array if it becomes full.

Claim: any sequence of push/pop/isEmpty is amortized $O(1)$.

Need to show that any sequence of $M$ operations takes time $O(M)$
- Recall the non-resizing work is $O(M)$ (i.e., $M \times O(1)$)
- The resizing work is proportional to the total number of element copies we do for the resizing
- So it suffices to show that: after $M$ operations, we have done $< 2M$ total element copies (so the average number of copies per operation is bounded by a constant)

**Amount of copying**

After $M$ operations, we have done $< 2M$ total element copies.

Let $n$ be the size of the array after $M$ operations
- Then we have done a total of
  $$\frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \cdots + \text{INITIAL\_SIZE} < n$$
  element copies
- Because we must have done at least enough `push` operations to cause resizing up to size $n$:
  $$M \geq \frac{n}{2}$$
- So
  $$2M \geq n > \text{number of element copies}$$

A copy-counting demonstration of this bound follows the next list.

**Other approaches**
- If the array grows by a constant amount (say 1000), operations are not amortized $O(1)$
  - After $O(M)$ operations, you may have done $\Theta(M^2)$ copies
- If the array shrinks when 1/2 empty, operations are not amortized $O(1)$
  - Terrible case: pop once and shrink, push once and grow, pop once and shrink, ...
- If the array shrinks when 3/4 empty, it is amortized $O(1)$
  - The proof is more complicated, but the basic idea remains: by the time an expensive operation occurs, many cheap ones have occurred
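As a concrete check of the claim (a sketch with hypothetical names, not course code), the following counts the element copies made while pushing M items onto a doubling stack; the count stays below 2M, matching the argument above.

```java
// Count element copies made by a doubling array stack across M pushes.
public class DoublingStack {
    private int[] arr = new int[4];   // INITIAL_SIZE = 4 (arbitrary here)
    private int size = 0;
    long copies = 0;                  // total element copies due to resizing

    void push(int x) {
        if (size == arr.length) {     // full: double, copying 'size' elements
            int[] bigger = new int[2 * arr.length];
            System.arraycopy(arr, 0, bigger, 0, size);
            copies += size;
            arr = bigger;
        }
        arr[size++] = x;
    }

    public static void main(String[] args) {
        DoublingStack s = new DoublingStack();
        int M = 1_000_000;
        for (int i = 0; i < M; i++) s.push(i);
        // copies = 4 + 8 + ... + n/2 < n <= 2M  (here 1,048,572 < 2,000,000)
        System.out.println(s.copies + " copies for " + M + " pushes");
    }
}
```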
**Example #2: Queue with two stacks**

A clever and simple queue implementation using only stacks:

```java
class Queue<E> {
  Stack<E> in = new Stack<E>();
  Stack<E> out = new Stack<E>();

  void enqueue(E x) { in.push(x); }

  E dequeue() {
    if (out.isEmpty()) {
      while (!in.isEmpty()) {
        out.push(in.pop());
      }
    }
    return out.pop();
  }
}
```

Example trace (slide figures omitted): enqueue A, B, C puts A, B, C on `in` (C on top); enqueue D, E puts D and E on top of those. On a dequeue, `out` is empty, so the five elements are popped off `in` and pushed onto `out`, leaving A on top of `out`; dequeuing twice returns A and then B, after which `in` is empty and `out` holds E, D, C (C on top).

**Correctness and usefulness**
- If \( x \) is enqueued before \( y \), then \( x \) will be popped from `in` later than \( y \) and therefore popped from `out` sooner than \( y \)
  - So it is a queue
- Example:
  - Wouldn't it be nice to have a queue of t-shirts to wear instead of a stack (like in your dresser)?
  - So have two stacks
    - `in`: stack where t-shirts go after you wash them
    - `out`: stack of t-shirts to wear
    - if `out` is empty, reverse `in` into `out`

**Analysis**
- **dequeue** is not $O(1)$ worst-case, because **out** might be empty and **in** may have lots of items
- But if the stack operations are (amortized) $O(1)$, then any sequence of queue operations is amortized $O(1)$
  - The total amount of work done per element is 1 push onto **in**, 1 pop off of **in**, 1 push onto **out**, 1 pop off of **out**
  - When you reverse $n$ elements, there were $n$ earlier $O(1)$ **enqueue** operations to average with

**Amortized useful?**
- When the average per operation is all we care about (i.e., the sum over all operations), amortized is perfectly fine
  - This is the usual situation
- If we need every operation to finish quickly (e.g., in a web server), amortized bounds may be too weak
- While amortized analysis is about averages, we are averaging cost-per-operation on worst-case input
  - Contrast: average-case analysis is about averages across possible inputs. Example: if all initial permutations of an array are equally likely, then quicksort is \(O(n \log n)\) on average even though on some inputs it is \(O(n^2)\)

**Not always so simple**
- Proofs for amortized bounds can be much more complicated
- Example: splay trees are dictionaries with amortized \(O(\log n)\) operations
  - No extra height field like AVL trees
  - See Chapter 4.5 if curious
- For more complicated examples, the proofs need much more sophisticated invariants and "potential functions" to describe how earlier cheap operations build up "energy" or "money" to "pay for" later expensive operations
  - See Chapter 11 if curious
- But complicated proofs have nothing to do with the code!
{"Source-Url": "http://courses.cs.washington.edu/courses/cse373/14wi/lecture9.pdf", "len_cl100k_base": 4828, "olmocr-version": "0.1.50", "pdf-total-pages": 46, "total-fallback-pages": 0, "total-input-tokens": 72649, "total-output-tokens": 6544, "length": "2e12", "weborganizer": {"__label__adult": 0.0004169940948486328, "__label__art_design": 0.0003654956817626953, "__label__crime_law": 0.0004925727844238281, "__label__education_jobs": 0.0040283203125, "__label__entertainment": 9.208917617797852e-05, "__label__fashion_beauty": 0.00020873546600341797, "__label__finance_business": 0.0001838207244873047, "__label__food_dining": 0.0006170272827148438, "__label__games": 0.0010318756103515625, "__label__hardware": 0.0013275146484375, "__label__health": 0.0007128715515136719, "__label__history": 0.0004277229309082031, "__label__home_hobbies": 0.00018167495727539065, "__label__industrial": 0.0006923675537109375, "__label__literature": 0.0003325939178466797, "__label__politics": 0.0003898143768310547, "__label__religion": 0.0006923675537109375, "__label__science_tech": 0.04193115234375, "__label__social_life": 0.0001928806304931641, "__label__software": 0.004329681396484375, "__label__software_dev": 0.939453125, "__label__sports_fitness": 0.0006437301635742188, "__label__transportation": 0.0009822845458984375, "__label__travel": 0.0002999305725097656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15607, 0.02846]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15607, 0.66858]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15607, 0.78929]], "google_gemma-3-12b-it_contains_pii": [[0, 90, false], [90, 359, null], [359, 826, null], [826, 1215, null], [1215, 1640, null], [1640, 2540, null], [2540, 2599, null], [2599, 2658, null], [2658, 2717, null], [2717, 2776, null], [2776, 2835, null], [2835, 2894, null], [2894, 3129, null], [3129, 3188, null], [3188, 3736, null], [3736, 4147, null], [4147, 4568, null], [4568, 4772, null], [4772, 4838, null], [4838, 4889, null], [4889, 4927, null], [4927, 4984, null], [4984, 4992, null], [4992, 5008, null], [5008, 5308, null], [5308, 5981, null], [5981, 6383, null], [6383, 6855, null], [6855, 7477, null], [7477, 7991, null], [7991, 8438, null], [8438, 9006, null], [9006, 9578, null], [9578, 9926, null], [9926, 10498, null], [10498, 10942, null], [10942, 11451, null], [11451, 11861, null], [11861, 12254, null], [12254, 12664, null], [12664, 13094, null], [13094, 13486, null], [13486, 13980, null], [13980, 14452, null], [14452, 15063, null], [15063, 15607, null]], "google_gemma-3-12b-it_is_public_document": [[0, 90, true], [90, 359, null], [359, 826, null], [826, 1215, null], [1215, 1640, null], [1640, 2540, null], [2540, 2599, null], [2599, 2658, null], [2658, 2717, null], [2717, 2776, null], [2776, 2835, null], [2835, 2894, null], [2894, 3129, null], [3129, 3188, null], [3188, 3736, null], [3736, 4147, null], [4147, 4568, null], [4568, 4772, null], [4772, 4838, null], [4838, 4889, null], [4889, 4927, null], [4927, 4984, null], [4984, 4992, null], [4992, 5008, null], [5008, 5308, null], [5308, 5981, null], [5981, 6383, null], [6383, 6855, null], [6855, 7477, null], [7477, 7991, null], [7991, 8438, null], [8438, 9006, null], [9006, 9578, null], [9578, 9926, null], [9926, 10498, null], [10498, 10942, null], [10942, 11451, null], [11451, 11861, null], [11861, 12254, null], [12254, 12664, null], [12664, 13094, null], [13094, 13486, 
null], [13486, 13980, null], [13980, 14452, null], [14452, 15063, null], [15063, 15607, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 15607, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15607, null]], "pdf_page_numbers": [[0, 90, 1], [90, 359, 2], [359, 826, 3], [826, 1215, 4], [1215, 1640, 5], [1640, 2540, 6], [2540, 2599, 7], [2599, 2658, 8], [2658, 2717, 9], [2717, 2776, 10], [2776, 2835, 11], [2835, 2894, 12], [2894, 3129, 13], [3129, 3188, 14], [3188, 3736, 15], [3736, 4147, 16], [4147, 4568, 17], [4568, 4772, 18], [4772, 4838, 19], [4838, 4889, 20], [4889, 4927, 21], [4927, 4984, 22], [4984, 4992, 23], [4992, 5008, 24], [5008, 5308, 25], [5308, 5981, 26], [5981, 6383, 27], [6383, 6855, 28], [6855, 7477, 29], [7477, 7991, 30], [7991, 8438, 31], [8438, 9006, 32], [9006, 9578, 33], [9578, 9926, 34], [9926, 10498, 35], [10498, 10942, 36], [10942, 11451, 37], [11451, 11861, 38], [11861, 12254, 39], [12254, 12664, 40], [12664, 13094, 41], [13094, 13486, 42], [13486, 13980, 43], [13980, 14452, 44], [14452, 15063, 45], [15063, 15607, 46]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15607, 0.04167]]}
Mini-project 2
CMPSCI 689, Spring 2015
Due: Tuesday, April 07, in class

Guidelines

Submission. Submit a hardcopy of the report containing all the figures and printouts of code in class. For readability you may attach the code printouts at the end of the solutions. Submissions may be 48 hours late with a 50% deduction. Submissions more than 48 hours after the deadline will be given zero. Late submissions should be emailed to the TA as a pdf.

Plagiarism. We might reuse problem set questions from previous years, covered by papers and webpages; we expect the students not to copy, refer to, or look at the solutions in preparing their answers. Since this is a graduate class, we expect students to want to learn and not google for answers.

Collaboration. The homework must be done individually, except where otherwise noted in the assignments. 'Individually' means each student must hand in their own answers, and each student must write their own code in the programming part of the assignment. It is acceptable, however, for students to collaborate in figuring out answers and helping each other solve the problems. We will be assuming that, as participants in a graduate course, you will be taking the responsibility to make sure you personally understand the solution to any work arising from such a collaboration.

Using other programming languages. All of the starter code is in Matlab, which is what we expect you to use. You are free to use other languages such as Octave or Python with the caveat that we won't be able to answer or debug non-Matlab questions.

The MNIST dataset

The dataset for all the parts of this homework can be loaded into Matlab by typing `load('digits.mat')`. This is similar to the one used in mini-project 1, but contains examples from all 10 digit classes. In addition, the data is split into `train`, `val` and `test` sets. For each split, the features and labels are in the variables `x` and `y` respectively. For example, `data.train.x` is an array of size $784 \times 1000$ containing 1000 digits, one for each column of the matrix. There are 100 digits each from classes 0, 1, $\ldots$, 9. Class labels are given by the variable `data.train.y`. The `val` set contains 50 examples from each class, and the `test` set contains 100 examples from each class. This is a subset of the much larger MNIST dataset\(^1\). You can visualize the dataset using the `montageDigits(x)` function. For example, here is the output of `montageDigits(data.train.x)`.

Tip: `montageDigits(x)` internally uses the `montage` command in Matlab, which has a `size` option controlling the number of rows and columns of the output.

Evaluation

For various parts of the homework you will be asked to report the accuracy and confusion matrix. The accuracy is simply the fraction of labels you got right, i.e., $\frac{1}{N} \sum_{k=1}^{N} [y_k = y_k^{pred}]$. The confusion matrix $C$ is a $10 \times 10$ array where $C_{ij} = \sum_{k=1}^{N} [y_k = i] \cdot [y_k^{pred} = j]$. The function provided in the codebase called `[acc, conf] = evaluateLabels(y, ypred, display)` returns the accuracy and confusion matrix. If `display=true`, the code also displays the confusion matrix as seen below:

\(^1\)http://yann.lecun.com/exdb/mnist/

1 Multiclass to binary reductions

In the previous homework you implemented a binary classifier using the `averagedPerceptronTrain` and `preceptronPredict` functions. In this part you will extend these to the multiclass setting using one-vs-one and one-vs-all reductions, sketched informally below.
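To make the two reduction schemes concrete before implementing them in Matlab, here is a short sketch of the two decision rules (written in Java with hypothetical names; the functions you must write are the Matlab ones named in sections 1.a and 1.b):

```java
// Sketch of the two multiclass decision rules (hypothetical names, not the
// assignment's Matlab interface). activations[c][k] is the activation of
// binary classifier c on example k.

// One-vs-all: 10 classifiers, one per digit; predict the class whose
// classifier has the highest activation on this example.
static int oneVsAllPredict(double[][] activations, int k) {
    int best = 0;
    for (int c = 1; c < activations.length; c++)
        if (activations[c][k] > activations[best][k]) best = c;
    return best;
}

// One-vs-one: 45 classifiers, one per pair (i, j) with i < j; each casts a
// vote for the class on its winning side, and the most-voted class wins.
static int oneVsOnePredict(double[][] pairActivations, int[][] pairs, int k) {
    int[] votes = new int[10];
    for (int c = 0; c < pairs.length; c++) {
        int winner = pairActivations[c][k] > 0 ? pairs[c][0] : pairs[c][1];
        votes[winner]++;
    }
    int best = 0;
    for (int cls = 1; cls < 10; cls++)
        if (votes[cls] > votes[best]) best = cls;
    return best;
}
```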
1 Multiclass to binary reductions

In the previous homework you implemented a binary classifier using the `averagedPerceptronTrain` and `preceptronPredict` functions. In this part you will extend these to the multiclass setting using one-vs-one and one-vs-all reductions.

The codebase contains an implementation of the above two functions. You are welcome to use your own implementation, but in case you decide to use the provided functions here are the details. The function `model = averagedPerceptronTrain(x, y, param)` takes as input `x, y` and runs the averaged perceptron training for `param.maxiter` iterations. The output `model` contains the learned weights and some other fields that you can ignore at this point. The function `[y, a] = preceptronPredict(model, x)` returns the predictions `y` and the activations `a = model.w^T x`. The entry code for this part is in the first half of the file `runMulticlassReductions.m`. It loads the data and initializes various parameters.

1.a Multiclass training

Implement a function `model = multiclassTrain(x, y, param);` that takes training data `x, y` and returns a multiclass model. For now you will train linear classifiers using `averagedPerceptronTrain`, but later you will extend this to use non-linear classifiers. Your code will do this by setting the parameter `param.kernel.type='linear'`.

1. **[10 points]** When `param.type='onevsall'` the code should return 10 one-vs-all perceptron classifiers, one for each digit class. You may find cell arrays in Matlab useful for this part.
2. **[10 points]** When `param.type='onevsone'` the code should return \( \binom{10}{2} = 45 \) one-vs-one perceptron classifiers, one for each pair of digit classes.

1.b Multiclass prediction

Implement a function `ypred = multiclassPredict(model, x, param);` that returns the class labels for the data `x`.

1. **[5 points]** When `param.type='onevsall'` the code should predict the labels using the one-vs-all prediction scheme, i.e., pick the class which has the highest activation for each data point.
2. **[5 points]** When `param.type='onevsone'` the code should predict the labels using the one-vs-one prediction scheme, i.e., pick the class that wins the most number of times for each data point.

1.c Experiments

On the `train` set learn one-vs-one and one-vs-all classifiers and answer the following questions:

1. **[2 points]** On the `val` set estimate the optimal number of iterations. You may find that the accuracy saturates after a certain number of iterations. Report the optimal value you found for the onevsone/onevsall classifiers, and set `param.maxiter` to that value.
2. **[2 points]** What is the `test` accuracy of the onevsone/onevsall classifier on the dataset?
3. **[2 points]** Show the confusion matrices on the `test` set for the onevsone/onevsall classifiers. Which pair of classes is the most confused for each classifier, i.e., which `i` and `j` have the largest \( C_{ij} + C_{ji} \)?
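As a point of reference, here is a minimal sketch of the one-vs-all reduction, assuming labels in 0–9 and the training/prediction interfaces described above; the function names and the `model.classifiers` cell array are illustrative choices, not part of the required interface.

```matlab
% One-vs-all training: one binary perceptron per digit class.
function model = multiclassTrainOVA(x, y, param)
    model.classifiers = cell(10, 1);
    for c = 0:9
        yc = 2*(y == c) - 1;                          % +1 for class c, -1 otherwise
        model.classifiers{c+1} = averagedPerceptronTrain(x, yc, param);
    end
end

% One-vs-all prediction: pick the class with the highest activation.
function ypred = multiclassPredictOVA(model, x)
    n = size(x, 2);
    act = zeros(10, n);
    for c = 0:9
        [~, act(c+1, :)] = preceptronPredict(model.classifiers{c+1}, x);
    end
    [~, idx] = max(act, [], 1);
    ypred = idx - 1;                                  % back to 0..9 labels
end
```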
2 Kernelized perceptrons

In this part you will extend the perceptron training and prediction to use polynomial kernels. A polynomial kernel of degree \( d \) is defined as:

\[ K_{\text{poly}}^{d}(x, z) = (a + bx^T z)^d \] (1)

Here, \( a \geq 0 \), \( b > 0 \) and \( d \in \{1, 2, 3, \ldots\} \) are hyperparameters of the kernel.

2.a Training kernelized perceptrons [10 points]

In this part you will implement a kernelized version of the averaged perceptron in the function `model = kernelizedAveragedPerceptronTrain(x, y, param)`. In class we extended the perceptron training to require only dot products, or kernel evaluations, between data points. However, it is often more efficient to precompute the kernel values between all pairs of input data to avoid repeated kernel evaluations.

Write a function `K = kernelMatrix(x, z, param)` that returns a matrix with \( K_{ij} = \text{kernel}(x_i, z_j) \). The kernel is specified by `param.kernel.type`. It is important to implement this efficiently, for example by avoiding for loops. You may find the Matlab function `pdist` useful for this.

The output of the training is a set of weights \( \alpha \) for each training example. However, this alone is not sufficient for prediction, since prediction requires dot products with the training data. You may find it convenient to store the training data for each classifier in the model itself, for example in a field such as `model.x`, and the weights \( \alpha \) in `model.a`. The entry code for this part is in the second half of the file `runMulticlassReductions.m`.

Tip: When the kernel is linear, the classifier trained using the kernelized averaged perceptron should match the results (up to numerical precision) of the averaged perceptron. With polynomial kernels you can check this by setting \( a = 0, b = 1 \) and \( d = 1 \).

2.b Predictions using kernelized perceptrons [5 points]

Implement `[y, a] = kernelizedPerceptronPredict(model, x, param)`, which takes the model and features as input and returns predictions and activations. Once again you may first compute the kernel matrix using the `K = kernelMatrix(model.x, x, param)` function you implemented earlier and then multiply by `model.a` to compute activations.

2.c Going multiclass [5 points]

Integrate the above two functions with the `multiclassTrain` and `multiclassPredict` functions you implemented in problem 1 to use kernelized perceptrons as the binary classifier. The code should use perceptrons when `param.kernel.type = 'linear'` and kernelized perceptrons otherwise. The code passes the parameters of the polynomial kernel in the field `param.kernel.poly`.
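A vectorized kernel matrix computation might look like the sketch below; the field names inside `param.kernel.poly` (`a`, `b`, `d`) are an assumption about the parameter layout.

```matlab
% Vectorized kernel matrix sketch; x is d x n, z is d x m, and
% K(i,j) = kernel(x(:,i), z(:,j)). Field names under param.kernel.poly
% are assumed, not prescribed by the assignment.
function K = kernelMatrixSketch(x, z, param)
    switch param.kernel.type
        case 'linear'
            K = x' * z;                            % plain dot products
        case 'poly'
            p = param.kernel.poly;                 % assumed fields: a, b, d
            K = (p.a + p.b * (x' * z)) .^ p.d;     % (a + b x'z)^d, entrywise
        otherwise
            error('Unknown kernel type: %s', param.kernel.type);
    end
end
```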
2.d Experiments

Train a multiclass classifier using a polynomial kernel with \( a = 1, b = 1 \) and \( d = 4 \) as the kernel parameters. This is a degree 4 polynomial kernel. Run the kernelized perceptron for 100 iterations during training and answer the following questions.

1. [2 points] What is the test accuracy of the onevsone/onevsall classifier on the dataset?
2. [2 points] Show the confusion matrices on the test set for the onevsone/onevsall classifiers. What classes are the most confused?

3 Multiclass naive Bayes classifier

For this part you will train a naive Bayes classifier for multiclass prediction. Assume that for given data \( x \), the \( i \)th feature \( x^{(i)} \), for class \( j \), is generated according to a Gaussian distribution with class-specific means \( \mu_{ij} \) and variances \( \sigma^2_{ij} \), i.e.,

\[ P(x^{(i)} \mid Y = j) = \frac{1}{\sqrt{2\pi\sigma_{ij}^2}} \exp \left( -\frac{(x^{(i)} - \mu_{ij})^2}{2\sigma^2_{ij}} \right). \] (2)

In addition each class is generated using a multinomial, i.e.,

\[ P(Y = j) = \theta_j. \] (3)

1. [4 points] Write down the log-likelihood of the training data \((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\) in terms of the parameters.
2. [2 points] What is the maximum likelihood estimate of the parameter \( \theta_j \)?
3. [2 points] What is the maximum likelihood estimate of the parameter \( \mu_{ij} \)?
4. [2 points] What is the maximum likelihood estimate of the parameter \( \sigma_{ij} \)?

3.a Experiments

You will implement a naive Bayes model for multiclass digit classification. The entry code for this part is in `runNaiveBayes.m`. Before you implement the model, there is one issue to take care of. Suppose that pixel \( i \) is always zero for a class \( j \); then the MLE of \( \sigma_{ij} \) for that pixel and class will be zero. This leads to probabilities that are zero for any pixel value not equal to zero. Thus a little bit of noise in the pixel can make the total probability zero. One way to deal with this problem is to add a small positive constant \( \tau \) to the MLE of the \( \sigma^2_{ij} \) parameter, i.e.,

\[ \hat{\sigma}_{ij}^2 \leftarrow \hat{\sigma}_{ij}^2 + \tau. \] (4)

This will assign a small non-zero probability to any noisy pixel you might see in the test data. In terms of Bayesian estimation this is a maximum a-posteriori (MAP) estimate, because \( \tau \) acts as a prior on the \( \sigma \) parameter.

[10 points] Implement the function `model = naiveBayesTrain(x, y, tau)` that estimates the parameters of the naive Bayes model using the smoothing parameter \( \tau \).

[5 points] Implement the function `ypred = naiveBayesPredict(model, x)` that predicts the class with the highest joint probability \( P(x, Y = k) \).

Train these models on the `train` set using \( \tau = 0.01 \) and report accuracies on the `val` and `test` sets. Tip: Compute log probabilities to avoid numerical underflow.

4 Multiclass logistic regression

We can easily extend the binary logistic regression model to handle multiclass classification. Let's assume we have \( K \) different classes; the posterior probability for class \( k \) is given by:

\[ P(Y = k \mid X = x) = \frac{\exp(w_k^T x)}{\sum_{i=1}^{K} \exp(w_i^T x)}. \] (5)

Our goal is to estimate the weights using gradient ascent. We will also define regularization on the parameters to avoid overfitting and very large weights.

1. [4 points] Write down the log conditional likelihood of the labels, \( L(w_1, \ldots, w_K) \), with \( L_2 \) regularization on the weights. Use \( \lambda \) as the tradeoff parameter between \( L \) and the regularization, i.e., objective \( = L - \lambda \times \) regularization. Show your steps.
2. [4 points] Note that there is no closed-form solution that maximizes the log conditional likelihood \( L(w_1, \ldots, w_K) \) with respect to \( w_k \). However, we can still find the solution with gradient ascent by using partial derivatives. Derive the expression for the \( i \)th component of the gradient of \( L(w_1, \ldots, w_K) \) with respect to \( w_k \).
3. [2 points] Beginning with initial weights of 0, write down the update rule for \( w_k \), using \( \eta \) for the step size.
4. [2 points] Will the solution converge to a global maximum?
5. [20 points] Train a multiclass logistic regression using \( \eta = 0.01 \) and \( \lambda = 0.01 \) on the training set and run batch gradient ascent for `param.maxiter=100` iterations. Recall that in batch gradients, we sum the gradients across all training examples. In Matlab you can do all the computations efficiently using matrix multiplications. For example, on my laptop the entire training (100 iterations) takes less than a second. Report accuracy on the test set. The entry code for this part is in `runMulticlassLR.m`.
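For orientation, here is a minimal sketch of one possible batch gradient ascent loop for this problem, assuming labels in 0–9, no bias term, and a penalty of $\lambda \lVert W \rVert^2$ (whose gradient contributes $-2\lambda W$); the derivation you are asked for in the questions above determines the exact form.

```matlab
% Batch gradient ascent sketch for multiclass LR.
% x: d x n features, y: labels in 0..9, W: d x K weights (no bias term).
function W = multiclassLRTrainSketch(x, y, eta, lambda, maxiter)
    [d, n] = size(x);
    K = 10;
    W = zeros(d, K);                            % one weight vector per class
    Y = full(sparse(1:n, double(y(:)') + 1, 1, n, K));  % n x K one-hot labels
    for iter = 1:maxiter
        A = W' * x;                             % K x n activations
        A = bsxfun(@minus, A, max(A, [], 1));   % subtract max for stability
        P = exp(A);
        P = bsxfun(@rdivide, P, sum(P, 1));     % softmax posteriors, K x n
        G = x * (Y - P') - 2*lambda*W;          % batch gradient of objective
        W = W + eta * G;                        % gradient ascent step
    end
end
```

Prediction then picks the class with the largest activation, e.g. `[~, idx] = max(W' * x, [], 1); ypred = idx - 1;`.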
5 Two layer neural network

Consider a two layer neural network with a hidden layer of \( H \) units and \( K \) output units, one for each class. Assume that each hidden unit is connected to all the inputs and a bias feature. Use the sigmoid function as the link function:

\[ \sigma(z) = \frac{1}{1 + \exp(-z)} \] (6)

The output units combine the hidden units using multiclass logistic regression as in the earlier problem.

1. [4 points] Write down the log-likelihood of the labels given all the weights, with \( L_2 \) regularization on the weights. Use \( \lambda \) as the tradeoff parameter between \( L \) and the regularization, i.e., objective \( = L - \lambda \times \) regularization. Show your steps.
2. [4 points] Derive the equations for the gradients of all the weights. The gradients of the second layer weights resemble the earlier problem, while those for the hidden layer can be obtained by back-propagation (i.e., the chain rule of gradients).
3. [2 points] Suppose you run gradient ascent; will the solution converge to a global maximum?
4. [20 points] Train a two layer network with 100 hidden units using \( \eta = 0.001 \) and \( \lambda = 0.01 \) on the training set and run batch gradient ascent for `param.maxiter=1000` iterations. Once again you can implement all the computations using matrix multiplications. My implementation takes about 30 seconds for the entire training. Tip: you can do entry-wise matrix multiplication using the `.*` operation. Initialize the weights randomly to small values using the Matlab function `randn(.,.)*0.01`. Report accuracy on the test set. Tip: You can implement this part with small modifications to the multiclass LR code from the earlier part. The entry code for this part is in `runMulticlassNN.m`.

Tip: For the previous problem and this one, make sure your objective increases after each iteration.

Checkpoints

To help with debugging, here are the outputs of my implementation. The neural network results might vary slightly due to randomness in initialization. You might be able to get better performance by tuning the hyperparameters. It might be tempting to tune them on the test data, but that would be cheating! Only use the validation data for this.

>> runMulticlassReductions
OneVsAll:: Perceptron (linear):: Validation accuracy: 75.80%
OneVsAll:: Perceptron (linear):: Test accuracy: 75.70%
OneVsOne:: Perceptron (linear):: Validation accuracy: 86.40%
OneVsOne:: Perceptron (linear):: Test accuracy: 84.30%
OneVsAll:: Perceptron (poly):: Validation accuracy: 88.60%
OneVsAll:: Perceptron (poly):: Test accuracy: 87.30%
OneVsOne:: Perceptron (poly):: Validation accuracy: 88.20%
OneVsOne:: Perceptron (poly):: Test accuracy: 87.00%

>> runNaiveBayes
NaiveBayes:: Validation accuracy: 75.60%
NaiveBayes:: Test accuracy: 76.00%

>> runMulticlassLR
Multiclass LR:: Validation accuracy: 85.40%
Multiclass LR:: Test accuracy: 82.70%

>> runMulticlassNN
Multiclass LR:: Validation accuracy: 87.00%
Multiclass LR:: Test accuracy: 85.20%
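For reference, here is a sketch of a single batch gradient step for the two-layer network of Section 5, under assumed conventions (the bias is handled by an appended row of ones, labels are given as a K x n one-hot matrix `Y`, and the penalty is $\lambda \lVert \cdot \rVert^2$ as above); this is one plausible realization of the back-propagation equations you are asked to derive, not the required implementation.

```matlab
% One batch-gradient-ascent step for a two-layer network.
% x: d x n inputs, W1: H x (d+1) hidden weights, W2: K x H output weights,
% Y: K x n one-hot labels. Conventions here are assumptions for illustration.
function [W1, W2] = nnStepSketch(W1, W2, x, Y, eta, lambda)
    n = size(x, 2);
    xb = [x; ones(1, n)];                   % append the bias feature
    Z = 1 ./ (1 + exp(-W1 * xb));           % H x n hidden activations (sigmoid)
    A = W2 * Z;                             % K x n output activations
    A = bsxfun(@minus, A, max(A, [], 1));   % numerical stability
    P = exp(A);
    P = bsxfun(@rdivide, P, sum(P, 1));     % softmax posteriors
    D2 = Y - P;                             % output "error" signal
    G2 = D2 * Z' - 2*lambda*W2;             % gradient w.r.t. second layer
    D1 = (W2' * D2) .* Z .* (1 - Z);        % back-propagated hidden signal
    G1 = D1 * xb' - 2*lambda*W1;            % gradient w.r.t. first layer
    W1 = W1 + eta * G1;                     % ascent steps on the objective
    W2 = W2 + eta * G2;
end
```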
{"Source-Url": "http://www-edlab.cs.umass.edu/~smaji/cmpsci689/proj/p2.pdf", "len_cl100k_base": 4302, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 19112, "total-output-tokens": 4795, "length": "2e12", "weborganizer": {"__label__adult": 0.0005946159362792969, "__label__art_design": 0.0013494491577148438, "__label__crime_law": 0.0007433891296386719, "__label__education_jobs": 0.0677490234375, "__label__entertainment": 0.0002378225326538086, "__label__fashion_beauty": 0.0004565715789794922, "__label__finance_business": 0.0005502700805664062, "__label__food_dining": 0.0009598731994628906, "__label__games": 0.0016183853149414062, "__label__hardware": 0.0025806427001953125, "__label__health": 0.0012140274047851562, "__label__history": 0.0007762908935546875, "__label__home_hobbies": 0.000640869140625, "__label__industrial": 0.001552581787109375, "__label__literature": 0.0006551742553710938, "__label__politics": 0.0005593299865722656, "__label__religion": 0.0010137557983398438, "__label__science_tech": 0.2274169921875, "__label__social_life": 0.0006518363952636719, "__label__software": 0.0150604248046875, "__label__software_dev": 0.6708984375, "__label__sports_fitness": 0.0010042190551757812, "__label__transportation": 0.0011072158813476562, "__label__travel": 0.0004494190216064453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16825, 0.04086]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16825, 0.80324]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16825, 0.80441]], "google_gemma-3-12b-it_contains_pii": [[0, 1571, false], [1571, 3235, null], [3235, 6205, null], [6205, 9552, null], [9552, 12031, null], [12031, 15687, null], [15687, 16825, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1571, true], [1571, 3235, null], [3235, 6205, null], [6205, 9552, null], [9552, 12031, null], [12031, 15687, null], [15687, 16825, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16825, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16825, null]], "pdf_page_numbers": [[0, 1571, 1], [1571, 3235, 2], [3235, 6205, 3], [6205, 9552, 4], [9552, 12031, 5], [12031, 15687, 6], [15687, 16825, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16825, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
6db189c75f49cdba7f5ccc8dd869c16c892a20b6
[REMOVED]
{"Source-Url": "http://slazebni.cs.illinois.edu/fall18/lec13_gan.pdf", "len_cl100k_base": 4161, "olmocr-version": "0.1.46", "pdf-total-pages": 53, "total-fallback-pages": 0, "total-input-tokens": 69786, "total-output-tokens": 6739, "length": "2e12", "weborganizer": {"__label__adult": 0.0006265640258789062, "__label__art_design": 0.001323699951171875, "__label__crime_law": 0.0006928443908691406, "__label__education_jobs": 0.0008673667907714844, "__label__entertainment": 0.0003132820129394531, "__label__fashion_beauty": 0.0003938674926757813, "__label__finance_business": 0.0004105567932128906, "__label__food_dining": 0.0004611015319824219, "__label__games": 0.00147247314453125, "__label__hardware": 0.001929283142089844, "__label__health": 0.001140594482421875, "__label__history": 0.00043582916259765625, "__label__home_hobbies": 0.00013065338134765625, "__label__industrial": 0.0007233619689941406, "__label__literature": 0.0004978179931640625, "__label__politics": 0.0004279613494873047, "__label__religion": 0.0008249282836914062, "__label__science_tech": 0.2490234375, "__label__social_life": 0.00015544891357421875, "__label__software": 0.021759033203125, "__label__software_dev": 0.71533203125, "__label__sports_fitness": 0.00046706199645996094, "__label__transportation": 0.00045418739318847656, "__label__travel": 0.0002970695495605469}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16243, 0.01373]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16243, 0.39709]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16243, 0.72218]], "google_gemma-3-12b-it_contains_pii": [[0, 61, false], [61, 233, null], [233, 381, null], [381, 471, null], [471, 635, null], [635, 815, null], [815, 969, null], [969, 1169, null], [1169, 1343, null], [1343, 1693, null], [1693, 2223, null], [2223, 2602, null], [2602, 3259, null], [3259, 3839, null], [3839, 4133, null], [4133, 4724, null], [4724, 4889, null], [4889, 5187, null], [5187, 5224, null], [5224, 5597, null], [5597, 5810, null], [5810, 6055, null], [6055, 6300, null], [6300, 6836, null], [6836, 6886, null], [6886, 6938, null], [6938, 7051, null], [7051, 7120, null], [7120, 7170, null], [7170, 7220, null], [7220, 7283, null], [7283, 7475, null], [7475, 7595, null], [7595, 8052, null], [8052, 8550, null], [8550, 9027, null], [9027, 9431, null], [9431, 9800, null], [9800, 9859, null], [9859, 10463, null], [10463, 10629, null], [10629, 10882, null], [10882, 11427, null], [11427, 12601, null], [12601, 13158, null], [13158, 13329, null], [13329, 13547, null], [13547, 13609, null], [13609, 14010, null], [14010, 14184, null], [14184, 14994, null], [14994, 15691, null], [15691, 16243, null]], "google_gemma-3-12b-it_is_public_document": [[0, 61, true], [61, 233, null], [233, 381, null], [381, 471, null], [471, 635, null], [635, 815, null], [815, 969, null], [969, 1169, null], [1169, 1343, null], [1343, 1693, null], [1693, 2223, null], [2223, 2602, null], [2602, 3259, null], [3259, 3839, null], [3839, 4133, null], [4133, 4724, null], [4724, 4889, null], [4889, 5187, null], [5187, 5224, null], [5224, 5597, null], [5597, 5810, null], [5810, 6055, null], [6055, 6300, null], [6300, 6836, null], [6836, 6886, null], [6886, 6938, null], [6938, 7051, null], [7051, 7120, null], [7120, 7170, null], [7170, 7220, null], [7220, 7283, null], [7283, 7475, null], [7475, 7595, null], [7595, 8052, null], [8052, 8550, null], [8550, 9027, null], [9027, 9431, 
null], [9431, 9800, null], [9800, 9859, null], [9859, 10463, null], [10463, 10629, null], [10629, 10882, null], [10882, 11427, null], [11427, 12601, null], [12601, 13158, null], [13158, 13329, null], [13329, 13547, null], [13547, 13609, null], [13609, 14010, null], [14010, 14184, null], [14184, 14994, null], [14994, 15691, null], [15691, 16243, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16243, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16243, null]], "pdf_page_numbers": [[0, 61, 1], [61, 233, 2], [233, 381, 3], [381, 471, 4], [471, 635, 5], [635, 815, 6], [815, 969, 7], [969, 1169, 8], [1169, 1343, 9], [1343, 1693, 10], [1693, 2223, 11], [2223, 2602, 12], [2602, 3259, 13], [3259, 3839, 14], [3839, 4133, 15], [4133, 4724, 16], [4724, 4889, 17], [4889, 5187, 18], [5187, 5224, 19], [5224, 5597, 20], [5597, 5810, 21], [5810, 6055, 22], [6055, 6300, 23], [6300, 6836, 24], [6836, 6886, 25], [6886, 6938, 26], [6938, 7051, 27], [7051, 7120, 28], [7120, 7170, 29], [7170, 7220, 30], [7220, 7283, 31], [7283, 7475, 32], [7475, 7595, 33], [7595, 8052, 34], [8052, 8550, 35], [8550, 9027, 36], [9027, 9431, 37], [9431, 9800, 38], [9800, 9859, 39], [9859, 10463, 40], [10463, 10629, 41], [10629, 10882, 42], [10882, 11427, 43], [11427, 12601, 44], [12601, 13158, 45], [13158, 13329, 46], [13329, 13547, 47], [13547, 13609, 48], [13609, 14010, 49], [14010, 14184, 50], [14184, 14994, 51], [14994, 15691, 52], [15691, 16243, 53]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16243, 0.05224]]}
olmocr_science_pdfs
2024-11-23
2024-11-23
2579837f57e87ea108bf29da9729e4a4eac3ef1b
Intel(R) SecL-DC Version 4.0 GA

Foundational/WL Use Cases:

• Support for Client Intel® PTT (fTPM) has been added. ISecL has always supported the server implementation of PTT. However, the client implementation has a difference in the endorsement certificate hierarchies that would cause the TPM authenticity verification to fail during Trust Agent provisioning. This alternative implementation is now supported.

• Support for TPM SHA384 PCR banks has been added. Newer TPMs support SHA384, and ISecL has added support for this algorithm. The addition of another PCR bank algorithm has also forced changes in the way the HVS behaves when it encounters multiple available PCR banks. By default, the most secure PCR bank is preferred when importing flavors or performing attestations. If a host has only one bank enabled, that bank will be used. If a host has multiple banks enabled, the HVS will choose the "best" available algorithm and disregard the others. In most cases this will not be noticeable. In the specific case where the same flavor will be used on hosts with different PCR banks enabled, however, this may result in "untrusted" results:

  o Importing a flavor from a host with SHA256 and SHA384 enabled will generate a flavor using SHA384.
  o If a host is attested that has only a SHA256 bank enabled and does not have SHA384, the host will appear untrusted because no flavors can match the SHA256 hashes available, even if the servers are otherwise identical.
  o This specific scenario requires separate flavors for the SHA256 systems. A datacenter may require a RHEL 8.4 SHA384 flavor and also a RHEL 8.4 SHA256 flavor. The OS can be identical, but the measurements used at attestation time need to match the measurements in the flavors.

• Platform-info gathering from the Trust Agent has been changed to pull platform details directly from UEFI and ACPI tables, instead of working through intermediary applications like dmidecode.

  o Previous Trust Agent dependencies like dmidecode have been removed.
  o Additional PCR event logs are now available that were previously invisible. This allows far more visibility and granularity in remote attestations and flavor definitions.

• Intel SecL now exposes Flavor templates. Previously the definitions for which PCRs and events would belong in each flavor part were hard-coded. Given specific host conditions (which security technologies are enabled, etc.) and the specific flavor part (PLATFORM, OS, etc.), the HVS would use specific PCRs and events. These definitions were previously not changeable by the end user. This new feature adds the capability to create new templates that cumulatively apply definitions to flavor parts based on what is detected on the host. This allows an administrator to define which PCRs and events should be used for each flavor part, and to control those definitions based on conditions present on the host.

• The Key Broker now requires that the KMIP client certificate include Subject Alternative Names with the KMIP server's hostname. NOTE: this requirement may mean that older KMIP client certificates will no longer be valid after an upgrade to version 4.0, if the previously generated certificates did not contain the needed SAN.

• Support for the filesystem key manager in the Key Broker Service has been removed. This feature was provided as a POC-level function to help customers get started using the Key Broker and was not intended for production use.
The Key Broker now requires a 3rd-party KMIP key manager.

• The Trust Agent and HVS now support communication in NATS mode. This does not replace the default HTTP communication but is provided as an alternative. By default, the Trust Agent exposes REST API endpoints through an open port, and communication is initiated by the HVS, which makes API requests to the Trust Agent. NATS mode provides an option for an alternative messaging system. In NATS mode the Trust Agent does not expose any API endpoints. Instead, the Agent establishes a connection to a NATS server that acts as a messaging system. The HVS will also route Trust Agent communication via the NATS server, allowing the same functionality provided in HTTP mode. NATS is a third-party application; additional documentation can be found at https://nats.io. Sample configuration information for a NATS server is provided in the Intel® Security Libraries Product Guide.

• The Trust Agent has changed how it utilizes TPM ownership. Previously the Agent would assert TPM ownership at service startup. This is no longer required. Instead, TPM ownership is now only required at the time of TPM provisioning, when the AIK is generated and endorsed by the HVS. This makes the Trust Agent much easier to use for administrators who also need to use their TPMs for purposes outside of Intel® Security Libraries. The ownership secret is also no longer stored in the config.yml file. As part of this change, the Trust Agent now supports and defaults to a NULL TPM ownership secret. If the TPM_OWNERSHIP_SECRET variable is provided in trustagent.env, the installer will use the specified secret. If no secret is specified, or if the value of the variable is empty, the Agent will use a NULL TPM ownership secret by default. This change is intended to allow administrators to control and manage the platform TPM and how ownership is used. Defaulting to a NULL secret also means that the Trust Agent will no longer require clearing TPM ownership in any but the rarest of cases. The Trust Agent will also now support TPM ownership secrets other than the previous 20-byte hex secret. Any ASCII string can now be used. To force a string to be treated as hex, prefix the secret with "hex:". Again, this allows significantly greater control over the secret for the platform administrator. NOTE: Upgrades to version 4.0 will require re-provisioning the Trust Agent. This means that for upgrades to version 4.0, the installation bearer token must be provided. In addition, if the TPM owner secret is already set, the secret must be provided at provisioning time as well (which will typically be when the upgrade is executed). This can also be an opportunity to clear the TPM ownership and use the NULL secret or one of the other new options.

• Support for the Docker container runtime interface has been deprecated for the container confidentiality feature. Intel® SecL now supports only the CRI-O container runtime interface for container confidentiality.

Known Issues:

**Important Note:** SGX Attestation fails when SGX is enabled on a host booted using tboot.

**Root Cause:** tboot requires the "noefi" kernel parameter to be passed during boot, in order not to use unmeasured EFI runtime services. As a result, the kernel does not expose EFI variables to user-space. SGX Attestation requires these EFI variables to fetch Platform Manifest data.

**Workaround:** The EFI variables required by SGX are only needed during the SGX provisioning/registration phase.
Once this step is completed successfully, access to the EFI variables is no longer required. This means the issue can be worked around by installing the SGX Agent without booting to tboot, then rebooting the system to tboot. SGX attestation will then work as expected while booted to tboot.

1. Enable SGX and TXT in the platform BIOS.
2. Perform an SGX Factory Reset and boot into the "plain" distribution kernel (without tboot or TCB).
3. Install tboot and the ISecL components (SGX Agent, Trust Agent and Workload Agent).
4. The SGX Agent installation fetches the SGX Platform Manifest data and caches it.
5. Reboot the system into the tboot kernel mode.
6. Verify that the TXT measured launch was successful:

```
txt-stat | grep "TXT measured launch"
```

7. The SGX and Platform Integrity Attestation use cases should now work as normal.

**Intel(R) SecL-DC Version 3.6 GA**

Foundational/WL Use Cases:

- Containerized deployment of Foundational and Workload Security Use Cases supported and validated with RHEL and Ubuntu OS
- Added support for pyKMIP integration with Workload Security Use Cases
- Additional performance and scalability improvements
- Common Integration Hub to support both Foundational/WL Use Cases and SKC/SGX Attestation Use Cases

SKC/SGX Attestation Use Cases:

- Additional performance and scalability improvements
- SGX Sample Application with quote verification signature added

**Known Issues:**

• While upgrading components normally requires no installation answer files, the Integration Hub when upgrading from version 3.5 to 3.6 will require an answer file containing the "HVS_BASE_URL=https://hvs.server.com:8443/hvs/v1" variable. This is required because the variable and its corresponding configuration setting for the Hub changed between these release versions. No other variables or env files are otherwise required for upgrades.

**Intel(R) SecL-DC Version 3.5 GA**

Foundational/WL Use Cases:

• Additional performance and scalability improvements
• Added new filter criteria to the /v2/hosts API. Hosts can now be searched by trust status, and the response data when retrieving host details can now optionally also include the host status and Trusted state. See the HVS Swagger docs for details.
• Host searches will now return data in a consistent order (based on the timestamp when the host was registered), and can be sorted in ascending or descending order. See the HVS Swagger docs for details.
• The CLI command "setup server" has been replaced by "setup update-service-config" across all Foundational Security services. See the Product Guide for details.

SKC/SGX Attestation Use Cases:

• Containerized deployment of SKC and SGX Attestation Use Cases supported
• Added support for pyKMIP integration with the SKC Use Case; see the Quick Start Guide for deployment details
• Additional performance and scalability improvements
• Added new filter criteria to the /v2/hosts API. See the HVS Swagger docs for details.
• SGX Sample Application and Verifier enhancements

**Known Issues:**

• Sometimes the SGX compute node may become inaccessible after the secure key transfer. See guidance in the Product Guide.
• Running the SHVS setup task after changing a config value fails. See guidance in the Product Guide.

**Intel(R) SecL-DC Version 3.4**

Foundational/WL Use Cases:

• Some environment variables have changed for clarity/consistency. These changes are in:
  - populate-users.env
  - trustagent.env
  - wpm.env

  See the Product Guide for details.
- Backend changes have been made to improve the performance of the HVS, particularly for large-scale deployments.

SKC/SGX Attestation Use Cases:

- Streamlined discovery and registration flows
- Upgrades and bug fixes: upgraded to the DCAP PV version, plus a few security and performance bug fixes
- Support added for the SGX Sample Verifier App and its integration

Intel(R) SecL-DC Version 3.3

Foundational/WL Use Cases:

- The Integration Hub installation variables have been adjusted. See the Product Guide for details on the updated .env file options.
- Compatibility updates for integration with OpenStack Ussuri
- Validated with RHEL 8.3

Known Issues:

- Due to a change in libvirt behavior, there is a new prerequisite libvirt configuration change for Workload Confidentiality with Virtual Machines. Before installing the Workload Agent, in /etc/libvirt/qemu.conf, ensure that the setting "remember_owner" is set to 0, and then restart the libvirtd service. If this step is not performed before launching encrypted VMs, on VM restart you will see errors similar to the following: "Error starting domain: internal error: child reported (status=125): Requested operation is not valid: Setting different SELinux label on /var/lib/nova/instances/15d7ec2f-27ad-41ed-9632-32a83c3d10ef/disk which is already in use"

SKC Use Cases:

- Added OpenStack* Ussuri* support for Security Aware Orchestration
- Compatibility upgrades to align with the Intel DCAP APIs for easier integration of SKC/SGX Attestation Use Cases

Intel(R) SecL-DC Version 3.2

Foundational/WL Use Cases:

- The Key Broker now supports both Secure Key Caching and Foundational Security workflows with a single codebase. Previously separate KBS builds were required for each of these use cases, and they have now been merged into a single service.
- VMware Cluster Registration has been re-enabled in the Host Verification Service. This function allows registration of an entire vCenter cluster object, which causes ESXi hosts to be automatically registered or un-registered for attestation in the HVS as they are added to or removed from the vCenter cluster.
- Performance improvements

SKC Use Cases:

- SKC workload and SGX Agent support Ubuntu 18.04 for the Secure Key Caching use case
- Added support for SGX Attestation

**Intel(R) SecL-DC Version 3.1**

Foundational/WL Use Cases:

- Added support for CRI-O and Skopeo to the Container Confidentiality use case. Previously only the Docker container runtime was supported for this use case.
- The Integration Hub now also pushes information about enabled hardware security features to Kubernetes, in addition to the existing Trust and Asset Tag information.
- Added deployment support through Ansible Galaxy; please see the Quick Start Guide for details.
- Postman collections created for the SKC use case; please see the Quick Start Guide for details.

SKC Use Cases:

- Added containerized workload support for Secure Key Caching based on Intel(R) SGX technology
- Added support for choosing the Sandbox or Production PCS through the Caching Service answer file
- Added support for Secure Key Caching in the Integration Hub
- Added deployment support through Ansible Galaxy; please see the Quick Start Guide for details.
- Postman collections created for the SKC use case; please see the Quick Start Guide for details.

**Intel(R) SecL-DC Version 3.0**

Updated name – Intel(R) Security Libraries encompasses use cases and enablement solutions for multiple Intel(R) security features.
As new features are covered by Intel(R) SecL, the need has arisen to distinguish use cases based on different classes of security features. Documents and applications that build on platform integrity attestation solutions related to hardware Root of Trust technologies will be referred to as "Foundational Security" elements. Other Intel(R) SecL solutions will have their own documentation and applications, including those for SGX.

- Added support for Secure Key Caching based on Intel(R) SGX technology
- The Verification Service has been renamed the Host Verification Service and rewritten in Go
  • The base URL for all HVS APIs has been updated to replace "mtwilson" with "hvs". The previous base URL will remain functional for a time for backward compatibility, but users are advised to update any integrations to use the new "hvs" base URL.
  • All HVS CLI commands have been changed from "mtwilson" to "hvs".
  • The installation answer file has been renamed from "mtwilson.env" to "hvs.env".
  • Installation variables and configuration elements have changed; please see the Product Guide for details.
- The Integration Hub has been rewritten in Go
  • The installation answer file has been renamed "ihub.env".
  • Installation variables and configuration elements have changed; please see the Product Guide for details.
  • The Hub no longer uses an API webserver, and no longer requires configuration of "tenants" or assigning hosts to tenants. Instead, a separate Hub must be deployed for each integration endpoint (OpenStack, Kubernetes, etc.). See the Product Guide for details.
  • Some CLI commands have changed. See the Product Guide for details.
- The Intel(R) SecL Custom Resource Definitions for Kubernetes integration have been updated to be entirely container-based. The installer binary will now configure containers on the Kubernetes master node to handle the CRD functions.

Intel(R) SecL-DC Version 2.2.1

• Removed IP address salt for TA and VS

Bug Fixes:

• Resolved an issue where the TA provisioning would fail with certain TPM modules

Intel(R) SecL-DC Version 2.2

Bug Fixes:

• Resolved an issue where the Key Broker uninstall would fail to remove some files
• Resolved an issue that would cause the Integration Hub to fail to retrieve attestation report updates
• Resolved an issue that would occasionally cause containers protected by the Container Confidentiality feature to have multiple failed start attempts before starting successfully
• Updated sample Postgres installation scripts to use an appropriate number of max_connections
• Resolved an issue where Trust Agent hosts could appear untrusted after restarting the Agent

Intel(R) SecL-DC Version 2.1

Added support for 3rd-party KMIP key managers to the Intel(R) SecL-DC Key Broker:

• The Key Broker still supports a built-in basic key management system for POCs; it is not intended for use in production.
• The Key Broker supports KMIP version 2.0 compatible 3rd-party key managers.

**Added support for Trusted Virtual Kubernetes Worker Nodes**

- Addresses the Chain of Trust for Kubernetes Worker Nodes running as Virtual Machines
- VM Attestation Reports are now created in the Workload Service for all VM starts through libvirt, including VMs not encrypted by the Workload Confidentiality feature. Currently the trust status of the VM is effectively the trust status of the underlying host.
- Database clients for the Workload Service and the Authentication and Authorization Service will now validate the database server certificate Subject Alternative Names and Common Name.
Corresponding changes to the Verification Service are planned for a future release.

- The provided install_pgdb.sh and create_db.sh scripts have been modified to use new env file options (ISECL_PGDB_CERT_DNS, ISECL_PGDB_CERT_IP) to configure the database certificate. If these env file variables are not set, the database scripts will generate a self-signed certificate using localhost and 127.0.0.1 as the SAN list, meaning the database will be accessible only on the local server. These variables must be configured with the appropriate IP address/hostname if a remote database will be used.

**Known Issues**

When using Workload Confidentiality to launch multiple Docker container replicas, containers may go into the CrashLoopBackOff state. The replicas will still start as expected after a small number of failed attempts, impacting container startup performance.

**Intel(R) SecL-DC Version 2.0**

The Trust Agent is now written in Go.

- The Trust Agent installer no longer automatically installs tboot. Instructions for tboot installation are now included in the Product Guide.
- The Trust Agent installer no longer automatically installs the Workload Agent.
- The Trust Agent configuration file and CLI commands have changed with the migration to Go. See the relevant sections in the Product Guide for details.
- All services now support a granular permissions-based model for roles (instead of only predefined roles with hard-coded permissions)
- Added support for RHEL 8.1
- Removed support for RHEL 7
- Resolved an issue where, if a software manifest was deleted from a Trust Agent host, the host could still appear trusted even though the measurements required in the flavor would now be missing.

**Intel(R) SecL-DC Version 1.6.1**

Updated the Workload Agent for Workload Confidentiality using Docker container encryption. An update to the Docker runtime required an adjustment to the Secure Docker Daemon used to manage encrypted containers.

**Intel(R) SecL-DC Version 1.6**

**Added the Signed Flavor feature**

- Allows the Verification Service to sign Flavors and verify the signature at attestation time to maintain the integrity of the Flavors.

**Added the Workload Confidentiality feature**

- Allows image owners for virtual machines or Docker containers to encrypt the source images of their workloads. Encryption keys remain under the image owner's control, and are released to specific servers, sealed to that server's TPM, upon a successful integrity attestation with attributes that meet policy requirements determined by the workload image owner. Because the image decryption key is sealed to the TPM of the host that was attested, only a server that meets the requirements of the image owner, as proven by an attestation report, can successfully access the image.
- Adds the new Workload Service (WLS)
  - The Workload Service manages mapping image IDs (as they exist in image storage, i.e., OpenStack Glance) to key IDs.
- Adds the new Workload Agent (WLA)
  - Manages compute node/worker node operation, intercepting attempted launches of encrypted workloads, making requests for keys, and managing crypto volumes for accessed images.
- Adds the new Key Broker Service (KBS)
  - Acts as the policy manager for handling key requests. Verifies that received attestation reports are signed by a known Verification Service and that the attestation attributes match policy requirements.
- Adds the new Workload Policy Manager (WPM)
  - Application that encrypts a new workload image

Authentication for the new components (WLS, WLA) now uses token-based authentication provided by the new Authentication and Authorization Service (AAS). This is planned to replace the existing authentication mechanisms for all Intel SecL services in the 1.6 release version.

Added the new Certificate Management Service (CMS). This service will replace and centralize all existing certificate management functions in all Intel SecL services for the 1.6 release version. In the BETA release, this is currently integrated for the AAS and WLS.

**Intel(R) SecL-DC Version 1.5**

- Updated algorithms to use SHA384 instead of SHA256
- Updated key generation to use RSA-3K
- Added support for additional Root of Trust options – Intel BootGuard and UEFI SecureBoot – including removing the tboot requirement if UEFI SecureBoot is enabled (due to incompatibility)
- Added integration support for Kubernetes pod scheduling based on Intel® SecL security attributes
- Added the Application Integrity feature
  - Allows the Chain of Trust to extend above the OS kernel using a new measurement agent (tbootXM) built into initrd
  - Supports boot-time measurement and attestation of any static files/folders on the bare-metal Linux file system, allowing administrators to identify application-specific collections of files and folders to attest as part of a new SOFTWARE Flavor part.
  - Includes a default manifest of Intel® SecL Trust Agent components so that the Agent itself will be included in Platform Integrity attestation
  - Example use cases include creating a SOFTWARE Flavor for QEMU/KVM and libvirt on virtualization platforms, or for dockerd or other container runtimes on container-based platforms

**Intel(R) SecL-DC Version 1.4**

Resolved Bugs:

- Additional security enhancements following penetration testing

New Features:

- Changed the "BIOS" Flavor part to the "PLATFORM" Flavor part for more accurate naming and applicability for future features
- Removed the "COMBINED" Flavor. This feature is better served using Flavorgroups without making special Flavors that do not match the normal Flavor standards.
- Updated to support Red Hat Enterprise Linux 7.6
- Changed the TPM interface to use TSS APIs instead of tpm2-tools and tpm-tools

**Intel(R) SecL-DC Version 1.3**

Resolved Bugs:

- Updated the versions of some of the 3rd-party open source dependent components to the latest version to address the CVEs found in them.
- Updated to use the latest .NET framework and VC runtime version for the Windows Trust Agent.

New Features:

- Script for installing the prerequisite packages for the Linux build system
- Script for automating the complete build process from source and generating Docker container binaries for the Linux Trust Agent, Verification Service and Integration Hub
- Documentation on steps to run the prerequisite and build scripts for Linux and Windows

**Intel(R) SecL-DC Version 1.2**

New Features:

- Added support for running the ISecL services in Docker containers (Verification Service, Trust Agent, and Integration Hub)
- Added support for Platform Attestation of TPM 2.0 ESXi hosts with vSphere 6.7u1.
Asset Tag is not currently supported for TPM 2.0 with VMware hosts; TPM 1.2 ESXi hosts remain supported.

**Intel(R) SecL-DC Version 1.1**

New Features:

- Added support for RHEL 7.5 (Verification Service, Trust Agent, and Integration Hub)
- Added support for OpenStack Rocky integration

System Improvements:

- Improved database structure for better performance and scalability
- Added database rotation to natively prevent unbounded disk utilization and improve query performance
- Updated default database and other configuration settings for stability at large scale
- Improved error handling and performance of queue operations (flavor matching, etc.)

**Intel(R) SecL-DC Version 1.0.1**

- Updated Javadoc REST API documentation

**Intel(R) SecL-DC Version 1.0**

New Features:

Hardware-rooted Platform Trust Attestation

Intel Security Libraries leverage Intel Trusted Execution Technology and the Trusted Computing Group standards to establish a measured boot environment for servers that use Intel Xeon processors and a Trusted Platform Module. This measured boot environment allows a server's actual boot state to be compared to known-good values, which enables the detection of malicious code injection, rootkits, unacceptable firmware or software versions, etc. Remote attestation of this comparison through ISecL provides a clear audit report of the boot state of servers in the datacenter to ensure compliance and improve security.

Asset Tag Attestation

Intel Security Libraries allow the generation and provisioning of user-defined key/value pairs that can be securely provisioned into the physical TPM of a host and included in the remote attestation process. This allows datacenter administrators or cloud consumers to gain visibility into tagged attributes, such as the location of the server hardware.

Support for Red Hat Enterprise Linux, Microsoft Windows Server, and VMware vSphere

Support for TPM 1.2 and TPM 2.0

Unified "Flavor" whitelisting architecture

"Flavors" describe acceptable configuration elements in server firmware and software in a standardized, extensible format.

Automatic Flavor Matching for easy datacenter lifecycle management

The ISecL Host Verification Service features automatic matching of Flavors to Hosts in the datacenter, allowing for easy yet extremely customizable management of acceptable datacenter configurations.

Parallel delivery of functionality through integration libraries and combined services

Intel Security Libraries is distributed in two forms:

- As a set of integration libraries targeted at system integrators, ISVs, and customers who want to develop their own solutions based on ISecL functions
- As a set of full Service components that offer already-integrated functionality and a ready-to-use REST interface

Integration Hub provides an easy integration point for scheduler services

Scheduler services (such as in OpenStack) can consume the Trust and Asset Tag attestation information to make scheduling decisions, controlling where workloads are allowed to launch or move based on the attestation status or asset tags of the hosts in the datacenter.
{"Source-Url": "https://01.org/sites/default/files/documentation/intel_secl_releasenotes_10.pdf", "len_cl100k_base": 5757, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 28032, "total-output-tokens": 6418, "length": "2e12", "weborganizer": {"__label__adult": 0.0004167556762695313, "__label__art_design": 0.00034236907958984375, "__label__crime_law": 0.00063323974609375, "__label__education_jobs": 0.00025963783264160156, "__label__entertainment": 8.851289749145508e-05, "__label__fashion_beauty": 0.0001914501190185547, "__label__finance_business": 0.0008215904235839844, "__label__food_dining": 0.00025081634521484375, "__label__games": 0.0008921623229980469, "__label__hardware": 0.0109710693359375, "__label__health": 0.0002961158752441406, "__label__history": 0.0002135038375854492, "__label__home_hobbies": 0.00011712312698364258, "__label__industrial": 0.001392364501953125, "__label__literature": 0.0001481771469116211, "__label__politics": 0.0003612041473388672, "__label__religion": 0.00043272972106933594, "__label__science_tech": 0.07769775390625, "__label__social_life": 7.295608520507812e-05, "__label__software": 0.09405517578125, "__label__software_dev": 0.8095703125, "__label__sports_fitness": 0.00027060508728027344, "__label__transportation": 0.0005464553833007812, "__label__travel": 0.00018107891082763672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27418, 0.01166]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27418, 0.05001]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27418, 0.88096]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 3072, false], [3072, 6692, null], [6692, 8525, null], [8525, 10572, null], [10572, 12406, null], [12406, 14666, null], [14666, 16940, null], [16940, 19570, null], [19570, 21833, null], [21833, 23934, null], [23934, 25012, null], [25012, 27418, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 3072, true], [3072, 6692, null], [6692, 8525, null], [8525, 10572, null], [10572, 12406, null], [12406, 14666, null], [14666, 16940, null], [16940, 19570, null], [19570, 21833, null], [21833, 23934, null], [23934, 25012, null], [25012, 27418, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27418, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27418, null]], "pdf_page_numbers": [[0, 0, 1], [0, 3072, 2], [3072, 6692, 3], [6692, 8525, 4], [8525, 10572, 5], [10572, 12406, 6], [12406, 14666, 7], [14666, 16940, 8], [16940, 19570, 9], [19570, 21833, 10], [21833, 23934, 11], [23934, 25012, 12], [25012, 27418, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27418, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
eb37604a7a0adbd7d8b5190a2b64a3d2478f662b
TOWARD AGENT ORIENTED SOFTWARE ENGINEERING FOR DISTRIBUTED SCHEDULING

ANA MADUREIRA, JOAQUIM SANTOS, NUNO GOMES, ILDA FERREIRA

GECAD – Knowledge Engineering and Decision Support Group
Institute of Engineering – Polytechnic of Porto
Porto, Portugal

Abstract – Software engineers have derived a progressively better understanding of the complexity characteristics in software. It is now widely recognised that interaction is probably the most important single characteristic of complex software. Agent-based computing can be considered as a new general-purpose paradigm for software development, which tends to radically influence the way a software system is conceived and developed, and which calls for new agent-specific software engineering approaches. This paper addresses distributed manufacturing scheduling and describes an architecture following Agent Oriented Software Engineering (AOSE) guidelines through a specification defined by the Ingenias methodology. This architecture is based on a Multi-Agent System (MAS) composed of a set of autonomous agents that cooperate in order to accomplish a good global solution.

Keywords: AOSE Paradigm, Distributed Scheduling, Multi-Agent Systems, Meta-Heuristics.

1. INTRODUCTION

A major challenge in the area of the global market economy is the development of new techniques for solving real-world scheduling problems. Indeed, an industrial organization can only be economically viable by maximizing customer service, maintaining efficient, low-cost operations and minimizing total investment. Traditional scheduling methods encounter great difficulties when they are applied to some real-world situations. Although the interest in optimization algorithms for dynamic optimization problems is growing and a number of authors have proposed an even greater number of new approaches, the field lacks a general understanding as to suitable benchmark problems, fair comparisons and measurement of algorithm quality [1][2][7][14].

Current practices and newly observed trends lead to the development of new ways of thinking, managing and organizing in enterprises, where autonomy, decentralization and distribution are some of the challenges. In manufacturing, a new class of software architectures and organizational models has appeared to give form to the Distributed Manufacturing System concept [5].

Since the 1980s, software agents and multi-agent systems have grown into what is now one of the most active areas of research and development activity in computing generally. There are many reasons for the current intensity of interest, but certainly one of the most important is that the concept of an agent as an autonomous system, capable of interacting with other agents in order to satisfy its design objectives, is a natural one for software designers. Different proposals in the field of Agent Oriented Software Engineering (AOSE) try to integrate results from agent research with engineering practices, some from the perspective of agent theory, some as an evolution of object-oriented systems, others as task execution models, and others from a knowledge-based systems approach. In recent years, the characteristics and expectations of software systems have changed dramatically, with the result that a variety of new software engineering challenges have arisen [3][23][24].
In this work we have two main purposes: the first is the resolution of more realistic scheduling problems in the domain of manufacturing environments, known as Extended Job-Shop Scheduling Problems [15-16], combining Multi-Agent Systems (MAS) and Meta-Heuristics technologies. The second is to demonstrate that the integration of Software Engineering concepts, like the AOSE paradigm, is important for MAS development. The proposed team-based architecture is rather different from the ones found in the literature, as we try to implement a system where each agent (Machine Agent) is responsible for achieving a near-optimal solution for scheduling the operations related to one specific machine, through Tabu Search or Genetic Algorithms. After local solutions are found, each Machine Agent is required to cooperate with other Machine Agents in order to achieve a global optimal schedule.

The remaining sections are organized as follows: Section 2 summarizes some related work and the research on the use of multi-agent technology for dynamic scheduling resolution. Section 3 introduces some terms and definitions, such as coordination, negotiation and cooperation in Multi-Agent Systems; this section also presents some Agent-Oriented Methodologies and describes some considerations regarding Software Architectures and Multi-Agent Systems. In Section 4 the scheduling problem under consideration is defined. Section 5 presents the Team-Work based Model for Dynamic Manufacturing Scheduling and a proposal following the Ingenias methodology. Finally, the paper presents some conclusions and puts forward some ideas for future work.

2. RELATED WORK

Dynamic scheduling is a field that is receiving increasing attention amongst both researchers and practitioners. In spite of all previous contributions, the scheduling problem is still known to be NP-complete [2]. This fact incites researchers to explore new directions. Multi-agent technology has been considered an important approach for developing industrial distributed systems. In [19] Shen and Norrie presented a state-of-the-art survey referencing a number of publications that attempted to solve distributed dynamic scheduling problems. According to these authors, there are two distinct approaches in the mentioned work. The first is based on an incremental search process that may involve backtracking. The second approach is based on systems in which an agent represents a single resource and is therefore responsible for scheduling that resource. Agents then negotiate with other agents in order to accomplish a feasible solution. For further works developed on MAS for dynamic scheduling, see, for example, [7][15].

The characteristics and expectations of software systems have changed dramatically in the last few years, with the result that a range of new software engineering challenges have arisen [3][23]. First, most software systems are concurrent and distributed, and are expected to interact with components and exploit services that are dynamically found in the network. Second, software systems are becoming "always-on" entities that cannot be stopped, restored, and maintained in the traditional way. Finally, current software systems tend to be open, because they exist in a dynamic operating environment where new components can join and existing components can leave the system on a continuous basis, and where the operating conditions themselves are likely to change in unpredictable ways.
From the literature we can conclude that agent-based computing is a promising approach for developing applications in complex domains. However, despite the great research effort [14][24][25], a number of challenges remain before agent-based computing becomes a widely accepted paradigm in software engineering practice. In order to realize such an engineering change, it is necessary to turn agent-oriented software abstractions into practical tools for facing the complexity of modern application areas.

3. MULTI-AGENT SYSTEMS

Agents and multi-agent systems (MAS) have recently emerged as a powerful technology for dealing with the complexity of current Information and Communication Technologies environments. In this section we describe some issues and considerations regarding the development of MAS from a software engineering perspective.

A. Terms and Definitions

The development of multi-agent systems requires powerful and effective modelling, architectures, methodologies, notation techniques, languages and frameworks. Agent-based computing can be considered as a new general purpose paradigm for software development, which tends to radically influence the way a software system is conceived and developed, and which calls for new, agent-specific, software engineering approaches [23].

The central term of multi-agent based computing is the Agent. However, there is no common consensus on the definition of the term; in the last few years most authors have agreed that the definition depends on the domain where agents are used. Ferber [10] proposes the following definition: "An agent is a virtual or physical autonomous entity which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The agent should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."

An agent can generally be viewed as a software entity with characteristics [21] such as:
- Autonomy - an agent has its own internal thread of execution, typically oriented to the achievement of a specific task, and it decides for itself what actions it should perform at what time.
- Situatedness - agents perform their actions while situated in a particular environment.
- Proactivity - in order to accomplish its design objectives in a dynamic and unpredictable environment, the agent may need to act to ensure that its set goals are achieved and that new goals are opportunistically pursued whenever appropriate.
- Sociability - agents interact (cooperate, coordinate or negotiate) with one another, either to achieve a common objective or because this is necessary for them to achieve their own objectives.

A Multi-Agent System (MAS) can be defined as "a system composed of a population of autonomous agents, which cooperate with each other to reach common objectives, while simultaneously each agent pursues individual objectives" [10]. According to Russell and Norvig [18], multi-agent systems "[...] solve complex problems in a distributed fashion without the need for each agent to know about the whole problem being solved". We can see a MAS as a society of agents that cooperate to work in the best way possible; with this we gain the ability to solve complex problems such as dynamic and distributed scheduling.
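To make the four characteristics above concrete, the following sketch shows one way they might surface in code. It is purely illustrative: the types and names are our own and do not come from any of the cited MAS frameworks.

```swift
// Illustrative sketch only: hypothetical types showing how the four agent
// characteristics might appear in code; not an API from the cited frameworks.
import Foundation

// Sociability: agents interact by exchanging messages.
struct Message {
    let sender: String
    let content: String
}

protocol Agent {
    var name: String { get }
    func pursueGoals()              // Proactivity: the agent pursues its own goals.
    func perceive(event: String)    // Situatedness: the agent reacts to its environment.
    func receive(_ message: Message) // Sociability: cooperation/negotiation via messages.
}

// Autonomy: each agent owns its internal thread of execution.
final class MachineAgent: Agent {
    let name: String
    private let queue: DispatchQueue // the agent's own thread of control

    init(name: String) {
        self.name = name
        self.queue = DispatchQueue(label: "agent.\(name)")
    }

    func pursueGoals() {
        queue.async { print("\(self.name): optimizing local schedule") }
    }

    func perceive(event: String) {
        queue.async { print("\(self.name): reacting to \(event)") }
    }

    func receive(_ message: Message) {
        queue.async { print("\(self.name): \(message.sender) says \(message.content)") }
    }
}
```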
Considering the complexity inherent in manufacturing systems, dynamic scheduling is considered an excellent candidate for the application of agent-based technology. In many implementations of multi-agent systems for manufacturing scheduling, the agents model the resources of the system, and task scheduling is done in a distributed way by means of cooperation and coordination amongst agents [23]. There are also approaches that use a single agent for scheduling (a centralized scheduling algorithm) which defines the schedules that the resource agents will execute [21][23]. When responding to disturbances, the distributed nature of multi-agent systems can also benefit the rescheduling algorithm by involving only the agents directly affected, without disturbing the rest of the community, which can continue with its work. The main advantages of a multi-agent system are its abilities of coordination and cooperation in pursuit of a common objective.

B. Coordination, Negotiation and Cooperation

The development of MAS must consider some important organizational issues, such as coordination, negotiation and cooperation. These issues play an important role, because a set of autonomous agents can only act as a MAS if the agents can communicate in a flexible and trustworthy way.

Coordination is defined in the literature as "the act of working as a group in a harmonious way" [12]. This means that autonomous agents must be an active part of the system in spite of their own goals; a coordinated system is needed so that objectives, whether individual or global, can be pursued. Cooperation is the act of combining efforts in order to pursue common objectives that cannot be reached individually. To allow this cooperation, autonomous agents must be endowed with a certain social ability that allows interaction with other agents through a communication protocol [17][23]. Negotiation can be defined as the process in which at least two parties, a sender and a receiver, communicate through a communication protocol in order to reach an agreement.

A MAS must implement a set of such mechanisms, which may differ according to the system's objectives. If the autonomous agents are intended to work as a team, a cooperation mechanism should be considered; if, instead, they are intended to pursue their own individual goals, a negotiation mechanism is probably the best option. The above definitions are neither absolute nor universally agreed upon, but in our opinion they are useful because they clarify which mechanism is advisable for which kind of system.

4. AGENT-ORIENTED SOFTWARE METHODOLOGIES

Several methodologies for the analysis and design of MAS have been proposed in the literature; however, only a few of them focus on organizational abstractions. The MASE methodology [20] provides guidelines for developing MAS based on a multi-step process.
In analysis, the requirements are used to define use-cases and application goals and sub-goals, and eventually to identify the roles to be played by the agents and their interactions. In design, agent classes and agent interaction protocols are derived from the outcome of the analysis phase, leading to a complete architecture of the system.

The MESSAGE methodology [4] exploits organizational abstractions that can be mapped onto the abstractions identified by GAIA. In particular, MESSAGE defines an organization in terms of a structure, determining the roles to be played by the agents and their topological relations (i.e., the interactions occurring among them). In addition, in MESSAGE an organization is also characterized by a control entity and by a workflow structure.

The GAIA methodology described by Zambonelli et al. [25] is an extension of the version described by Wooldridge et al. [22]. The first version of GAIA provided a clear separation between the analysis and design phases; however, it suffered from limitations caused by the incompleteness of its set of abstractions. The objective of the analysis phase in the first version of GAIA was to define a fully elaborated role model, derived from the system specification, together with an accurate description of the protocols in which the roles would be involved. This implicitly assumed that the overall organizational structure was known a priori (which is not always the case). In addition, by focusing exclusively on the role model, the analysis phase in the first version of GAIA failed to identify both the concept of global organizational rules (thus making it unsuitable for modelling open systems and for controlling the behaviour of self-interested agents) and the modelling of the environment. The new version of GAIA overcomes these limitations.

The TROPOS methodology, first proposed in [9], adopts the organizational metaphor and places an emphasis on the explicit study and identification of the organizational structure of the system.

5. PROBLEM DEFINITION

Most real-world multi-operation scheduling problems can be described as dynamic and extended versions of the classic or basic Job-Shop scheduling combinatorial optimization problem. The general Job-Shop Scheduling Problem (JSSP) can be described as a decision-making process on the allocation of a limited set of resources over time to perform a set of tasks or jobs. In this work we consider several extensions and additional constraints to the classic JSSP, namely:
- the existence of different job release dates;
- the existence of different job due dates;
- the possibility of job priorities;
- machines that can process more than one operation of the same job (recirculation);
- the existence of alternative machines;
- precedence constraints among operations of different jobs (quite often, mainly in discrete manufacturing, products are made of several components that can be seen as different jobs whose manufacture must be coordinated);
- the existence of operations of the same job, on different parts and components, processed simultaneously on different machines, followed by component assembly operations.

These extensions characterize the Extended Job-Shop Scheduling Problem (EJSSP) [15-16].
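To gather these extensions in one place, a minimal data-model sketch follows. It is our own illustration; the names and types are assumptions and are not taken from [15-16].

```swift
// Minimal, illustrative data model for the EJSSP; field names are our own
// assumptions, not definitions from [15-16].
import Foundation

struct Operation {
    let id: Int
    let processingTime: Int        // time units required on a machine
    let eligibleMachines: [Int]    // alternative machines that can process it
    let predecessors: [Int]        // precedence constraints, possibly across
                                   // different jobs (component assemblies)
}

struct Job {
    let id: Int
    let releaseDate: Int           // jobs become available at different times
    let dueDate: Int               // jobs have different due dates
    let priority: Int              // optional job priorities
    let operations: [Operation]    // recirculation: two operations of the same
                                   // job may require the same machine
}

// The dynamic aspect: the problem instance changes while scheduling runs.
enum SchedulingEvent {
    case jobArrived(Job)
    case jobCancelled(jobID: Int)
    case machineBreakdown(machineID: Int)
    case dueDateChanged(jobID: Int, newDueDate: Int)
}
```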
Moreover, in practice, scheduling environments tend to be dynamic: new jobs arrive at unpredictable intervals, machines break down, jobs are cancelled, and due dates and processing times change frequently.

6. MULTI-AGENT SYSTEM FOR DISTRIBUTED MANUFACTURING SCHEDULING WITH GENETIC ALGORITHMS AND TABU SEARCH

This section describes the architecture proposed for dynamic and distributed scheduling and proposes a specification of it through the Ingenias methodology.

A. MASDScheGATS Architecture

Distributed approaches are important for improving scheduling systems' flexibility and capacity to react to unpredictable events. It is accepted that new generations of manufacturing facilities, with increasing specialization and integration, pose more challenging problems to scheduling systems. For that reason, issues like robustness, regeneration capacity and efficiency are currently critical elements in the design of manufacturing scheduling systems and have encouraged the development of new architectures and solutions leveraging MAS research results.

The work described in this paper is a system in which a community of distributed, autonomous, cooperating and asynchronously communicating machines, often with conflicting behaviours, tries to solve scheduling problems. A global system behaviour can emerge with the required reactivity and flexibility to cope with external perturbations.

The main purpose of MASDScheGATS (Multi-Agent System for Distributed Manufacturing Scheduling with Genetic Algorithms and Tabu Search) is to create a multi-agent system where each agent represents a resource (Machine Agent) in a manufacturing system. Each Machine Agent is able to find an optimal or near-optimal local solution through Genetic Algorithms or Tabu Search meta-heuristics, to change or adapt the parameters of the basic algorithm according to the current situation, or even to switch from one algorithm to another. In our case the dynamic scheduling problem is decomposed into a series of Single Machine Scheduling Problems (SMSP) [15-16]. The Machine Agents obtain local solutions and cooperate in order to overcome inter-agent constraints and achieve a global schedule.

Agents agree to work together in order to solve a problem that is shared by all agents in the team. Such an approach allows for the resolution of large-scale problems that a single agent would not be able to solve. Moreover, the Team-based architecture has the ability to meet global constraints, given the agents' capability to act in concert. As we shall see later, this characteristic is critical for the problem treated in this work.

The proposed architecture (Figure 2) is based on three different types of agents. In order to allow seamless communication with the user, a User Interface Agent is implemented. This agent, apart from being responsible for the user interface, generates the necessary Task Agents dynamically, according to the number of tasks that comprise the scheduling problem, and assigns each task to the respective Task Agent. Each Task Agent processes the necessary information regarding its task: it is responsible for generating the earliest and latest processing times, verifying feasible schedules, identifying constraint conflicts in each task, and deciding which Machine Agent is responsible for solving a specific conflict.
Finally, the Machine Agent is responsible for the scheduling of the operations that require processing on the machine supervised by the agent. This agent implements meta-heuristic and local search procedures in order to find the best possible operation schedules, and communicates those solutions to the Task Agent for a later feasibility check (Figure 3).

B. Proposed specification through Ingenias

The development cycle proposed by the INGENIAS methodology (http://grasia.fdi.ucm.es/ingenias/) sees a MAS as a computational representation of a set of models, each of which gives a partial view of the system: the definition of the autonomous agents that compose the system, the interactions between agents, the system organization, the domain, and tasks and objectives. In order to specify what these models must contain, meta-models are defined. A meta-model is a representation of all the types of entities that can exist in a model, their relations, and applicable restrictions. The meta-models used in this methodology are an evolution of the MESSAGE methodology work [4]. The methodology uses five kinds of meta-models, which describe the corresponding diagrams:
1. Organization meta-model: defines groups of agents, system functionality and restrictions on agents' behaviour. It is the equivalent of the system architecture in a MAS. Its important contribution is the definition of workflows.
2. Interaction meta-model: details how agents coordinate and communicate among themselves. Defining the system's interactions allows dependencies among components to be identified.
3. Agent meta-model: describes individual agents, excluding their interactions with other agents, and the mental states they hold over their life cycle. This meta-model is centred on agent functionality and control design; it gives information about the responsibilities or tasks that an agent is able to perform.
4. Tasks and Objectives meta-model: attaches an agent's mental state to the tasks it executes. It is used to collect the MAS motivations and to define and assign the actions identified in the organization, interaction or agent models.
5. Environment meta-model: defines everything that is present in the environment and the way in which each agent perceives it. Its main function is to identify all environment elements and to define their relations with the other entities.

Ingenias seems a promising tool for the generic modelling of the system, although we have noted that this approach has particular drawbacks for the specification of negotiation mechanisms and for the self-parameterization behaviour of the agents.

7. CONCLUSIONS AND FUTURE WORK

The Team-Work based architecture for distributed scheduling that we propose in this paper seems to be a good way to solve real-world scheduling problems, because a good global solution may emerge from a set of autonomous agents that cooperate, through a communication mechanism, to accomplish a common goal. Coordination seems to be the key issue in MAS, because the autonomous agents cannot work together effectively if even one participant in the system does not act as an active part of it. In our opinion, depending on the MAS objectives, agents can interact in two distinct ways: through cooperation, if the global goal is considered more important than the individual ones; or, if agents pursue their individual goals first, the system should probably incorporate a negotiation mechanism to improve system performance.
We consider that the AOSE paradigm can play an important role when a MAS is being developed, because with such a specification it becomes easier to find problems by observing the global system structure. When a structural problem is discovered in the middle of a system's implementation, it usually means a significant loss of time. Work still to be done on the MASDScheGATS system includes testing the system and its negotiation mechanisms under dynamic environments subject to several random perturbations. The proposed AOSE approach needs to be refined in order to support dynamic environments with unexpected disruptions, which cannot be fully anticipated in the modelling because they can happen without any specific warning. In spite of this, in our opinion this kind of work can be very significant in turning MAS development into a structured process that does not go from modelling to implementation without intermediate testing and validation.

ACKNOWLEDGEMENTS

The authors would like to acknowledge FCT, FEDER, POCTI, and POCI 2010 for their support of R&D projects and the GECAD Unit.

REFERENCES
{"Source-Url": "http://www.wseas.us/e-library/conferences/2006tenerife/papers/541-550.pdf", "len_cl100k_base": 4625, "olmocr-version": "0.1.49", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 17967, "total-output-tokens": 6339, "length": "2e12", "weborganizer": {"__label__adult": 0.00028061866760253906, "__label__art_design": 0.0004200935363769531, "__label__crime_law": 0.0003726482391357422, "__label__education_jobs": 0.0009474754333496094, "__label__entertainment": 5.888938903808594e-05, "__label__fashion_beauty": 0.00016736984252929688, "__label__finance_business": 0.0005640983581542969, "__label__food_dining": 0.0003452301025390625, "__label__games": 0.0005741119384765625, "__label__hardware": 0.0007419586181640625, "__label__health": 0.0004277229309082031, "__label__history": 0.0002429485321044922, "__label__home_hobbies": 9.620189666748048e-05, "__label__industrial": 0.0012054443359375, "__label__literature": 0.00020563602447509768, "__label__politics": 0.00032591819763183594, "__label__religion": 0.0003905296325683594, "__label__science_tech": 0.049224853515625, "__label__social_life": 7.343292236328125e-05, "__label__software": 0.0099945068359375, "__label__software_dev": 0.93212890625, "__label__sports_fitness": 0.00028228759765625, "__label__transportation": 0.0006651878356933594, "__label__travel": 0.0001838207244873047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29578, 0.0191]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29578, 0.57458]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29578, 0.92109]], "google_gemma-3-12b-it_contains_pii": [[0, 4071, false], [4071, 9747, null], [9747, 15723, null], [15723, 19155, null], [19155, 22777, null], [22777, 29578, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4071, true], [4071, 9747, null], [9747, 15723, null], [15723, 19155, null], [19155, 22777, null], [22777, 29578, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29578, null]], "pdf_page_numbers": [[0, 4071, 1], [4071, 9747, 2], [9747, 15723, 3], [15723, 19155, 4], [19155, 22777, 5], [22777, 29578, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29578, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
9409bd1943299065cea380025f8ef96e81992a2f
Getting Started with iBeacon

**Overview**

Introduced in iOS 7, iBeacon is an exciting technology enabling new location awareness possibilities for apps. Leveraging Bluetooth Low Energy (BLE), a device with iBeacon technology can be used to establish a region around an object. This allows an iOS device to determine when it has entered or left the region, along with an estimation of proximity to a beacon. There are both hardware and software components to consider when using iBeacon technology, and this document gives an introduction to both, along with suggested uses and best practices to help ensure a highly effective deployment leading to an outstanding user experience.

iBeacon has three different audiences. You may fall into one, two, or possibly all three of these categories, depending on your role.

1. **App Developers** If you want to add new location awareness to your application, you would use the Core Location APIs in iOS to be notified when the iOS device has moved into or out of a beacon region. You can also determine approximate proximities to a device generating iBeacon advertisements. Everything you need to get started is included in the iOS SDK; no additional license is required.

2. **People Deploying Devices With iBeacon Technology** Whether you manage a sports arena, a museum, a retail store, or any of the myriad other physical locations where beacons could be employed, you need to be aware of how these devices work, issues surrounding signal strength and materials, and understand how to calibrate and test your deployment. If you are interested in using the iBeacon logo on signage at a venue, but will not make devices with iBeacon technology, you will need to obtain an iBeacon logo license before using the iBeacon logo. Please visit https://developer.apple.com/ibeacon/ to apply for a license to use the iBeacon logo.

3. **People Making Devices With iBeacon Technology** If you are interested in manufacturing devices with iBeacon technology, you will need to obtain a license before building these devices. Please visit https://developer.apple.com/ibeacon/ to apply for an iBeacon license. Licensees receive access to technical specifications, a license to use the iBeacon logo, and the iBeacon Identity Guidelines.

**Devices with iBeacon Technology**

Devices with iBeacon technology can be powered using coin cell batteries for a month or longer, or operate for months at a time using larger batteries, or can be powered externally for extended periods of time. iOS devices can also be configured to generate iBeacon advertisements, although this functionality is limited in scope. This would be appropriate for uses such as a Point of Sale or kiosk application, or for an application that wants to become an iBeacon for a short time while someone is actively using the application.

An iBeacon advertisement provides the following identifying information via Bluetooth Low Energy: a UUID, a major value, and a minor value. Generally speaking, this information is hierarchical in nature, with the major and minor fields allowing for subdivision of the identity established by the UUID. UUIDs can be generated by using the `uuidgen` command line utility in OS X, or programmatically using the NSUUID Foundation class. The following table shows examples of how these values may be used for a nationwide retail store. The UUID is shared by all locations. This allows an iOS device to use a single identifier to recognize any of the stores with a single region.
Each specific store, San Francisco, Paris, and London, is then assigned a unique major value, allowing a device to identify which specific store it is in. Within each individual store, departments are given separate minor values, although these are the same across stores to make it easier for an app on a device to readily identify departments.

<table>
<thead>
<tr> <th>Store Location</th> <th>San Francisco</th> <th>Paris</th> <th>London</th> </tr>
</thead>
<tbody>
<tr> <td><strong>UUID</strong></td> <td colspan="3">D9B9EC1F-3925-43D0-80A9-1E39D4CEA95C</td> </tr>
<tr> <td><strong>Major</strong></td> <td>1</td> <td>2</td> <td>3</td> </tr>
<tr> <td><strong>Minor</strong></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Clothing</td> <td>10</td> <td>10</td> <td>10</td> </tr>
<tr> <td>Housewares</td> <td>20</td> <td>20</td> <td>20</td> </tr>
<tr> <td>Automotive</td> <td>30</td> <td>30</td> <td>30</td> </tr>
</tbody>
</table>

Using this information, an iOS device can identify when it has entered or left one of the stores, which specific store it is, and what department the user might be standing in. These values are determined by the person or organization deploying the beacon devices; UUIDs and major and minor values are not registered with Apple.

iBeacon relies on BLE, and therefore requires an iPhone 4S (or later), iPod touch (5th generation), iPad (3rd generation or later), or iPad mini. For more details about incorporating iBeacon technology in a product, you will need to obtain a license from Apple. Please visit <https://developer.apple.com/ibeacon> to apply for an iBeacon license.

**iBeacon Software — Core Location APIs**

Prior to iOS 7, Core Location used regions defined by a geographic location (latitude and longitude) and a radius, known as a "geofence". iBeacon enables a new level of flexibility by defining regions with an identifier. This allows beacons to be affixed to objects that are not tied to a single location. For example, a beacon device could be used to set a region around a movable object like a food truck or on a cruise ship. Furthermore, the same identifier can be used by multiple devices. This would enable a retail chain to use beacons in all their locations and allow an iOS device to know when it enters any one of them.

**Privacy and Location**

Because iBeacon is part of Core Location, the same user authorization is required in order to use it. Users will see the same location authorization alert when an application attempts to use the iBeacon APIs. Applications that use beacon region APIs in Core Location will appear in the Settings app under Privacy > Location Services, and users can allow or deny an application's access to iBeacon functionality at any time. Furthermore, any Bluetooth packets that are associated with iBeacon are excluded from the Core Bluetooth APIs. As with geofence region monitoring, when in active use the status bar will show a hollow arrow. When using ranging, the status bar will show the solid location arrow.

**Accuracy of iBeacon**

To ensure an effective user experience, it is important to consider how signals from a beacon are detected and used to determine accuracy. When an iOS device detects a beacon's signal, it uses the strength of the signal (RSSI, or Received Signal Strength Indication) to determine both proximity to the beacon and the accuracy of its estimation of proximity. The stronger the signal, the more confident iOS can be about the proximity to the beacon.
The weaker the signal, the less confident iOS can be about the proximity to the beacon. Accuracy can best be understood by relating it to how GPS works in iOS today. When an iOS device can clearly receive GPS signals, such as when a device is in the open outdoors with an unobstructed line of sight to the orbiting GPS satellites, your location can be determined more accurately. This can easily be seen in the Maps application, where the location accuracy is represented by a blue circle surrounding your current location indicator. If a device is indoors or the line of sight to the satellites is obstructed, a large blue circle indicates a lower accuracy. That is, the device could be located anywhere within the blue circle. As the line of sight to the satellites improves (e.g. the device is taken outdoors or removed from a backpack) the accuracy improves, represented by a smaller blue circle. With a better received signal strength the device can narrow the margin of error and be more confident of its location.

When signals are received from a device with iBeacon technology, the signal strength is generally correlated with how far away a device is from the beacon. In an ideal condition (that is, with an unobstructed line of sight between a device's antenna and the beacon), the closer the person is, the more accurate the result. As shown in Figure 1, when a device is far away from a beacon, the signal strength will be lower than when it is close. Due to this diminished signal strength, iOS does not have high confidence in the accuracy of the proximity estimate to the beacon. This is similar to the large blue circle in the GPS example above. As the device gets closer to the beacon, the received signal strength increases and therefore the accuracy of the proximity estimate increases. This would be analogous to the smaller blue circle in the GPS example. Shown in Figure 2, a device that is closer to a beacon will have a higher confidence about its proximity to the beacon emitting the signal.

However, just as GPS signal strength can be diminished by physical objects like buildings or being placed in a backpack, purse or pocket, so can a beacon's signal strength. Signal attenuation, or the loss of intensity of a signal, can be caused by many factors. Physical materials surrounding the beacon, such as the wall depicted in Figure 3 between the device and the beacon, will affect the received signal strength. This may cause the device to believe that the beacon is further away than it actually is. The human body itself is an excellent attenuator of Bluetooth signals. Simply having your back to a beacon (i.e. where your body is positioned between the device and the beacon) will affect the signal strength and thereby lower the accuracy. Figure 4 shows this signal strength being diminished when somebody is physically positioned between the iOS device and the beacon.

When building an application that uses either GPS or a beacon, it is important to consider this accuracy. The values reported by the Core Location objects (the horizontalAccuracy property in the CLLocation class, or the accuracy property in the CLBeacon class) indicate this level of uncertainty, or the margin of error. Both are measured in meters. The higher the value, the lower the certainty of the position of the device or beacon. Keep in mind that depending on the physical surroundings a low accuracy value may not be possible.
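As a small illustration of treating accuracy as a margin of error rather than a precise distance, consider the following sketch (shown in Swift for readability; code of the iOS 7 era would have been Objective-C, and the helper function is our own):

```swift
// Illustrative only: interpret CLBeacon.accuracy as a margin of error in
// meters, not an exact distance. A negative value means the accuracy could
// not be determined.
import CoreLocation

func describe(_ beacon: CLBeacon) -> String {
    guard beacon.accuracy >= 0 else {
        return "Proximity could not be estimated"
    }
    // accuracy is a CLLocationAccuracy (Double), measured in meters; the
    // larger the value, the lower the certainty of the beacon's position.
    return String(format: "Roughly %.1f m away, with a margin of error that grows with attenuation",
                  beacon.accuracy)
}
```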
**Region Monitoring**

Similar to the existing geofence region monitoring, an application can request notifications when a device enters or leaves a region defined by a beacon. When an application makes this request to begin monitoring a beacon region, it must specify the UUID of the iBeacon advertisement. While an app is limited to 20 monitored regions, by using a single UUID in multiple locations a device can easily monitor many physical locations simultaneously. Using the retail store example shown in the table earlier, a device can monitor 3 separate physical locations (San Francisco, Paris, and London) using the same UUID. The impact of this UUID-based approach compared to geofences cannot be overstated: with a single line of code an application can establish monitored regions around an arbitrary number of objects or locations.

In addition to the UUID, an application can optionally supply the major and minor fields to further specify a beacon region to be monitored. Continuing with our retail chain example, if the app only specifies a UUID for the beacon region, then it will be notified when the user enters or leaves any of the retail stores. Since the major field is being used to identify specific stores, if the user only wanted to be notified when entering a specific store, the application could configure the beacon region using the UUID + major value. Or perhaps the user is only interested in being notified when they have entered a specific department in that store. In that case the app would configure the beacon region using UUID + major + minor values. This level of granularity is up to the app developer and can be specified dynamically at runtime.

As with the existing region monitoring, when the user enters or exits the beacon region, the application will be notified. If the application is not currently running (for example, if it was terminated due to memory pressure on the device), then the application is launched in the background and the notification delivered. One important consideration: in iOS 7, if the user explicitly disallows Background App Refresh (either globally or specifically for your app), then your app will no longer receive region monitoring notifications. It can continue to use the ranging APIs, however.

Being based on Bluetooth Low Energy, the typical range will be in the tens of meters, which provides more accurate monitoring than geofence region monitoring (typically on the order of 100 meters minimum). As discussed above, geofences tend to be less accurate indoors, so using iBeacon technology can dramatically improve the region monitoring results for indoor use cases. However, the physical positioning of a beacon, whether the user has their device in their pocket, or simply whether the beacon is in front of or behind the user can all affect the point at which a region is determined to have been entered or exited.

**Ranging**

iOS 7 introduces a new set of APIs for determining the approximate proximity to a device using iBeacon technology, a process known as "ranging". Based on common usage scenarios, iOS applies filters to the accuracy estimate to determine an estimated proximity to a beacon. This estimate is indicated using one of the following four proximity states (a code sketch following the Passbook Integration section below shows both monitoring and ranging in use):

<table>
<thead>
<tr> <th>Proximity State</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Immediate</td> <td>This represents a high level of confidence that the device is physically very close to the beacon.
Very likely being held directly up to the beacon.</td> </tr>
<tr> <td>Near</td> <td>With a clear line of sight from the device to the beacon, this would indicate a proximity of approximately 1-3 meters. As described in the section on accuracy, if there are obstructions between the device and the beacon which cause attenuation of the signal, this Near state may not be reported even though the device is in this range.</td> </tr>
<tr> <td>Far</td> <td>This state indicates that a beacon device can be detected but the confidence in the accuracy is too low to determine either Near or Immediate. An important consideration is that the Far state does not necessarily imply "not physically near" the beacon. When Far is indicated, rely on the accuracy property to determine the potential proximity to the beacon.</td> </tr>
<tr> <td>Unknown</td> <td>The proximity of the beacon cannot be determined. This may indicate that ranging has just begun, or that there are insufficient measurements to determine the state.</td> </tr>
</tbody>
</table>

**iBeacon User Experience Considerations**

While there is a correlation between the proximity states and accuracy, the mapping is not necessarily 1:1. Consider the example of our nationwide retail store where beacons have been deployed throughout the store. An app might use region monitoring to detect entry to the store and trigger a local notification welcoming the user to the store and inviting them to launch the app. To avoid annoying the user, the app may want to show this notification only once, the first time the user enters the store. Once inside the app, a custom in-store interface could be presented. If the major value contained in the iBeacon advertisement represents the specific store location, then the app knows immediately what store the user is in. With the app frontmost, the device in the user's hand, and the screen turned on, this is an ideal situation to begin ranging for all the beacons in the store.

Large home improvement stores tend to have many aisles and departments. With beacons positioned at the end of each aisle and within departments, an app can use the proximity states of the beacons it sees to display the user's approximate location on a map. While many situations in this example might lead to proximity states of Near or Immediate (for example, if the user held their iPhone up to a particular display where a beacon device was positioned), due to physical objects (typically metal shelving, large bulky display items, etc.) or other customers in the store, the app may only see proximities of Far. In this situation, an app might display an interface that highlights information about nearby beacons but not lock the user into a specific beacon. Instead, the app may want to let the user choose the item that is most relevant to them (either due to their interest or because they can readily identify which beacon actually is closest).

**Passbook Integration**

Passbook passes can take advantage of devices with iBeacon technology as well. By including the UUID of a beacon, a Passbook pass can be made relevant when the device is in the beacon's region. This works the same way as specifying latitude and longitude values in the locations array of your pass. You can specify the UUID and, optionally, the major and minor values as an array in the beacons key for your pass.
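Returning to the monitoring and ranging APIs described above, the sketch below ties them together using the sample retail-chain UUID from the earlier table. It is shown in Swift for readability (iOS 7-era code would have been Objective-C); the identifier string and class name are our own, and the authorization request is omitted for brevity.

```swift
// Illustrative sketch: monitor one UUID-based region for the whole chain,
// then range beacons on entry and react to the four proximity states.
import CoreLocation

final class StoreBeaconController: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // One region, UUID only: covers every store in the chain.
    private let chainRegion = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "D9B9EC1F-3925-43D0-80A9-1E39D4CEA95C")!,
        identifier: "com.example.retail.chain") // identifier is our own choice

    override init() {
        super.init()
        manager.delegate = self
        // Entry/exit notifications; authorization assumed already granted.
        manager.startMonitoring(for: chainRegion)
    }

    func locationManager(_ manager: CLLocationManager,
                         didEnterRegion region: CLRegion) {
        // Ranging is intended for foreground use while the app is frontmost.
        if let beaconRegion = region as? CLBeaconRegion {
            manager.startRangingBeacons(in: beaconRegion)
        }
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            switch beacon.proximity {
            case .immediate: print("Held up to beacon \(beacon.major)/\(beacon.minor)")
            case .near:      print("Within roughly 1-3 m of \(beacon.major)/\(beacon.minor)")
            case .far:       print("Detected with low confidence; accuracy \(beacon.accuracy) m")
            case .unknown:   print("Proximity not yet determined")
            @unknown default: break
            }
        }
    }
}
```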
**Deploying iBeacon**

As you prepare to deploy any implementation based on iBeacon technology, you need to carefully evaluate the real-world performance of your solution.

**Physical Limitations**

iBeacon devices use Bluetooth Low Energy to broadcast signals. BLE is based on the 2.4 GHz frequency and as such is subject to attenuation by various physical materials such as walls, doors or other physical structures. The 2.4 GHz frequency can also be affected by water, which means the human body will also affect the signals. This is important to be aware of because when the Bluetooth signal is attenuated, or lessened, this affects the signal strength received by an iOS device. As discussed above, when the received signal strength is lessened, an iOS device's ability to estimate the proximity to an iBeacon device is diminished.

**Calibrating iBeacon**

To provide the best user experience, it is critical to perform calibration in your deployment environment. As each beacon is installed you should perform a calibration step. Core Location uses an estimation model that requires calibration at a distance of 1 meter away from the beacon. To perform this calibration you should (a code sketch after the Questions and Answers section below illustrates the averaging step):

- Install the beacon and have it emitting a signal.
- Using an iPhone or iPod touch that is running iOS 7 or later and has a Bluetooth 4.0 radio, repeatedly sample the signal strength at a distance of 1 meter for a minimum of 10 seconds. When taking these signal strength readings you should hold the device in a portrait orientation with the top half of the device unobstructed.
- Move the device slowly back and forth on a 30 cm line, maintaining orientation and remaining equidistant from the measuring device (see diagram).
- For the duration of the calibration process, gather the values reported in the CLBeacon's `rssi` property.
- Average the collected `rssi` values to obtain the Measured Power value.
- Apply this Measured Power value to the beacon. Consult the details provided for the beacon used, as they may differ from manufacturer to manufacturer.

As discussed above, the physical surroundings can affect the signal strength. Since the surroundings will almost certainly vary between installation locations, it is important to repeat these steps for each beacon that is installed.

**Best Practices**

For an optimal user experience and successful deployment, be sure to consider the following best practices:

- Ranging APIs are not expected to be used in the background. For best results, ranging should be used when your app is frontmost and the user is interacting with your app.
- When using the ranging APIs and multiple devices with iBeacon technology are detected, Core Location will report the beacons in an order that is a best guess of their proximity. Due to the issues of signal attenuation discussed above, this order may not be correct. For example, if two beacons are detected by the iOS device and the signal of the physically farther beacon is considerably stronger, the farther beacon may be reported first. Apps should carefully inspect the proximity zone reported by each beacon, and if all beacons are in the Far zone, consider presenting to the user that two objects have been detected nearby and allowing the user to select the object that interests them.
- Leverage the optional text field within the location authorization alert to explain why the app is asking to use the user's location.
If your app has on-boarding screens, explain why the user benefits from allowing the app to know their location. You can specify this optional text using the NSLocationUsageDescription key in your app's Info.plist file.

- If you are purchasing a 3rd party device with iBeacon technology, it is important to understand how these devices can be configured and who is going to do the installations, maintenance, etc.
- When deploying beacon devices in the field, be sure to train any employees that might need to interact with them. For example, if you are deploying a retail solution, make sure your retail sales associates are trained on how the iOS app interacts with the devices, what the benefits are to your customers, the supported iOS device models, suggestions for troubleshooting, etc.
- If you plan to have signage in your location, you are encouraged to have the iBeacon trademark and logo license. Please visit <https://developer.apple.com/ibeacon> to apply for an iBeacon license.

**Common Questions and Answers**

Q: Can I use iBeacon technology to precisely show a user's location on a map when indoors?
A: Due to the issues around signal strength and the variabilities in deployment environments, iBeacon technology is not intended to be used for specific location identification. It should be capable of providing room-level accuracy, but there are many factors that need to be considered to build a successful deployment. The number of beacons, where they are positioned, expected use cases, and many more factors need to be examined to provide a good user experience.

Q: How can I prevent other apps from detecting my devices with iBeacon technology?
A: In order for an app to be able to respond to a device that transmits iBeacon advertisements, it must know the UUID contained within the advertisement. Because a beacon device is advertising using BLE, it is possible for the UUID to be "sniffed" off the air, and once that UUID is known, it could be used by other apps.

Q: Does using iBeacon technology put a user's private data at risk?
A: iBeacon advertisements only contain the UUID, major and minor values. The broadcasting is unidirectional; there is no bidirectional communication between a beacon device and an iOS device via iBeacon technology, so iBeacon technology cannot be used by a beacon to receive information from a user. What an app does in response to a notification triggered by an iBeacon advertisement is a separate matter, but this is no different from using existing geofencing technologies.

Q: Can I use an iOS device to issue iBeacon advertisements?
A: Yes. Any app can use the Core Bluetooth APIs to send iBeacon advertisements.

Q: Can I use an iOS device to issue iBeacon advertisements while my app is in the background?
A: No. For an iOS device to issue iBeacon advertisements, the app requesting this functionality must be frontmost, with the screen turned on and the device unlocked.

Q: If my app starts monitoring beacon regions, how will that affect battery performance?
A: iOS devices that support iBeacon can efficiently monitor iBeacon regions in the background with marginal power drain. Monitoring iBeacon regions is significantly less power demanding than running normal location updates constantly in the background.
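As promised in the Calibrating iBeacon section, here is a minimal sketch of the averaging step. It is illustrative only; the class name and the plumbing for collecting samples over at least 10 seconds are our own assumptions.

```swift
// Illustrative sketch of the calibration averaging step: sample rssi for at
// least 10 seconds at a distance of 1 meter, then average the readings to
// obtain the Measured Power value to apply to the beacon.
import CoreLocation

final class Calibrator {
    private var samples: [Int] = []

    // Call this from locationManager(_:didRangeBeacons:in:) while holding the
    // device 1 m from the beacon, moving it slowly along a 30 cm line.
    func record(_ beacon: CLBeacon) {
        // An rssi of 0 means the value could not be determined; skip those.
        if beacon.rssi != 0 { samples.append(beacon.rssi) }
    }

    // The average of the collected rssi values, i.e. the Measured Power value.
    // How it is written to the beacon is manufacturer-specific.
    var measuredPower: Int? {
        guard !samples.isEmpty else { return nil }
        return samples.reduce(0, +) / samples.count
    }
}
```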
Document Revision History <table> <thead> <tr> <th>Date</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td>2014-06-02</td> <td>Initial version</td> </tr> </tbody> </table> Nothing herein is intended to modify the iOS Developer Program License Agreement, Mac Developer Program License Agreement, the iOS Developer Program Enterprise License Agreement, the iOS Developer Program University Agreement, the iOS Developer Program University Student License Agreement ("Agreement") and/or the App Store Review Guidelines, as they may be modified by Apple from time to time. In the event of any conflict or inconsistency between the Agreement or Guidelines and this document, the Agreement or Guidelines shall govern. Apple may at any time, and from time to time, with or without prior notice to You modify this document as well as any features, functionality or services described herein. You understand that any such modifications may require You to change or update Your Applications at Your own cost. Apple shall not be liable for any losses, damages or costs of any kind incurred by You or any other party arising out of or related to any modification or discontinuation of this document or any of the features, functionality or services described here.
{"Source-Url": "https://developer.apple.com/ibeacon/Getting-Started-with-iBeacon.pdf", "len_cl100k_base": 5285, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 23143, "total-output-tokens": 5574, "length": "2e12", "weborganizer": {"__label__adult": 0.0004229545593261719, "__label__art_design": 0.000514984130859375, "__label__crime_law": 0.00044417381286621094, "__label__education_jobs": 0.0002294778823852539, "__label__entertainment": 6.455183029174805e-05, "__label__fashion_beauty": 0.0002899169921875, "__label__finance_business": 0.0005130767822265625, "__label__food_dining": 0.000286102294921875, "__label__games": 0.0008697509765625, "__label__hardware": 0.017242431640625, "__label__health": 0.0002574920654296875, "__label__history": 0.0002522468566894531, "__label__home_hobbies": 0.0001493692398071289, "__label__industrial": 0.0005259513854980469, "__label__literature": 0.0001456737518310547, "__label__politics": 0.00015926361083984375, "__label__religion": 0.0003039836883544922, "__label__science_tech": 0.0276031494140625, "__label__social_life": 5.245208740234375e-05, "__label__software": 0.0377197265625, "__label__software_dev": 0.91064453125, "__label__sports_fitness": 0.0003504753112792969, "__label__transportation": 0.0008015632629394531, "__label__travel": 0.0002007484436035156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25116, 0.01503]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25116, 0.25768]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25116, 0.93248]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 2928, false], [2928, 5011, null], [5011, 7917, null], [7917, 9697, null], [9697, 12592, null], [12592, 15868, null], [15868, 18685, null], [18685, 20354, null], [20354, 23314, null], [23314, 25116, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 2928, true], [2928, 5011, null], [5011, 7917, null], [7917, 9697, null], [9697, 12592, null], [12592, 15868, null], [15868, 18685, null], [18685, 20354, null], [20354, 23314, null], [23314, 25116, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25116, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25116, null]], "pdf_page_numbers": [[0, 0, 1], [0, 2928, 2], [2928, 5011, 3], [5011, 7917, 4], [7917, 9697, 5], [9697, 12592, 6], [12592, 15868, 7], [15868, 18685, 8], [18685, 20354, 9], [20354, 23314, 10], [23314, 25116, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25116, 0.16667]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
65e67da21633293e2857350a9495a42be8bb39ba
The Next Generation of Ground Operations Command and Control; Scripting in C# and Visual Basic

George Ritter, Computer Science Corporation (CSC), Huntsville, Alabama, 35806
and Ramon Pedoto, COLSA Corporation, Huntsville, Alabama 35806

Scripting languages have become a common method for implementing command and control solutions in space ground operations. The Systems Test and Operations Language (STOL), the Huntsville Operations Support Center (HOSC) Scripting Language Processor (SLP), and the Spacecraft Control Language (SCL) offer script-commands that wrap tedious operations tasks into single calls. Since script-commands are interpreted, they also offer a certain amount of hands-on control that is highly valued in space ground operations. Although compiled programs seem to be unsuited for interactive user control and are more complex to develop, Marshall Space Flight Center (MSFC) has developed a product called Enhanced and Redesigned Scripting (ERS) that makes use of the graphical and logical richness of a programming language while offering the hands-on ease of control of a scripting language. ERS is currently used by the International Space Station (ISS) Payload Operations Integration Center (POIC) Cadre team members. ERS integrates spacecraft command mnemonics, telemetry measurements, and command and telemetry control procedures into a standard programming language, while making use of Microsoft's Visual Studio for developing Visual Basic (VB) or C# ground operations procedures. ERS also allows for script-style user control during procedure execution using a robust graphical user input and output feature. The availability of VB and C# programmers, and the richness of the languages and their development environment, has allowed ERS to lower our "script" development time and maintenance costs at the Marshall POIC.

Nomenclature

ASP = Active Server Page
CLI = Command Line Interface
CSC = Computer Science Corporation
C2 = Command and Control
C3ISR = Command, Control, Communications, Intelligence, Surveillance and Reconnaissance
C# = "C sharp", a programming language
EHS = Enhanced HOSC System
EPC = Enhanced Personal Computer (MSFC HOSC telemetry and command tool-set)
ERS = Enhanced and Redesigned Scripting Language
HOSC = Huntsville Operations Support Center
HTML = Hypertext Markup Language
ISS = International Space Station
MSFC = Marshall Space Flight Center
PHP = Hypertext Preprocessor
POIC = Payload Operations Integration Center
SCL = Systems Control Language
SLP = Scripting Language Processor
STOL = Systems Test and Operations Language
TSTOL = The System Test and Operations Language
VB = Visual Basic
VS = Visual Studio

1 Software Development Team Lead, Software Engineering, CSC, 310 Bridge St., Huntsville, Al. 35806
2 Computer Scientist, Software Engineering, COLSA Corporation, 6728 Odyssey Drive, Huntsville, Al. 35806

I. Introduction

A script serves as a pre-planned set of instructions to execute or perform a task, such as the dialog in a play or a small job in a computer.
While traditional programming languages evolved primarily for the purpose of solving complex, computationally intensive problems, scripting languages or scripts have become mainstream tools for solving more of the "house-keeping" computer problems, like managing file systems. Where the focus of compiled programming languages is on performance, script commands are geared towards ease of use. In the typical script command, task performance is less of an issue, permitting the use of an interpreted environment.

Control of space systems from ground operations sites requires a certain amount of repetitive or scripted actions to control remote systems. The ease of use of script commands makes creation and modification of ground-initiated flight procedures simple and less prone to error. A number of flight operations scripting languages have evolved from today's commonly used computer scripting languages. These scripting languages integrate with the native command and telemetry systems through unique script commands.

The Marshall Space Flight Center's (MSFC) Huntsville Operations Support Center (HOSC) has developed a tool called the Enhanced and Redesigned Scripting language (ERS). ERS combines the ease of use and low risk potential of the typical interpreted scripting language with the power and richness of a full programming language. ERS has streamlined development of scripts and enhanced remote control of on-board systems at the Huntsville Payload Operations Integration Center (POIC).

II. The Value of Scripting in Flight Operations

From the beginning of scripting languages like IBM Job Control Language (JCL) to the current object-oriented languages like Ruby and Perl, system administrators and software developers have sought to "avoid the compiler" in an effort to simplify common and repetitive tasks that are not performance oriented. Space ground systems have adapted and extended scripting concepts to the command and telemetry processing domains. What makes the interpreted languages so useful in the space operations world? Is there a more advanced solution that combines the graphical richness of a compiled language with the ease and flexibility of a script development environment?

A. Mini-History of Scripting Languages

In the early 1960s IBM introduced Job Control Language (JCL) on their OS/360, where files could be copied from one location to another using only 9 lines of instructions! JCL was followed by Data General's Command Line Interface (CLI) and later by the Unix Bourne Shell. These scripting languages store a series of commands in a file. Data General called them "macros"; Unix called them "shell scripts." In all cases, the scripts ran as interpreted statements where no time was spent compiling. Local variables and flow control were slowly added. As complexity grew, these languages continued to be interpreted, most likely because compute power was also increasing.

A small list of the more popular scripting languages of today, often called dynamic programming languages, includes Perl, Python, Ruby, Hypertext Preprocessor (PHP), Active Server Page (ASP), and JavaScript. Perl is known for its text processing capabilities; Python for readability and object-oriented constructs; Ruby for being object oriented; and PHP, ASP, and JavaScript for their ability to be used in Web applications inside Hypertext Markup Language (HTML) files. Today's scripting languages have progressed beyond simple file manipulation to common (and even complex) programming tasks.
They satisfy the need for fast-to-develop and easy-to-maintain programming solutions in many of today's computer problem domains.

B. Some Space Ground Operations Scripting Languages

Companies such as SRA International and some NASA Centers have found scripting languages very powerful for space operations applications. Systems Control Language (SCL), developed by SRA International, "is a full-featured scripting language with which you can easily build, test and operate diverse control systems across mission critical command and control (C2) and Command, Control, Communications, Intelligence, Surveillance and Reconnaissance (C3ISR) domains. SCL greatly reduces workload and automates routine tasks through procedural, time-sequenced and event based responses to real-time data." A closer relationship with SRA would be needed to show more details of SCL, but their listed features include a "full-featured scripting language." SCL also provides interfaces that let you adapt proprietary C2 and telemetry acquisition systems to the SCL engine. SCL was chosen by Kennedy Space Center as the script engine for the Constellation Launch Control System.

The Systems Test and Operations Language (TSTOL) is an interpreted language developed at Goddard Space Flight Center and "is derived from generations of the Systems Test and Operations Language (STOL) used in existing NASA satellite control centers. TSTOL is a procedural command language consisting of a core set of generic commands, supplemented by mission-specific extensions." TSTOL includes typical programming capabilities such as various data types; arithmetic, logical, and relational operators; global and local variables; and looping constructs. TSTOL also has built-in "procedures," or commands specific to the Goddard mission systems. TSTOL allows for the creation of custom procedures or commands so that the language can be adapted to new programs and interfaces.

The Scripting Language Processor (SLP) at the MSFC POIC, also currently used by the Chandra X-ray Observatory Control Center, is based on the TSTOL design and provides the operations teams with the ability to develop scripts and to control script execution. SLP scripts consist of text files made up of statements, called directives, which the SLP interpreter can recognize and execute. SLP supports many of the same arithmetic, logical, and looping functions as TSTOL. The SLP directives also include POIC (and Chandra) spacecraft command and telemetry specific actions for remote control of the ISS payload (and Chandra spacecraft). Examples of the SLP directives are included in Table 1.

Table 1. Sample SLP Directives

| Directive |
|---|
| ask prompt [variable] |
| sample next |
| update command mnemonic from file binary image filename |
| uplink command mnemonic [verify car/fsv/crr] |

C. Why Scripts for Ground Operations

Compared to compiled languages, scripts are easier to write due to the simple and limited language syntax. When the language is simple, training time is decreased and the need for language experts is lessened. Scripts are also faster to develop because each statement does lots of work in a single call, e.g., "uplink command," and because the script developer gets immediate feedback from the interpreter.
Scripting languages also provide easy-to-use constructs for user-controlled program flow. In many operations scenarios, it is necessary to have flight operations personnel monitor the script or operations procedure progress and respond to queries. With SCL, TSTOL, and SLP, the script user has many control options.

Scripting languages are not historically known for being rich in graphic capabilities. At the MSFC POIC, the operations cadre personnel use a combination of scripts (SLP and now ERS), a data display tool, and custom programs or "comps" (compiled languages) to provide the best combination of all the features they need to automate their flight operations tasks. We wondered if it was possible to combine the features of a scripting language like TSTOL and SLP with the graphical richness of a simple programming language like Visual Basic. Some of the scripting language syntax for standard program operations had become more complex and more work to maintain, just to provide features that basic programming languages already offer. Visual Basic and C# programmers are becoming more readily available. The idea was to let programming languages do what they do best, and add in classes and methods (script commands or directives) that do lots of work, e.g., "uplink command." We also had to develop a way to provide run-time "script" control that made execution feel and act like a script. Thus was born the Enhanced and Redesigned Scripting language, or ERS.

III. ERS

The Enhanced and Redesigned Scripting (ERS) is one of a suite of applications of the Enhanced Personal Computer (EPC) tool at the MSFC POIC. EPC is a Windows-based tool-set that allows local and remote POIC users to display and analyze telemetry, and to uplink commands to the ISS payload systems for control purposes. ERS is the latest "scripting" product developed at Marshall to support procedural-style remote operations control.

A. Script Creation and Script Help

ERS uses the Microsoft Visual Studio programming environment for development and also offers custom controls that enable an ERS developer to extend the Visual Basic and Visual C# languages to interface with the POIC telemetry and command system. Although the "programs" are compiled, ERS offers execution control features that make an ERS program feel and operate like a script. Visual Basic and Visual C# provide a rich set of graphical development tools that make for an extremely user-friendly end product. In ERS, the richness of a full-featured programming language is combined with the flexibility and user control of a script. ERS consists of ERS Operations, Microsoft Visual Studio, and a number of custom wizards developed at Marshall to provide point-and-click integration with the POIC ground system's core libraries.

Figure 1. ERS Operation Main Window

The ERS Operations application, Fig. 1, is the main entry point into the ERS system. Here the user can create, edit, delete, and run ERS scripting projects. Each ERS "script" is actually a .NET project that must first be compiled before it is run. Part of the compiling phase includes steps that validate the correctness of an ERS script. From ERS Operations, scripts can be validated and then run in various data modes, including real-time and playback. ERS Operations also displays any currently running ERS scripts. When creating a new ERS project, the ERS developer selects "add new script" from the File menu option in ERS Operations, which will launch Visual Studio's New Project Wizard, Fig. 2.
In the New Project Wizard, the user selects a desired programming language, the ERS Script Project template, and a project name. Clicking "OK" brings up the ERS New Project Wizard, Fig. 3, where the user must select an active EPC session to run against. The EPC session selection connects the ERS project to a specific set of telemetry and command meta-data within the POIC ground system. Clicking "OK" causes this wizard to create a new .NET project that contains a code file plus an ERS-specific validation file (Fig. 4: "DemoScript.Validation.ers"). The code file is where the user-defined scripting logic is inserted. The validation file is used to map EPC telemetry and commanding objects into the project. These objects then become accessible in the user-defined code.

In programming terms, an ERS script is just a custom .NET class that extends a base class called "ScriptEngine". This base class is defined in the ERS library and contains many useful EPC-related methods. These methods appear in Visual Studio's IntelliSense, Fig. 4, whenever the user types code. Included in the IntelliSense are descriptions for each method. A list and description of some of the ERS methods available for use within ERS script classes are found in Table 2. Notice the similarities between these methods and the directives in Table 1 for the SLP.

Figure 4. ERS Development Showing IntelliSense

Table 2. A Sample of ERS Methods

| Method | Description |
|---|---|
| Ask(variable) | Prompts the operator for input and waits until a resume directive is entered. The value entered is stored in variable. If variable is not specified, the process is halted until the user enters a resume directive. |
| Ask(Of (T))(variable) | Prompts the operator for input and waits until a resume directive is entered. The value entered is stored in variable. If variable is not specified, the process is halted until the user enters a resume directive. |
| AskPulldown(String, array<String>[][]) | Creates a dialog box that prompts the operator to select a single answer to a prompt by clicking a selection from a list of text items in a pulldown list. |
| AskPushButton(String, array<String>[][]) | Creates a dialog box that prompts the operator to select a single answer to a prompt by pressing a pushbutton with the specified text. |
| LoggingOptions | Contains various options for controlling logging behavior. These options must be set prior to calling any logging methods. Upon calling the first logging method, these options become read-only. |
| SampleLatest() | Updates all MSIDs with their latest packet values. |
| SampleMSID(Of (T))(String, Processing, Boolean, Boolean) | Initializes an MSID object from which values and statuses will be sampled. |
| SampleNext() | Updates all MSIDs with their next packet values. |
| StartDisplay(String) | Starts the identified display within an instance of the Display Operations application. |
| UpdateCommandForm(String) | Shows the update form for the specified command. |
| UplinkCommand(String, Verify) | Initiates the transmission, or uplink, to the spacecraft of the command referenced by a unique mnemonic as defined in the Operational Command Database. |
| Wait(Int32) | Suspends execution until the time has elapsed and then resumes execution with the next statement in the script. |
| Write(Object, array<Object>[][]) | Constructs a string of text and displays it on the operator's screen. Expressions may consist of user-defined variables, intrinsic functions, and quoted strings. All expressions will be formatted in ASCII. The message will NOT be sent to the Message Handler. |
| WriteFormat(String, array<Object>[][]) | Logs a string using the composite formatting feature of the .NET Framework. |

More detailed help is also available for each method by pressing the "F1" key while in the Visual Studio IDE. This brings up the full set of ERS documentation, Fig. 5, which contains all the methods and objects that can be used in an ERS project. ERS Help is also searchable by keywords.

Figure 5. ERS Detailed Help

B. Measurements and Commands as Script Variables

To properly use many of the methods previously presented, individual telemetry mnemonics, or Measurement Stimulus Identifiers (MSIDs), and command mnemonics must be assigned to variables in the program. The validation file (Fig. 4: "DemoScript.Validation.ers") that was generated at project creation time is used to map these objects into an ERS project. Double-clicking on the validation file will display the validation dialog shown in Fig. 6. Here the user can define the telemetry measurements and commands to include in the code project.

Figure 6. Validation Dialog

The validation dialog (Fig. 6) maps an EPC object (i.e., a measurement, command, etc.) to a variable name. The variable can then be addressed by the user in the code file. The variable itself exposes properties and methods that make sense for that particular EPC object type. For example, a measurement variable will have properties that return a sampling status or value. In the code segment below, the "r" measurement variable is being inspected for new data.

```vb
If r.IsDataNew Then
    Write("New data is available!")
End If
```

A search dialog is used to select the telemetry measurement for variable assignment (Fig. 7). Measurements can be searched on several criteria, including whether or not the measurements have pre-defined limits or expected states.

Figure 7. Select MSIDs

From the measurement search dialog, the details of each displayed measurement can be examined, Fig. 8.

Figure 8. Measurement Detail

Commands can also be added to an ERS project via the command tab back on the validation dialog, Fig. 6. Like MSIDs, the commands available are searchable by name (Fig. 9). From the Select Commands dialog, the details of each command can be examined in a window similar to Fig. 8.

Figure 9. Select Command Dialog
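Before turning to script control, it may help to see these pieces together. The following is a minimal, illustrative sketch of what an ERS "script" class might look like in C#, mirroring the behaviour of the example shown later in Fig. 10 (prompt for a number, then sample a measurement that many times). The method names are drawn from Table 2, but their exact C# signatures, the Verify enumeration values, and the Run entry-point convention are assumptions made for this sketch, not taken from the ERS documentation.

```csharp
// Hypothetical sketch only: an ERS "script" as a .NET class extending the
// ERS base class ScriptEngine (Section III.A). Method names come from
// Table 2; signatures and the entry point are assumed for illustration.
public class DemoScript : ScriptEngine
{
    public void Run()
    {
        // Script-style user control: halt and prompt the operator (Ask, Table 2).
        int count = 0;
        Ask(ref count);

        for (int i = 0; i < count; i++)
        {
            // Update all mapped MSIDs with their next packet values (Table 2).
            SampleNext();
            // Echo progress to the operator's logging window (Write, Table 2).
            Write("Took sample {0} of {1}", i + 1, count);
        }

        // Uplink a command by mnemonic, with verification (Table 2).
        UplinkCommand("SET_TEMP_BANDWIDTH", Verify.Car);
    }
}
```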
C. Script Control

ERS scripts can be controlled at creation time in debug mode inside the Visual Studio debugger, where the script developer is provided with a rich set of the latest source-level debug tools from Microsoft. ERS provides script control for a verified ERS script at run-time through ERS-provided input/output methods. These include methods to write out messages and to prompt a user for input. An ERS script that performs input and output is assigned its own logging window, where both the input prompts and output messages are displayed. Figure 10 is an example of an ERS script prompting the user for a number, which it then uses to output a measurement sample that number of times. Note that both the input and output of an ERS script are tagged with a time stamp. This is because, by default, all input/output generated by a script is also logged to a standard file associated with that script. The log files can be accessed from the logging window itself or via the ERS Operations application for post-flight analysis.

Figure 10. ERS Input and Output Windows

IV. Conclusion

Scripts for flight operations control have proven to be very useful. They offer a relatively simple syntax that does not require specialized training, and an environment that is well integrated with the host ground system. Scripts also offer flow-control functions that make them ideal for remote control, including execution confirmation. However, other than the very newest Web-oriented scripting languages, most are graphically challenged, and maintaining the typical program constructs that are not specific to ground systems functions becomes time consuming and even unnecessary.

ERS makes use of Microsoft's Visual Basic (and C#) language, which is also simple to understand and easy to learn. ERS adds "wizards" to allow program access to the ground system's telemetry and command objects. Using a standard programming language removes the need to maintain basic programming syntax in a script interpreter and frees us to concentrate on new ways to interface to our base system. ERS also provides a powerful input/output flow-control feature that allows the user to monitor progress and to confirm execution of critical tasks if necessary, while also recording all actions in textual log files.

ERS programs are not scripts. But ERS programs offer the same features as many scripting languages, such as the HOSC's SLP, while also offering the ability to include rich graphical components. ERS is an Enhanced and Redesigned Scripting environment that is lowering our scripting development and maintenance costs while increasing our ground systems automation capability.

Appendix

ERS Development Architecture

The development environment for ERS consists of a computer with both EPC (version 6.5 or higher) and Visual Studio 2008 (or later) installed. As part of the EPC installation, Visual Studio is extended to include an ERS project wizard and a new editor for an ERS-specific file type. Once installed, creating ERS "scripts" consists of three distinct stages, discussed below.

1. Development Stage

A new ERS project is created within Visual Studio via a custom ERS project wizard.
This creates a .NET project consisting of code files, written in either C# or VB.NET, plus an ERS-specific file used to map EPC's telemetry and commanding objects into the project. The code includes methods defined in the ERS library for Enhanced HOSC System (EHS)-related directives, as well as any other user code allowed by the .NET languages. All the Visual Studio IDE features, such as IntelliSense, help, and debugging, are now part of the scripting development experience.

In programming terms, an ERS "script" is just a .NET class that extends a base ERS class called "ScriptEngine". This base ERS class contains the EHS-related methods. In addition to using the EHS-related methods, the user can work with EPC telemetry measurements and commands. These are represented as .NET objects with various methods and properties that the user can code against. Telemetry and commanding objects are introduced into an ERS project by way of a custom editor provided by ERS. The custom editor is associated with a "validation" file that is part of the ERS project. From this editor, the user can select various telemetry measurements and commands. Once selected, they become mapped to a variable and, thereafter, are accessible by code.

2. Validation Stage

Building a scripting .NET project is similar to building a regular .NET project, except the script project has an additional custom build step that handles validation. Building a project and validating a project go hand in hand. Validation involves converting the validation file of an ERS project into a proper set of .NET variables by generating "behind the scenes" code. This generated code is then compiled together with the user code. The final output of this build is a .NET assembly (.dll). This assembly contains the ERS validation information, the "script" code, and any other code and/or resources the user included in the project. A script project can be validated in one of two ways. The first way, as mentioned above, is by building the ERS project inside the Visual Studio IDE. Alternatively, an EPC application called Bulk Validation can be used to build one or more ERS projects.

3. Run Stage

A .NET assembly containing ERS scripting code can be run via the EPC Scratchpad Line, a scratchpad line in Display Operations (i.e., a button), or via Script Operations. Additionally, because it is in .dll form, ERS scripting code can be invoked as a method by other libraries or executables. Alternatively, an ERS project can be converted into an EXE project type and run as a standalone application. Regardless of how it is started, before any ERS-related methods are executed, the ERS code must initialize itself with an EPC session. This initialization process verifies that the ERS code was validated against the same database version as the EPC session. Referenced telemetry measurements and commands are further checked to make sure they exist in the specified EPC session.

**Script Syntax Compared**

Sample Unix shell script that accepts one argument:

```bash
if [ $# = 0 ]; then
    echo Error! Parameter required.
    exit 1
fi
if [ "$1" = "Dog" ]; then
    echo Bow Wow!
elif [ "$1" = "Cat" ]; then
    echo Meow!
else
    echo Error!: Invalid Parameter
fi
```

Scripts that update a command's bandwidth field and then uplink the command:

**SLP Script**

```
begin_script script_name
declarations
    system_section
    global_section
    local_section
end_declarations
update command SET_TEMP_BANDWIDTH
    fields BANDWIDTH = 10
endupdate
uplink command SET_TEMP_BANDWIDTH
end_script
```

**ERS "Script"**

```vb
With SetTempBandwidth.Fields
    .Bandwidth = 10
End With
SetTempBandwidth.Update()
SetTempBandwidth.Uplink()
```

Or

```vb
SetTempBandwidth.Update(10)
SetTempBandwidth.Uplink()
```
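Since the title promises both Visual Basic and C#, a C# rendering of the same ERS "script" may be a useful comparison. This is a hedged sketch: the SetTempBandwidth command variable and its Fields/Update/Uplink members come from the Visual Basic sample above, but the exact C# member syntax is an assumption.

```csharp
// Hedged C# equivalent of the Visual Basic ERS "script" above.
// SetTempBandwidth is the command variable mapped in the project's
// validation file (Section III.B); the member syntax is assumed.
SetTempBandwidth.Fields.Bandwidth = 10;
SetTempBandwidth.Update();
SetTempBandwidth.Uplink();
```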
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20100020941.pdf", "len_cl100k_base": 5833, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 29580, "total-output-tokens": 6421, "length": "2e12", "weborganizer": {"__label__adult": 0.0003542900085449219, "__label__art_design": 0.0002084970474243164, "__label__crime_law": 0.00026416778564453125, "__label__education_jobs": 0.0005741119384765625, "__label__entertainment": 7.933378219604492e-05, "__label__fashion_beauty": 0.0001653432846069336, "__label__finance_business": 0.0002390146255493164, "__label__food_dining": 0.0003108978271484375, "__label__games": 0.0007109642028808594, "__label__hardware": 0.00342559814453125, "__label__health": 0.00033855438232421875, "__label__history": 0.00029730796813964844, "__label__home_hobbies": 0.0001080632209777832, "__label__industrial": 0.0009069442749023438, "__label__literature": 0.0001621246337890625, "__label__politics": 0.00022268295288085935, "__label__religion": 0.0003843307495117187, "__label__science_tech": 0.05401611328125, "__label__social_life": 6.824731826782227e-05, "__label__software": 0.0148468017578125, "__label__software_dev": 0.92041015625, "__label__sports_fitness": 0.00035953521728515625, "__label__transportation": 0.0012035369873046875, "__label__travel": 0.00021016597747802737}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28147, 0.01036]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28147, 0.83763]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28147, 0.89744]], "google_gemma-3-12b-it_contains_pii": [[0, 3327, false], [3327, 7789, null], [7789, 12770, null], [12770, 14948, null], [14948, 16219, null], [16219, 18092, null], [18092, 19545, null], [19545, 20395, null], [20395, 20914, null], [20914, 23604, null], [23604, 26021, null], [26021, 28147, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3327, true], [3327, 7789, null], [7789, 12770, null], [12770, 14948, null], [14948, 16219, null], [16219, 18092, null], [18092, 19545, null], [19545, 20395, null], [20395, 20914, null], [20914, 23604, null], [23604, 26021, null], [26021, 28147, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28147, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28147, null]], "pdf_page_numbers": [[0, 3327, 1], [3327, 7789, 2], [7789, 12770, 3], [12770, 14948, 4], [14948, 16219, 5], [16219, 18092, 6], [18092, 19545, 7], [19545, 20395, 8], [20395, 20914, 9], [20914, 23604, 10], [23604, 26021, 11], [26021, 28147, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28147, 0.1092]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
55f6447466b722687942bb025e26eca5aedd3fe9
Raising Time Awareness in Model-Driven Engineering

Amine Benelallam, Thomas Hartmann, Ludovic Mouline, Francois Fouquet, Johann Bourcier, Olivier Barais, Yves Le Traon

ACM/IEEE 20th International Conference on Model Driven Engineering Languages and Systems, Sep 2017, Austin, Texas, United States. HAL Id: hal-01580554, https://hal.archives-ouvertes.fr/hal-01580554

Abstract—The conviction that big data analytics is a key to the success of modern businesses is growing deeper, and mobilising companies into adopting it becomes increasingly important. Big data integration projects enable companies to capture their relevant data, to store it efficiently, to turn it into domain knowledge, and finally to monetize it. In this context, historical data, also called temporal data, is becoming increasingly available and delivers means to analyse the history of applications, discover temporal patterns, and predict future trends. Despite the fact that most data that today's applications deal with is inherently temporal, current approaches, methodologies, and environments for developing these applications do not provide sufficient support for handling time. We envision that Model-Driven Engineering (MDE) would be an appropriate ecosystem for a seamless and orthogonal integration of time into domain modelling and processing. In this paper, we investigate the state of the art in MDE techniques and tools in order to identify the missing bricks for raising time awareness in MDE, and we outline research directions in this emerging domain.

Index Terms—Model-Driven Engineering, Analytics, Big Data, Temporal Data, Internet of Things

1. Introduction

"History is a Greek word which means, literally, just investigation" —ARNOLD TOYNBEE

1.1. Time-Aware Data-Driven Applications

Thanks to the strong emergence of modern data analytics platforms, the data surrounding organisations (enterprises, scientific researchers) is being mobilised to provide reliable insights and a clear and consistent picture within and around their ecosystems. As a matter of fact, according to Forbes,[1] a study conducted by Accenture and General Electric shows that 84% of enterprises see the combination of modern analytics and IoT as essential for competitive growth. This undeniable gain is rushing organisations into adopting this new trend (a.k.a. big data integration).

[1] https://tinyurl.com/hg7s2x9

The success of any big data integration project is tied to the ability to capture relevant data from different sources (sensors, customer behaviour, etc.), to store it efficiently in a reliable and consistent way, and finally to turn it into domain knowledge by uncovering data patterns and insights. The quality and relevance of this data play a major role in making accurate data analytics happen.
In particular, capturing the evolution of data over time contributes to high-quality data and delivers means to analyse the history of applications, discover temporal patterns, and predict future trends. Indeed, temporal data is one of the most common forms of data in data-driven applications. In runtime applications, data is unlikely to be stationary; rather, it evolves over time. Nonetheless, existing approaches, methodologies, and environments for developing data-driven applications currently lack native support for time. For instance, existing conceptual modelling languages are not yet capable of capturing the essential semantics of time-evolving information at design time, nor of describing evolution constraints over it. Moreover, existing graph processing libraries and query languages are not well adapted for writing temporal queries and algorithms (e.g. temporal pattern matching). We envision that MDE, thanks to its level of abstraction, would be an appropriate ecosystem for a seamless and orthogonal integration of time during the whole development lifecycle of data-driven applications. Moreover, we expect that by raising the awareness of temporal aspects in application development with MDE, we can significantly enhance the quality of data by giving it a well-defined structure and constraining its undesirable evolution over time.

1.2. Time Awareness in MDE: What Does it Take?

Promoting temporal awareness in application development with MDE is not a new subject. Rivera et al. [1] have identified time as one of the three challenges that should be addressed by the MDE community. Many approaches have been proposed to extend existing practices in MDE with temporal aspects [2], [3], [4]. To name a few, Bousse et al. [2] use trace management facilities to enable omniscient debugging of executable DSMLs (xDSMLs). E-Motions [5] extends in-place graph transformation rules with a quantitative model of time to allow the analysis and simulation of DSLs. Kanso et al. [3] extended OCL with support for temporal constraint specification for controlling system behaviour over time. Nonetheless, as of today, most of these approaches focus either on the behavioural aspect or on the structural evolution of the system over time. We notice that the facets related to temporal data evolution are understudied, namely its modelling, persistence, and processing. Moreover, most approaches represent time-evolving data as a simple sequence of snapshots of a model [6], [7], e.g. one snapshot per change. Such a discretization not only leads to lots of duplicated data (unchanged elements are duplicated in the snapshots of a model) but, more importantly, leaves the state of a model between two snapshots undefined. This results in losing the semantics of continuously evolving data. How the continuous semantics of time can be efficiently preserved is a challenging research direction [8], [9]. As such, new approaches should be proposed so that MDE may deliver its productivity, quality, and maintainability promises to data-driven application development. In this perspective, we pinpoint the following research directions, on which we will elaborate as we proceed: (i) a modelling language and approach to design time-aware data-driven applications, (ii) an adequate persistence framework for persisting and indexing historical data, and (iii) an expressive query language for processing historical data.
1.3. Outline of the Paper

In the remainder of the paper, we first introduce a motivational example in Section 2, followed by some preliminary concepts in Section 3. Afterwards, we investigate what progress has been made and what remains to be done in Sections 4-6: we give an overview of the state of the art and review the main challenges and limitations where applicable. In Section 7, we describe how we conceive a time-aware modelling language, and we discuss the integration of temporal aspects as a first-class entity in modelling environments. Finally, Section 8 closes the paper and outlines our future work.

2. The Smart Grid Use Case

To exemplify the need for raising temporal awareness, we use throughout this paper a smart grid case study [9]. Smart grids emerge as the new generation of existing electricity grids to keep pace with the rising demand for energy. They are expected to provide utility companies with full and remote control by leveraging modern information and communication technologies. To turn this vision into reality, smart grids accommodate a variety of devices, which organise the grid into self-adaptive and dynamic micro-grids. The important devices for the context of this paper are:

**Smart meters**, which are used to continuously measure the consumption of customers and to remotely report it to utility companies through data concentrators, for monitoring and billing purposes.

**Repeaters** are regular smart meters acting as a bridge for other smart meters. This is useful if, for example, a smart meter cannot directly reach a data concentrator due to disturbances and noise.

**Data concentrators** control, collect, and store data from the smart meters connected to them.

**The central system** is the main station where all data is aggregated, stored, and analysed.

The topology of a smart grid network is organised in dynamic subtrees, where each concentrator acts as the root element of a tree. Depending on the signal strength, which can be influenced, for example, by the distance from a smart meter to a data concentrator, weather conditions, noise, and other disturbances, smart meters dynamically connect to the data concentrator with the best connection characteristics. In addition, at any time, new smart meters and concentrators can be added to the network or removed from it. These dynamic changes can be considered the evolution of the network topology over time.

Data about energy consumption is sent on a regular basis from smart meters to their data concentrators. The intervals at which consumption data is collected may vary, for example, from 5 or 10 to 60 minutes; a common interval is 15 minutes [10]. When extracting knowledge from this data, the temporal dimension must be taken into consideration. An example is scheduling the charging cycles of electric cars. In order to decide whether there could be an overload risk if too many cars charge on the same cable at the same time, the usual load on this cable for this time and date (weekend/working day, winter/summer) needs to be considered. These tasks require efficient ways to structure, represent, query, and store temporal data. The temporal dimension of data often results in inefficient data querying and iteration operations to find and aggregate the requested data. Therefore, it is of utmost importance to be able to efficiently navigate and query temporal data. Taking again our smart grid example, for each customer, several consumption values per hour are collected.
In order to predict the electric load in a certain area (i.e. on one cable), all customers currently connected to this cable need to be queried, and then the history of their consumption values needs to be analysed. This illustrates the need for expressive temporal queries. In fact, this holds for many application domains. Most data is inherently temporal: from our smart grid example, through self-driving cars, financial applications, and medical systems, to insurance applications.

3. Background

Hereafter, we describe the essential background: temporal relational databases, time granularity, and temporal graphs.

3.1. Temporal Relational Databases

Temporal databases have been under active investigation for the last three decades. Some of these studies have focused on how best to address the persistence of temporal data with regard to the nature and intent of the application under design. Most of these approaches perceive data evolution over time as a sequence of snapshots, each representing a single state of the real world. In relational temporal databases, temporal aspects commonly include two attributes: *valid time*, which is the time period during which a fact is true in the real world, and *transaction time*, which represents the period during which a record stored in a database is known. Bitemporal databases include both attributes. In this section, we focus on valid-time temporal databases.

Temporal databases using valid time add two more fields to temporal objects: *valid_from* and *valid_to*. These fields specify the period during which a field or relation is valid. Every time the object evolves in time, a new record is inserted with a validity value starting from the insertion time. The *valid_to* field of the previous record is updated to state that the record is no longer valid. In an early work, Clifford et al. [11] define a formal semantics for historical databases and intensional logic. Rose and Segev [12] suggest incorporating temporal structures in the data model itself and extend the entity-relationship data model with temporal concepts. They also discuss the need for a temporal query language for their model and propose some examples. Some of these ideas have been introduced as an extension to the SQL standard, which many commercial tools implement. Similarly to traditional relational databases, temporal ones face serious scalability issues, and new approaches based on NoSQL databases have been proposed [13].

3.2. Time Granularity

Data evolution management requires timestamps to capture the evolution of data over time. Timestamps may vary in granularity as well as in representation. A granularity can be regarded as a mapping from integers to a subset of a time domain. Two well-known formalisms have been proposed to express time granularities: collections and slices. Collections come with two classes of operators, called *dice* and *slice*, and a primitive type called *calendar*. A calendar is an ordered collection composed of infinitely many intervals. The dice operator divides an interval into a finer-grained collection, whilst the slice operator selects specific intervals from collections. Similarly to collections, the slices formalism is based on the calendar primitive; however, calendars are treated as a circular list with neither a first nor a last element. Calendars can be dynamically generated from existing calendars.
Finally, a slice is a symbolic expression denoting a set of not necessarily consecutive intervals identified by their starting point and their duration. The expressions $C$ and $S$ below denote the list of all the Mondays of the year 2016, expressed in the collections formalism and the slices formalism respectively:

- $C = \{\text{Mondays : during : Years} = \{2016\}\}$
- $S = \{2016\}.\text{Years}+\text{all}.\text{Weeks}+\{1\}.\text{Days} \triangleright 1.\text{Days}$

Time granularity is a major challenge when developing applications concerning temporal data [8].

3.3. Temporal Graphs

One of the early works on temporal graphs was introduced by Vassilis Kostakos [14] as a mechanism for understanding the dynamic properties of systems. The temporal dimension leads to completely new insights and knowledge that are not present in static graphs. For example, while the PageRank algorithm enables ranking web pages in search engines, a temporal PageRank may bring more insights about how page ranks change over time, and potentially why. According to his work, a temporal graph is a graph that changes over time, where a change may be characterised by either adding a new vertex or removing an existing one. Although this definition focuses only on the evolution of the graph topology, in many data-driven applications developers may also be interested in the evolution of attribute values. Following the work of Kostakos, several temporal graph processing and storage systems have been proposed in recent years. For example, Chronos [15] and its extension ImmortalGraph [16] are storage and execution engines for iterative graph computation on temporal graphs. Other examples are Historical Graph Store (HGS) [17], Kineograph [18], and GraphTau [6]. With the exception of the work of Hartmann et al. [8], [9], most of these approaches propose data models that define temporal graphs as sequences of graph snapshots at specific timepoints, plus deltas in-between these snapshots. These approaches vary in the granularity level at which they track changes over time, as well as in the underlying data model.

4. Time-aware Modelling

The MDE approach has already proved its capacity to cope with software and hardware heterogeneity, executability, and scalability in self-adaptive systems (e.g. IoT). In particular, the models@runtime approach has gained acceptance and has become the de-facto MDE-based approach to model and execute dynamic adaptive systems. Fouquet et al. introduced an alternative meta-modelling framework, KMF, adequate for modelling and generating models@runtime-based tools. Rapidly, an ecosystem of tools, DSLs, and code generation techniques was built around it in order to simplify the provisioning, deployment, and reconfiguration of system software. Similarly to KMF, CloudMF [19] also targets the integration of dynamic adaptive systems in the Cloud. At a different level of abstraction come MindCPS (doMaIN moDel for CPS) [20] and ThingML [21], two DSLs and code generation frameworks targeting CPS and IoT respectively. While MindCPS consists of a DSML to specify software for CPS using a MAPE-K loop style, ThingML is inspired by the UML component and state-chart diagrams to separate the architecture design from the action language. These languages provide a high level of abstraction to address the heterogeneity and complexity of systems; however, none of them seems to handle the temporal dimension.
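Since the valid-time mechanism of Section 3.1 underpins much of what follows, a small sketch may help fix the idea. The snippet below is a minimal, illustrative C# rendering: the ValidFrom/ValidTo fields mirror the valid_from/valid_to fields from the text, while the class names and the in-memory storage are assumptions made for the example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of the valid-time mechanism from Section 3.1: every change
// inserts a new record whose validity starts at the insertion time, and the
// previously valid record is closed. Everything but the field semantics is
// assumed for illustration.
public class ValidTimeAttribute<T>
{
    private record Version(T Value, DateTime ValidFrom, DateTime? ValidTo);
    private readonly List<Version> history = new();

    public void Set(T value, DateTime now)
    {
        // Close the currently valid record, if any.
        if (history.Count > 0 && history[^1].ValidTo == null)
            history[^1] = history[^1] with { ValidTo = now };
        // Insert the new record, valid from 'now' onwards.
        history.Add(new Version(value, now, null));
    }

    // Time travel: the value that was valid at time t (throws if none).
    public T At(DateTime t) =>
        history.Last(v => v.ValidFrom <= t && (v.ValidTo == null || t < v.ValidTo)).Value;
}
```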
On the other hand, the ER (Entity-Relationship) community has a long-standing history of groundwork on conceptual modelling to support temporal models. Different approaches have been proposed [22], [23], [24]. Generally, they either adapt the semantics of the existing ER model constructs to support temporal data or introduce new constructs to the ER model. Motivated by the lack of support for temporal features in existing (meta-)modelling languages and approaches, we identify the following features that require special attention:

4.1. Evolving Topology and Attributes

Most approaches to systems modelling focus on the static view of the world and neglect the representation of its dynamics. It is only at development time that software developers introduce temporal concerns into the application's business logic. Such knowledge can be captured in advance by raising time awareness at design time. In this perspective, many proposals were introduced in the late 90s for modelling temporal data; however, most of them are designed for ER modelling and, more importantly, are based on a discrete time scale. Spaccapietra et al. [22] propose a conceptual temporal model based on well-known conceptual modelling principles. The authors introduce their solution, MADS, for modelling spatio-temporal applications. They define a concise semantics for different timestamping strategies and levels (attribute, class, and relationship), and introduce four dynamic relationships for modelling dynamic aspects. Finally, they show that their conceptual model can be mapped to traditional temporal databases. Other approaches have followed the same research direction, and an interesting survey expands on the different existing temporal ER models [25]. Complementary approaches [23], [24] were also proposed to guide application designers in modelling temporal aspects. As pointed out by our motivating example, in modern data-driven applications, the concepts in a system do not all change with the same frequency. More importantly, changes in some concepts need to be tracked in near real-time (pseudo-continuously). Unfortunately, none of the existing approaches seems to support these features.

4.2. Evolution Constraints

While the timestamping mechanism enriches the static view of data by recording its evolution in the form of contiguous intervals, evolution constraints are imposed to control how data should evolve over time. Usually, modelling languages do not include constructs to express dynamic constraints. OCL [26] (Object Constraint Language) is the de-facto constraint specification language in MDE. Constraints expressed in OCL must hold at any point in time. They are evaluated against a single system state, except when using @pre or @post, in which case they are evaluated with respect to the previous or next state as well. This type of constraint is suitable for attributes or association values that either have a constant value or regular types. Given the need, discussed earlier, for handling continuous data types, standard OCL is not suitable. Several studies [3], [27], [28] have extended OCL with time. In their paper [27], Hamie et al. introduced two operators into the OCL language: eventually, for describing liveness constraints, and initially, for describing initial constraints. Conrad et al. [28] extend OCL with a set of operators, inspired by temporal logic, to enable better expression of both future and past tenses. Kanso et al.
[3] propose a closely similar extension, which they augment with the concept of state-change events. None of the existing approaches handles time granularities or continuous time. The constraint specification language should have the ability to consider time not only as a discrete set of state changes but also as one global state continuously evolving over time.

4.3. Further Development

We identify the need for time-aware modelling as the first essential brick. Most of the proposed conceptual temporal models agree on several requirements when modelling time-evolving data. In particular, they consider orthogonality the most important requirement: the ability to specify temporal constructors separately and independently from the static constructors (classes, attributes, and relationships). The language should give the application designer the freedom to decide whether or not to add the temporal dimension to a concept in the system. An annotation mechanism can be adopted for this purpose (as is common practice among MDE developers), and a well-defined and concise semantics should be provided. A user-friendly notation for time granularities should also be embedded in the language to help simplify temporal travelling. Finally, the modelling language itself should be simple, visual, and user-friendly.

The second brick is an evolution constraint language, intended to constrain the evolution of data over time. To do so, new temporal types and operators should be supported. The proposed language should be able to traverse the model and time travel regardless of the time granularity used at design time. Finally, the constraint language should consider time as continuous rather than as a discrete time scale. Optionally, the language may support the definition of events, actions, and notifications with different severity levels. In Section 7 we present a brief example of how we imagine our modelling language and evolution constraint language. In future work, we plan to provide a concise semantics and syntax for both languages.

5. Temporal Data Representation and Storage

Interest in scalable model persistence has grown significantly in recent years. Several approaches have been proposed, each relying on a different persistence model and backend [29], [30], [31], [32], [33]. These approaches store only the latest state of the model; none is designed for storing temporal data. A notable exception is the work of [8], [34], which specifically discusses the lack of native mechanisms to efficiently support the notion of history and time in the context of MDE in general and models@run.time [35], [36] in particular. They propose a kind of delta storage based on key/value stores to efficiently persist the history of time-evolving models. This has been implemented and integrated into the open-source framework GreyCat. Other research efforts in MDE have concentrated on maintaining the history of the evolution of model-based artefacts. EMFStore [31] and CDO are some examples among others. They are designed to support modelling in collaborative environments and do not store the evolution of objects and attributes over time.

Data in MDE is represented, in its essential form, by directed typed attributed graphs. In this perspective, an intuitive representation to retain temporal information in MDE would be temporal graphs. Unfortunately, existing temporal graph databases do not support storing typed graphs.
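To make the representation question concrete, the following is a minimal sketch of how a typed temporal graph could retain topology evolution, using the smart grid topology of Section 2: a smart meter's connection to a concentrator becomes an edge with a validity interval, so a "snapshot" at time t can be recovered without duplicating unchanged elements. The sketch is illustrative only; all type and member names are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch of a typed temporal edge set for the smart grid
// topology of Section 2. Each connection is an edge with a validity
// interval; reconnecting a meter closes its open edge and opens a new one.
public class TemporalTopology
{
    private record Edge(string Meter, string Concentrator, DateTime From, DateTime? To);
    private readonly List<Edge> edges = new();

    public void Connect(string meter, string concentrator, DateTime now)
    {
        // Close the meter's currently open edge, if any.
        int open = edges.FindLastIndex(e => e.Meter == meter && e.To == null);
        if (open >= 0) edges[open] = edges[open] with { To = now };
        edges.Add(new Edge(meter, concentrator, now, null));
    }

    // Snapshot query: all meters connected to a concentrator at time t.
    public IEnumerable<string> MetersAt(string concentrator, DateTime t) =>
        edges.Where(e => e.Concentrator == concentrator
                      && e.From <= t && (e.To == null || t < e.To))
             .Select(e => e.Meter);
}
```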
A possible solution to cope with this is to provide a mapping from temporal typed graphs to existing high-performance databases. Campos et al. [37] propose a mapping based on a graph database. This mapping differentiates between four kinds of nodes: object nodes, edge nodes, attribute nodes, and value nodes. Each node is identified by a UID, a name, and an interval in which the node is or was valid. Portal [38] represents a temporal graph using four SQL relations: two valid-time relations represent vertices and edges separately, and the other two relations represent vertex and edge attributes. In the remainder of this section, we survey existing model-data representations in MDE and exhibit their limitations; we then discuss possible temporal model-data mappings in MDE.

5.1. Graph-data Representations in MDE

Several approaches have been proposed to store model data in MDE on top of different kinds of databases. In what follows, we present existing frameworks and data models.

5.1.1. Relational databases. Mapping data in MDE to relational databases is a long-standing subject. CDO [32] is the de-facto standard solution for handling large models in EMF. It relies on relational databases for mapping and storing models. It was initially envisioned as a framework to manage large models in a collaborative environment with a low memory footprint. CDO adopts a traditional UML-to-relational mapping, where classes and multi-valued references are mapped to relations, class attributes are organised in columns of the corresponding relations, and, finally, objects are represented by tuples. Unfortunately, the model-data mapping in CDO does not support the temporal dimension.

5.1.2. NoSQL databases. One good example illustrating different model-data mappings to NoSQL databases is NEOEMF [39]. It is a multi-backend model persistence framework that couples state-of-the-art NoSQL stores. NEOEMF/MAP [31] relies on a map-based data model to store model elements using a hashtable data structure. NEOEMF/COLUMN [30] is designed to enable the development of distributed MDE-based applications by relying on a distributed column store. Finally, NEOEMF/GRAPH [29] uses GraphDB to store model data in its natural form, by means of an attributed labelled graph. The model-data mapping in NEOEMF/GRAPH is straightforward, except for the types of model elements, which are represented through a relationship labelled INSTANCEOF towards a vertex representing the type. NEOEMF/COLUMN uses a single table with three column families (Type, Properties, and Container) to store the information of the models. NEOEMF/MAP uses a similar mapping to NEOEMF/COLUMN, where the column families are represented as separate hashtables.

5.2. Further Development

Except for GreyCat, none of the presented solutions supports storing typed temporal graphs. However, GreyCat is not EMF-compliant and relies on its own meta-modelling framework, thereby complicating the use of existing EMF-based tools. The reason for this disruptive change is the use of time as a first-class entity, cross-cutting all model access. The pros and cons of this change are discussed further in Section 7. Nonetheless, we believe that novel model-data mappings to existing temporal databases should be envisaged, by considering time as a special attribute. The Portal database can be an inspiration for CDO, as both rely on SQL as the underlying data model.
However, as pointed out by existing work [31], implementing complex algorithms atop relational databases fails drastically due to the expensive cost of join operators. Likewise, the NEOEMF/GRAPH and NEOEMF/COLUMN model-data mappings can draw inspiration from existing work [15], [37] to propose a novel model-data mapping supporting the temporal dimension. The proposed mapping should be tailored in a way that guarantees good performance while carrying out common temporal graph operations (Section 6). Moreover, novel data caching and indexing techniques adequate for storing temporal graphs should be investigated. Indeed, the temporal dimension exhibits a locality layout other than the structural one: instead of storing connected elements in the same partition, which is good for performing graph traversals on the same snapshot, one may instead be interested in storing consecutive timepoints together, which is good for performing time travels. Also, graph data compaction techniques can be considered to reduce the amount of persisted data. Finally, new means to automatically generate APIs adapted to manipulating temporal models should be provided. In future work, we plan to extend existing persistence frameworks such as NEOEMF with capabilities to store temporal data and to generate an adequate API to query it. Conversely, we also plan to continue the development of GreyCat, to bring traditional meta-modelling techniques closer to meta-models with time as a first-class entity.

6. Temporal Data Processing in MDE

The most important step in a data integration project is the analysis and processing of the collected data to extract valuable knowledge. A well-known example activity in temporal data analysis is temporal data mining, which seeks to exhibit temporal patterns in time-evolving data. Typical tasks involved in temporal data mining include temporal clustering, temporal prediction, and temporal pattern analysis. These tasks, as well as other temporal ones, involve iterative interaction with temporal data stores before the desired knowledge is acquired. Although languages for querying temporal data exist, they are not adapted for expressing temporal graph traversals and temporal graph matching. In particular, most existing languages are SQL-like [37], [38]. Expressing graph queries with this family of languages is not transparent and requires application developers to be aware of the underlying graph-data mapping. Moreover, SQL-like languages have a heavy aggregation syntax that results in unmanageable queries. Furthermore, recent work [40] argues that, even though commonly used graph algorithms and operations such as depth-first search and breadth-first search are well-defined for static graphs, they are non-trivial when the temporal dimension is considered. We believe that providing a temporal graph query language would simplify the development of new temporal graph algorithms and operations in an easy and intuitive manner. In this section, we investigate existing temporal graph query languages and identify their limitations.

6.1. Temporal Query Languages

The most predominant language style for querying temporal databases is SQL-like. Inspired by TSQL2 and the recent temporal extensions to SQL:2011, existing temporal query languages supply different kinds of temporal databases with means to effectively retrieve and process data. TSQL2 introduces three temporal types for querying data: date-time, period, and interval. The first corresponds to a time t, without duration.
The second corresponds to a set of consecutive snapshots over a precise period, identified by two boundaries, while the last corresponds to a duration, which is not anchored on the time axis. TSQL2 provides a syntactic extension to SQL statements that lets users specify the period of interest. Intervals are used inside where clauses; however, the period columns (valid_from and valid_to) must be explicitly mentioned. TSQL2 also ships with period predicates for expressing conditions involving one or more time intervals, such as contains, overlaps, equals, etc.

Campos et al. [37] introduce TEG-QL, a graph query language inspired by SQL and Cypher [41]. The from clause contains the pattern to be matched, in the form of one or more paths, over which a selection is performed; the select clause indicates the paths or attributes to be returned; and the where clause expresses the filtering predicate. TEG-QL defines two temporal modifiers, snapshot and in. While snapshot enables slicing the results at a specific time granularity, the in modifier enables the specification of a time interval in which attribute values, nodes, and edges are valid. TSQL2 is not well suited for expressing temporal pattern matching, and both languages fail to express graph traversals.

Portal [38] proposes a powerful API to query and process temporal graphs. It relies on a graph algebra, TGraph, that extends relational algebra by specifying how temporal graph operations are applied to temporal relations. In particular, TGraph introduces the slice operator, which is responsible for cutting a temporal slice from a TGraph with respect to a time period. TGraph comes with a set of aggregation operators, such as avg and sum, over time-evolving values. Portal is well suited for defining temporal graph processing operations; however, it is not suitable for processing typed temporal graphs.

6.2. Further Development

Many graph processing frameworks have exploited the strong emergence of systems and programming models for distributed and parallel processing to leverage the processing of big data. Some of these frameworks come with high-level declarative languages designed for specific applications such as data warehousing and querying. Unfortunately, none of these languages is suitable for temporal graph queries and traversals over typed attributed graphs. We argue that a declarative high-level language for querying and traversing temporal graphs is of great importance. Except for Portal, existing languages are not suitable for expressing temporal graph algorithms. The language should enable the expression of temporal pattern matching and temporal graph traversal, as they are keystones for expressing temporal graph algorithms. Moreover, SQL-like temporal languages express time constraints only in the where clause; expressing two (or more) distinct time-travel expressions over two different subgraphs within the same graph traversal requires two (or more) nested select expressions, which results in cumbersome queries. The proposed language should seamlessly enable moving from the temporal dimension to the structural one, and vice versa. The language should support temporal aggregation operations, grouping data by time or by structure, and may express queries that return time intervals. Finally, the language should embed a user-friendly notation to express different granularities within the same query.
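To illustrate why dedicated support matters, the following runnable C# fragment answers a typical two-timepoint question over the hypothetical TemporalTopology sketch from Section 5 above, using plain LINQ. Even this simple query forces the structural and temporal concerns to be interleaved by hand, which is precisely what a temporal query language should lift from the developer; the data values are invented for the example.

```csharp
// Plain-LINQ version of a two-timepoint query over the TemporalTopology
// sketch: "which meters were on concentrator C1 at both dates?" Each
// timepoint needs its own hand-written filter; a dedicated temporal
// language would express the time travel directly.
var topo = new TemporalTopology();
topo.Connect("M1", "C1", new DateTime(2016, 1, 1));
topo.Connect("M2", "C1", new DateTime(2016, 3, 1));
topo.Connect("M1", "C2", new DateTime(2016, 6, 1)); // M1 re-connects elsewhere

var winter = topo.MetersAt("C1", new DateTime(2016, 2, 1)).ToList(); // [M1]
var summer = topo.MetersAt("C1", new DateTime(2016, 7, 1)).ToList(); // [M2]
var stayed = winter.Intersect(summer);   // meters on C1 at both timepoints
```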
Performance-wise, the language should deliver efficiency both in terms of time and memory consumption. We can rely on indexing and caching techniques provided by the persistence framework to speed up query evaluation, and on data compaction techniques to keep a low memory footprint.

7. Discussion

7.1. Towards a Time-aware Modelling Language

Listing 1 shows an example of how a temporal smart grid metamodel (as described in Section 2) could be defined using an imagined textual modelling language that allows temporal properties to be defined. The language used in this example is inspired by existing work [3]. The particularity of this language is the integration of temporal annotations, namely temporalSensitivity, temporalPeriodicity, temporalConstraint, and precision. With these annotations, attributes and relationships can, in a declarative way, be extended with temporal semantics. For example, in the SmartMeter metaclass, the attribute activeEnergyConsumed is decorated with an annotation temporalSensitivity, which declares a granularity of 15 minutes. Based on this definition, consumption values would only be stored every 15 minutes, regardless of the measurement pace of the real sensor; all values in between would be averaged. Similarly, the concentrator relationship is annotated with a granularity of 1 second in order to be able to track its changes in near real time. The flexibility of such an annotation mechanism enables a metamodel design that mixes data evolving at different paces. The `temporalPeriodicity` annotation has no effect on storage; however, it allows reasoning engines to optimise their checks according to the expected periodicity of changes. Following this idea, we could declare that voltage attributes can be compared daily. As for the annotation `precision`, it mixes temporal and domain information. Used on the voltage attribute, it describes the fact that temporal variations are meaningless if not greater than a threshold value. In other words, it specifies that the voltage should be stored only if a variation of more than 0.1 volts is measured; under this threshold, values are ignored. Such a specification allows the model to automatically filter out insignificant data based on expert knowledge. Finally, the annotation `temporalConstraint` is used to specify evolution constraints over attributes or relationships. For example, in the metaclass `Cable`, the temporal constraint over the attribute `isOverloaded` says that a Cable cannot be overloaded for more than 15 minutes. The `temporalConstraint` uses a conditional expression (if–then–endif) to check that the attribute value `isOverloaded` has been equal to true for less than 15 minutes. The operation `maxDuration(15.MINUTE)` returns true if the current value has been valid for less than 15 minutes.
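As a minimal sketch of the storage semantics these annotations declare, the following Python class reproduces the averaging behavior of temporalSensitivity and the filtering behavior of precision. The 15-minute granularity and the 0.1 V threshold come from the example above; the class itself and its internals are assumptions made for illustration.

```python
class TemporalAttribute:
    """Stores a value history, applying temporalSensitivity (averaging inside
    a granularity bucket) and precision (dropping insignificant variations)."""

    def __init__(self, sensitivity=None, precision=None):
        self.sensitivity = sensitivity  # bucket width in seconds, None = store all
        self.precision = precision      # minimal significant variation, None = store all
        self.history = {}               # timepoint/bucket -> stored value
        self._bucket, self._pending = None, []

    def record(self, t, value):
        if self.precision is not None and self.history:
            last = self.history[max(self.history)]
            if abs(value - last) <= self.precision:
                return                  # variation under the threshold: ignored
        if self.sensitivity is None:
            self.history[t] = value
            return
        bucket = t - t % self.sensitivity
        if bucket != self._bucket:      # new bucket: start averaging afresh
            self._bucket, self._pending = bucket, []
        self._pending.append(value)
        self.history[bucket] = sum(self._pending) / len(self._pending)

energy = TemporalAttribute(sensitivity=15 * 60)  # activeEnergyConsumed
for t, kwh in [(0, 2.0), (300, 4.0), (900, 5.0)]:
    energy.record(t, kwh)
assert energy.history == {0: 3.0, 900: 5.0}      # in-between values averaged

voltage = TemporalAttribute(precision=0.1)       # voltage
for t, v in [(0, 230.00), (1, 230.05), (2, 230.20)]:
    voltage.record(t, v)
assert voltage.history == {0: 230.00, 2: 230.20} # 0.05 V variation filtered out
```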
7.2. Time as a First-Class Entity

Today, OCL [26] is still the de-facto standard in the modelling community for describing functional and non-functional properties in the form of metamodel extensions. OCL follows one of the main principles of model-driven engineering: the separation of concerns between structure and behaviour definition. In the past, various extensions of OCL have been proposed, e.g. [28] and, more recently, [42], [3], to add support for temporal constraints. All of these works have in common that they propose to use temporal OCL in order to check the evolution of model instances over time.

In this paper, we envisaged the use of time as a fully functional property, which enables the temporal behaviour of a system to be defined. For the sake of simplicity, and to introduce temporal properties as smoothly as possible, we suggest in this paper defining temporal behaviour as a cross-cutting annotation in an OCL-like style. We envisage the structure not as a static canvas for data, but as an entry point to dynamically evolving data. In other words, an attribute with temporal semantics does not have a single value, but one value for every given point in time. This can be compared with time series [43], which have lately attracted a lot of attention. On the other hand, we also discussed the idea of considering time as a first-class property cross-cutting any model element, i.e. every element in a model, as suggested by Hartmann et al. [34]. In this way, every model element would always have temporal semantics and would be able to evolve independently over time. This profound shift from static modelling to time-aware modelling enforces temporal considerations for every element. Traditional modelling concepts could be seamlessly mixed with temporal definitions, such as `@temporalSensitivity(-1)`, which could mean that every temporal variation is stored as its own timepoint. This shift has already begun in the database community, which defines the notion of temporal graphs, where every node has a time attribute [9], [15], [37]. Our hypothesis is that temporal knowledge is part of a domain itself, and we believe that MDE and its tooling ecosystem have the potential to pave the way to more structured, typed, and safe temporal data management systems.

8. Conclusion

In this paper we have argued for the need to raise time awareness in MDE. In particular, we discussed that raising time awareness in MDE involves, at least, the integration of the temporal dimension in an orthogonal and seamless manner, the scalable persistence of historical data, and, finally, the ability to intuitively query and process this data. We investigated the state of the art in these areas, exhibited the missing points, and pointed to some research directions towards achieving time awareness in MDE. Finally, we gave a quick overview of how we imagine a time-aware modelling language as well as an evolution constraint language. In future work, we plan to provide full support for these two languages, and we intend to provide an adequate persistence framework and an expressive query language to enable the development of temporal graph algorithms and operations.

References
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01580554/file/raising-time-awareness.pdf", "len_cl100k_base": 7957, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 28140, "total-output-tokens": 8482, "length": "2e12", "weborganizer": {"__label__adult": 0.0004642009735107422, "__label__art_design": 0.0006270408630371094, "__label__crime_law": 0.0004520416259765625, "__label__education_jobs": 0.0009317398071289062, "__label__entertainment": 0.00013589859008789062, "__label__fashion_beauty": 0.000274658203125, "__label__finance_business": 0.0005826950073242188, "__label__food_dining": 0.0004901885986328125, "__label__games": 0.0006628036499023438, "__label__hardware": 0.001220703125, "__label__health": 0.0009541511535644532, "__label__history": 0.0006194114685058594, "__label__home_hobbies": 0.00015270709991455078, "__label__industrial": 0.0008702278137207031, "__label__literature": 0.0005807876586914062, "__label__politics": 0.0004258155822753906, "__label__religion": 0.0006241798400878906, "__label__science_tech": 0.251220703125, "__label__social_life": 0.00014340877532958984, "__label__software": 0.01263427734375, "__label__software_dev": 0.72412109375, "__label__sports_fitness": 0.0003294944763183594, "__label__transportation": 0.0011806488037109375, "__label__travel": 0.0003027915954589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40984, 0.02828]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40984, 0.44656]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40984, 0.91043]], "google_gemma-3-12b-it_contains_pii": [[0, 1118, false], [1118, 5595, null], [5595, 11525, null], [11525, 17723, null], [17723, 23787, null], [23787, 29901, null], [29901, 36142, null], [36142, 40984, null], [40984, 40984, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1118, true], [1118, 5595, null], [5595, 11525, null], [11525, 17723, null], [17723, 23787, null], [23787, 29901, null], [29901, 36142, null], [36142, 40984, null], [40984, 40984, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40984, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40984, null]], "pdf_page_numbers": [[0, 1118, 1], [1118, 5595, 2], [5595, 11525, 3], [11525, 17723, 4], [17723, 23787, 5], [23787, 29901, 6], [29901, 36142, 7], [36142, 40984, 8], [40984, 40984, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40984, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
96468e99ad6262e04f1bed3874e0b71d914399ef
Introduction for using UML, Edition 2, 2006
Mikael Åkerholm, Ivica Crnković, Goran Mustapić, Mikael Davidsson
Mälardalen University, Västerås, Sweden, 2006

Abstract
The purpose of this document is to provide a brief introduction to the Unified Modeling Language (UML) for use in the jointly held course in distributed development in Västerås and Zagreb. The focus is on using UML in general, not in combination with a particular tool, problem domain or programming technique. UML was intentionally developed as a language for modeling object-oriented systems. Its use has, however, spread widely. Today UML is used for system specifications. In different domains (for example, the distribution of electrical power), UML is used for the specification and standardization of different systems or parts of systems. This standardization makes it possible for different vendors to produce products that comply with the standard specification. From UML it is possible to automatically generate different types of descriptions (for example, specifications in XML), or even to automatically create software code. UML is becoming a standard tool for software and system engineers.

TABLE OF CONTENT
1. INTRODUCTION
2. DIFFERENCES BETWEEN UML 1.4 AND 2.0
2.1 New Diagrams
2.2 Diagram Updates
3. BASIC BUILDING BLOCKS OF UML
3.1 Things
3.2 Relationships
4. USE CASE DIAGRAMS
5. CLASS DIAGRAMS
6. SEQUENCE DIAGRAMS
7. STATE DIAGRAMS
8. ACTIVITY DIAGRAMS
9. A SMALL SYSTEM DESIGN
10. REFERENCES

1. INTRODUCTION
UML is a result of the evolution of object-oriented modeling languages. It was developed by the Rational Software Company by unifying some of the leading object-oriented modeling methods:
- Booch, by Grady Booch,
- OMT (Object Modeling Technique), by Jim Rumbaugh, and
- OOSE (Object-Oriented Software Engineering), by Ivar Jacobson.
The authors of these languages are sometimes called the three amigos of software engineering. They participated in the roughly twenty-person group that was formed in 1994 and submitted UML 1.0 to the Object Management Group (OMG) in 1997. The current version of UML is 2.0 (published in October 2004). UML 2.0 is divided into four parts; the first part, the Superstructure Specification, is ready, but work continues on three smaller parts that are to become official at the end of 2005. UML is used for modeling software systems; such modeling includes analysis and design. In an analysis, the system is first described by a set of requirements, and then by identification of system parts on a high level. The design phase is tightly connected to the analysis phase. It starts from the identified system parts and continues with detailed specification of these parts and their interaction. For the early phases of software projects, UML provides support for identifying and specifying requirements as use cases. Class diagrams or component diagrams can be used for identification of system parts on a high level. During the design phase, class diagrams, interaction diagrams, component diagrams and state chart diagrams can be used for comprehensive descriptions of the different parts of the system. We will start by giving the basic building blocks of UML, and then introduce the most essential UML diagrams one by one, each with a small example. Finally, an example with a couple of diagrams is given to illustrate how the diagrams can be combined to describe the design of a small software system.
2. DIFFERENCES BETWEEN UML 1.4 AND 2.0
Between versions 1.4 and 2.0 there was a version 1.5, released in 2002. Version 1.5 contained some minor changes; one change was the addition of action semantics to UML, a step towards making UML usable as a programming language. It was released while waiting for version 2.0, so that people could start using this kind of functionality. In UML 2.0 there are some major changes and new functionality. The most obvious change is that there are five new diagrams. Two of them (Object and Package) were used in UML 1, but they were not official diagrams. Composite Structure, Interaction Overview and Timing are the three other new diagrams. The collaboration diagram from UML 1 has been renamed to Communication diagram. The old diagrams have also received some updates, some more than others.

2.1 New Diagrams
Composite Structure Diagrams; depict the internal structure of a classifier (such as a class, component, or use case), including the interaction points of the classifier to other parts of the system at runtime.
Timing Diagrams; used to explore the behavior of one or more objects throughout a given period of time. There are two basic flavors of timing diagram, the concise notation and the robust notation. Timing diagrams are often used to design embedded software, such as control software for a fuel injection system in an automobile, although they occasionally have their uses for business software too.
Interaction Overview Diagrams; a variant of an activity diagram that gives an overview of the control flow within a system or business process. Each node/activity within the diagram can represent another interaction diagram.

2.2 Diagram Updates
Most of the updates remove features that were not commonly used in UML 1.x.
Class Diagrams; Discontinuous multiplicities have been removed: there is no more [2, 4] meaning (2 or 4). Now there are only the standard multiplicities like 1, 0..1, *, etc. The frozen property has been removed. Classes can require interfaces as well as provide them. Instances of objects are now instance specifications. The Active Class symbol has changed.
Sequence Diagrams; an interaction frame notation has been added to this diagram. This makes it possible to handle iterative, conditional and various other controls of behavior. The old iteration marks and guards on messages have been removed from the sequence diagram.
State Machine Diagrams; there is no longer any separation between short-lived actions and long-lived activities, as there was in UML 1. In UML 2 both are called activities; the term do-activity is used for long-lived activities.
Activity Diagrams; in UML 1 these were considered a special case of state diagrams. This caused some problems for people modeling workflows, so in UML 2 this is no longer the case. This means that we no longer need to balance our forks and joins as we had to in UML 1. Do not use multiple incoming flows to an action block; put a join before the action block instead.

3. BASIC BUILDING BLOCKS OF UML
The basic building blocks in UML are things and relationships; these are combined in different ways, following different rules, to create different types of diagrams. In UML, there are 13 types of diagrams; below is a list and brief description of them. The more in-depth descriptions in this document focus on the first five diagrams in the list, which can be seen as the most general, sometimes also referred to as the UML core diagrams.
1. Use case diagrams; show a set of use cases, and how actors can use them.
2. Class diagrams; describe the structure of the system, divided into classes with different connections and relationships.
3. Sequence diagrams; show the interaction between a set of objects, through the messages that may be dispatched between them.
4. State chart diagrams; state machines, consisting of states, transitions, events and activities.
5. Activity diagrams; show the flow through a program from a defined start point to an end point.
6. Object diagrams; a set of objects and their relationships; this is a snapshot of instances of the things found in the class diagrams.
7. Communication diagrams (Collaboration diagrams in UML 1); a way to show how objects are linked together and how messages are sent between them.
8. Component diagrams; show organizations and dependencies among a set of components. These diagrams address the static implementation view of the system.
9. Deployment diagrams; show the configuration of run-time processing nodes and the components that live on them.
10. Package diagrams; used to group classes at compile time to get an easier overview of a bigger system with a lot of classes.
11. Composite Structure diagrams; runtime decomposition of a class; like a package diagram, but showing the grouping at runtime instead of compile time.
12. Interaction Overview diagrams; a mix of sequence diagrams and activity diagrams.
13. Timing diagrams; show interaction between objects based on timing. This type of diagram is mostly intended for hardware design.

3.1 Things
Things are used to describe different parts of a system; the types of things in UML are presented in Table 1.

Table 1, UML things

| Name | Description | Variations/related elements |
|------|-------------|-----------------------------|
| Class | Description of a set of objects that share the same attributes, operations, relationships and semantics. | actors, signals, utilities |
| Interface | A collection of operations that specify a service of a class or component. | |
| Collaboration | An interaction and a society of roles and other elements that work together to provide some cooperative behavior that is bigger than the sum of all the elements. Collaborations represent implementations of the patterns that make up the system. | |
| Actor | The outside entity that communicates with a system, typically a person playing a role or an external device. | |
| Use Case | A description of a set of sequences of actions that a system performs that produce an observable result of value to a particular actor. Used to structure behavioral things in the model. | |
| Active class | A class whose objects own a process or execution thread and therefore can initiate a control activity on their own. | |
| Component | A physical and replaceable part that conforms to and provides the realisation of a set of interfaces. | |
| Node | A physical resource that exists at run time and represents a computational resource. | |
| Interaction | A set of messages exchanged among a set of objects within a particular context to accomplish a specific purpose. | |
| State Machine | A behavior that specifies the sequences of states an object or an interaction goes through during its lifetime in response to events, together with its responses to those events. | |
| Activity | A behavior that specifies the sequences of steps a computational process performs during its lifecycle. | |
| Packages | General purpose mechanism for organizing elements into groups. | |

3.2 Relationships
Relationships are used to connect things into well-defined models (UML diagrams). The types of UML relationships are shown in Table 2.

Table 2, UML relations

| Name | Description | Variations |
|------|-------------|------------|
| Dependency | A semantic relationship between two things in which a change to one thing may affect the semantics of the dependent thing. | |
| Association | A structural relationship that describes a set of links, where a link is a connection between objects. Aggregation and composition are "has-a" relationships. Aggregation (white diamond) is an association indicating that one object is temporarily subordinate to the other, while composition (black diamond) indicates that an object is a subordinate of another throughout its lifetime. | Aggregation, Composition |
| Generalization | A specialization/generalization relationship in which objects of the specialized element are substitutable for objects of the generalized element. | |
| Realization | A semantic relationship between two classifiers, where one of them specifies a contract and the other guarantees to carry out the contract. Realizations are used between interfaces and the classes or components that realize them, and between use cases and the collaborations that realize them. | |

4. USE CASE DIAGRAMS
Use case diagrams are made in an early phase of a software development project. They express how it should be possible to use the final system. It is important to focus on specifying how an external user interacts with the system, not on trying to specify how the system shall solve the tasks. The granularity of a use case is typically larger than a single operation, but smaller than a whole system. Use cases are a good way to express the functional requirements of a software system; they are intuitive and easy to understand, so they can be used in negotiations with non-programmers. The participants in a UML use case diagram are use cases, one or several actors, and relations (associations and generalizations) between them.
Some small examples are given in Figures 1 to 3. Figure 1 shows how a cash dispenser system can be used: an actor Customer is associated with the use cases Withdraw Money and Get Account Balance. In general, use case diagrams should be as simple as in Figure 1.

Figure 1, use cases of a cash dispenser system

In Figure 2, some use cases of a web-based fruit shop are shown; in this example the generalization symbols are used. There are generalizations between the actors Shop Assistant and User, and between Customer and User, meaning that both a Shop Assistant and a Customer are Users and shall both be able to browse fruits. But only a Shop Assistant can use the system through the Add Fruits use case; likewise, only a Customer is associated with the Buy Fruits use case.

Figure 2, use case diagram for a web shop

Figure 3 illustrates that use cases can include each other; in this example it shows that using a web surfing station includes a login. The user, which is the actor, can use the system to access the internet through the Access Web use case, but that includes the Login use case.

Figure 3, use case for a web surfing station

5. CLASS DIAGRAMS
Class diagrams can profitably be used both in the early phases of a project and during detailed design activities. This is possible mainly because class diagrams can be drawn conceptually for high-level activities, as well as in detail later in the project. Class diagrams express the static structure of a system, divided into different parts called classes, and also which relations the classes have to each other. A class diagram consists of classes, associations and generalizations, and can exist at several different levels. Below is an identification of three different useful levels, starting with the least detailed:
- Conceptual class diagrams (conceptual model); represent concepts of the problem domain.
- High-level class diagrams (type model); describe static views of a solution to a problem, through a precise model of the information that is relevant for the software system.
- Detailed class diagrams (class model); include data types, operations and possibly advanced relations between classes.
The following example illustrates the design of the software system in a vending machine for soft drinks, during different phases of development. Figure 4 is a conceptual class diagram done early in the project, figure 5 is a high-level design, and finally figure 6 is the detailed design. In Figure 4, the conceptual class diagram of the problem domain of vending machines is shown. The diagram shows that the problem domain is concerned with Coins, Vending Machines, Soda Cans, and Customers; it also shows how they are related to each other through undirected associations. For instance, it shows that the Vending Machine is associated with all other conceptual classes in the diagram, whereas Coins are only associated with the Vending Machine class and the Customer class.

Figure 4, conceptual class diagram of a vending machine

Figure 5 shows the high-level class diagram for the same vending machine software project. Some names of the classes in figures 4 and 5 may be the same, but they do not represent the same thing; in the former, conceptual case, the classes show different physical concepts of the problem domain and how they relate; that diagram could have been specified by a non-programmer.
The high-level class diagram in Figure 5 shows precisely what information the software system must handle and represents a solution to the problem, in the form of attributes in some of the classes. The solution adds two classes: a Coin Handler class for handling the currently inserted amount and dealing with the coins, and a Stock class for handling the soda cans.

Figure 5, high-level class diagram

The customer is no longer related to the Coins class or the Soda Can class; this is a point where there is an obvious difference from the physical relations in the conceptual diagram (Figure 4). The Customer class could, for instance, have been related to the Coins class, but they are not related in this particular problem solution. Furthermore, the Coins class has three specializations in the solution, representing coins with different values. Figure 6 is a detailed class diagram showing the concrete data types of attributes, and the operations provided by the different classes, for the same example. A detailed class diagram also refines the relations between different classes, often through aggregations or compositions with multiplicity defined. The difference between aggregation and composition (filled or unfilled diamond) is a little bit vague, so a suggestion is to make it a practice to use one of them consistently.

Figure 6, a detailed class diagram of the vending machine

Starting from the Customer class, we can see that it is related to the Vending Machine class, which provides three operations. The Vending Machine class also has one instance of the Stock class and one of the Coin Handler class. If we focus on the aggregation diamond between the Coin Handler class and the Coins class, it gives the information that one Coin Handler can have zero to many Coins. The relations between the Stock and Soda Can classes are refined in a similar way, i.e., with aggregation and multiplicity.
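To make the mapping from the detailed class diagram to source code concrete, the following sketch shows one possible implementation of the Figure 6 design. Class and operation names follow the text; the types, method bodies and the price handling are assumptions for illustration (Python is used here for brevity, although the design is language-independent).

```python
class Coin:
    def __init__(self, value):
        self._value = value

    def get_value(self):
        return self._value

class CoinHandler:
    def __init__(self):
        self._coins = []                 # aggregation: zero to many Coins

    def insert(self, coin):
        self._coins.append(coin)

    def check_amount(self):
        # sums the value of every inserted Coin instance
        return sum(c.get_value() for c in self._coins)

class SodaCan:
    pass

class Stock:
    def __init__(self, cans):
        self._cans = cans                # aggregation: zero to many Soda Cans

    def get_soda(self):
        return self._deliver_soda()      # internal call

    def _deliver_soda(self):
        return self._cans.pop()

class VendingMachine:
    def __init__(self, stock):
        self._stock = stock              # one instance of Stock
        self._handler = CoinHandler()    # one instance of Coin Handler

    def choose_soda(self, price):        # price handling is assumed
        if self._handler.check_amount() >= price:
            return self._stock.get_soda()
        return None
```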
6. SEQUENCE DIAGRAMS
UML sequence diagrams are used to model the flow of control between objects. It can be hard to understand the overall flow in a complex system without modeling it. Sequence diagrams model the interactions through messages between objects; it is common to focus the model on scenarios specified by use cases. It is also often useful input to the detailed class diagram to try to model the specified use cases with sequence diagrams; forgotten but necessary operations and relations are usually found. The diagrams consist of interacting objects and actors, with messages in between them. Figure 7 shows an example of the use case where a customer successfully buys a soft drink from the vending machine modeled by the class diagrams in the class diagram chapter. First of all, the vertical lines denote time, and the rectangles on the lines denote the appearance of the objects. In the diagram, the actor initiates the activity by the message chooseSoda() to the Vending Machine. The Vending Machine in turn triggers the operation checkAmount(), which is implemented by the CoinHandler class. The CoinHandler class has several instances of the Coin class (one for each coin), and for every one of them the CoinHandler class calls the getValue() operation. In this sequence diagram there is no distinguished notation for iterative calls, such as the mentioned case where every instance is called. However, sometimes it might be useful to express iterations such as this, and UML has support for it, but most often it is better to keep down the complexity of the figures. The hatched arrow from Coins to CoinHandler is a return arrow, indicating that some kind of return flow is taking place; it is the same type of return arrow from CoinHandler to VendingMachine. Return arrows can be used if desired, but they are often not necessary; it is again often best to keep things as simple as possible. Moving to the Stock class and looking at the communication it generates, we can see that there is one arrow, the one marked deliverSoda(), that goes to the class itself. That is an internal method call, which is generated by the invoked getSoda() method.

7. STATE DIAGRAMS
UML state charts are most often used for low-level design, like modeling the internal behavior of a complicated class. But they are also useful at a higher level for modeling different states of a whole system; this can be compared to the usage of class diagrams at several levels. The basic elements in a state chart are states and transitions; in Figure 8 the basic elements and the notation are shown. To the left in the figure there is a state with an arrow symbolizing a state transition going up and then back to itself again, so in this case it is not a transition to another state. The notation of a state transition is shown in the text on top of the arrow: Event is the event that triggers the transition; inside the brackets is the Guard Condition, an additional condition that needs to be fulfilled for the transition to be taken; finally, the Action is the action that will take place during the transition. Rightmost in the figure we have two special symbols indicating the start and the stop state, respectively. Figure 9 shows a high-level state chart diagram, presenting state transitions on the system level in an airplane. This diagram is completely without events, guards and actions, but if they are possible to identify and give relevant information on this level, they shall be used. The start state is located to the left in the figure; the initial state is On Ground, followed by Take Off, to Flying, to Landing and finally back to On Ground again. An example of a lower-level state diagram is the one in Figure 10. It is a candy machine, and the start state is again located to the left in the figure. From the start state there is an unconditional transition to the IDLE state; during the transition an action Clear Amount takes place. From the idle state there are two transitions: one leading back to the IDLE state, which is triggered by the Candy Choice event, and another, triggered by the Money Insertion event, leading to the READY TO SERVE state with the Increase Amount action. The transitions to and back to the READY TO SERVE state are triggered by two different events. One is the Candy Choice event again, with a guard Amount Too Small; this shall be read as: if the condition is true, the transition is taken. Notice that there is a transition from the READY TO SERVE state to the AMOUNT ENOUGH state, which is triggered by the same event Candy Choice but has another guard condition, Amount Enough. This illustrates the usage of guard conditions. If the current state is READY TO SERVE and an incoming event is Candy Choice, the machine takes the transition to the AMOUNT ENOUGH state if the condition Amount Enough is true; otherwise the Amount Too Small condition must be true and the machine takes the transition back to the READY TO SERVE state again.

Figure 10, state diagram of a candy machine
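The candy machine in Figure 10 also illustrates how a state chart maps to code. The sketch below encodes the states, events, guards and actions described above; the candy price, the coin values and the self-loop for repeated money insertion are made-up details.

```python
# A minimal sketch of the Figure 10 candy machine as an explicit state
# machine: each (state, event) pair is checked against a guard, performs an
# action, and selects the next state.

PRICE = 10  # hypothetical candy price used by the guards

class CandyMachine:
    def __init__(self):
        self.state = "IDLE"   # the entry transition performs Clear Amount
        self.amount = 0

    def on_event(self, event, coin_value=0):
        if self.state == "IDLE":
            if event == "Candy Choice":
                pass                                 # loops back to IDLE
            elif event == "Money Insertion":
                self.amount += coin_value            # action: Increase Amount
                self.state = "READY TO SERVE"
        elif self.state == "READY TO SERVE":
            if event == "Candy Choice" and self.amount >= PRICE:
                self.state = "AMOUNT ENOUGH"         # guard [Amount Enough]
            elif event == "Candy Choice":
                pass                                 # guard [Amount Too Small]
            elif event == "Money Insertion":
                self.amount += coin_value            # assumed self-loop
        return self.state

m = CandyMachine()
m.on_event("Money Insertion", coin_value=5)
assert m.on_event("Candy Choice") == "READY TO SERVE"   # amount too small
m.on_event("Money Insertion", coin_value=5)
assert m.on_event("Candy Choice") == "AMOUNT ENOUGH"
```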
8. ACTIVITY DIAGRAMS
Activity diagrams can be used in many places in the design process, sometimes even before use case diagrams, for understanding the workflow of a process. But they can also be used for defining how use cases interact, or even for detailed design. The basic elements in activity diagrams are activities, branches (conditions or selections), transitions, forks and joins. An example of the workflow of building a house is visualized in Figure 11. The diagram starts in the black dot at the top, which is the start state. The first activity in the house building process is to select a site, followed by making a bid plan. After the bid plan there is a branch in the workflow: either the bid plan is accepted, or, when it is not accepted, the flow leads back to the bid plan activity again, which means that the bid plan has to be refined. Continuing down to the fork, in the case where the bid plan has been accepted, there are two flows out of it, meaning that there are parallel activities. Trade work and site work take place in parallel and end in the join symbol below them, with one single workflow out of it leading to the construction activity, which is the last activity before the flow reaches the end symbol.

Figure 11, the workflow of building a house

Figure 12 is an example of an on-line record store. In this diagram, a feature called swim lanes is represented. Swim lanes are useful when a designer wants to express how related activities can be grouped. They can for instance be grouped because they belong to the same use case, to the same organizational unit, or to different nodes in a distributed system. Here the modeler has chosen to have the swim lanes Customer, Shop System and Shop Assistant. The flow starts in the swim lane for Customer-related activities with a login; if the Shop System accepts the login, it lists the records. The customer picks some records and has to confirm the choice after a listing from the shop system. Upon a confirmation from the customer, the Shop System calculates the price for the records and prints the invoice. The invoice is later picked up by a Shop Assistant, who carries out the order.

Figure 12, activity diagram of the flow in a web-based record store

9. A SMALL SYSTEM DESIGN
Imagine the software system at a library. The main task should be keeping track of all books and the status of each book (out of loan, in stock, etc.). In this example we model and explain the system with a sequence of UML diagrams. The first step is the system analysis, and the input to the analysis is the specification of the requirements. In an object-oriented and UML approach, the requirements are identified with the help of identifying use cases of the system. This is done with UML use case diagrams. The main goal of this part is to identify the most characteristic use cases and the actors (i.e., people or other types of "users" of the system). In Figure 13, a UML use case diagram shows examples of how the system is intended to be used. To the left in the figure we have identified an actor Librarian. The librarian can use the system in four ways. Firstly, it is possible to add a new book to the system, visualized by the AddNewBook use case. Secondly, if a book is out of loan, it is possible for the librarian to make a reservation for a customer; this is shown by the ReserveBook use case. It should also be possible to loan books, i.e., the LoanBook use case.
Finally, the ReturnBook use case shows that a librarian should also be able to return a book to the system when a customer hands in a previously borrowed book.

Figure 13, a UML use case diagram, visualizing how a librarian can use a library booking system

The next step in the analysis and design process is to identify the objects the system deals with. From the problem description and the use case diagrams it is easy to identify the objects involved in the system: the library, the books, and the librarian. In addition to this, we also have the librarian system itself, which we can specify as a set of services. We specify the objects by specifying the classes with their attributes and the services (methods) they provide. In UML this is done with a class diagram. This diagram also includes specifications of relations between the objects. In Figure 14, a UML class diagram shows the four classes Book, Library, LoginService and Librarian, and how they are related to each other. Internal attributes and methods are also shown in the figure. The UML class diagram shows the static characteristics (i.e., the structure) of the system. The dynamic behavior of the system can be described by state chart diagrams and interaction diagrams. State chart diagrams are used to express the states of the system, or internal states inside classes, and the transitions from state to state triggered by particular events. State chart diagrams are variations of finite-state machines, a standard method used in software design and programming. Figure 15 shows a UML state chart diagram with the internal states of the class Book. The diagram shows that a book can be in one of the four states In stock, Out of loan, Reserved and out of loan, and Reserved and in stock. The starting point for a new book is marked with the black dot to the left in the figure. When a new book is entered in the system it enters the In stock state; when the method loanBook is executed the book enters the Out of loan state, etc. This diagram has no guards or actions specified, and all events are actually methods in the Book class. This might not be the most common usage of state chart diagrams, but it fits quite nicely in this design.

Figure 15, UML state chart diagram for the class Book

UML sequence diagrams are used to show interaction between different objects in a time sequence. The vertical lines denote time, the rectangles denote the appearance of the objects, and the arrows denote invocations of services of particular objects, or interaction between the objects. In Figure 16, the LoanBook use case is further developed with a UML interaction diagram, showing the sequence of interactions required to solve the use case. Firstly, the Librarian has to use the login() method provided by the LoginService class. Then a reference to the book is required, and the librarian has to utilize the getBookByTitle() method provided by the Library class. Finally, the loaning process can be accomplished by the loanBook() method provided by the class representing a book.

Figure 16, a UML sequence diagram, solving the loan book use case
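As a final step, the following sketch shows how this design could be turned into code: the Book class implements the Figure 15 state machine (its events are methods), and Library provides the lookup used in the Figure 16 sequence. Method names follow the figures as described in the text; the signatures and the reserved-state transitions are assumptions.

```python
class Book:
    def __init__(self, title):
        self.title = title
        self.state = "In stock"          # start state for a new book

    def loanBook(self):
        if self.state == "In stock":
            self.state = "Out of loan"
        elif self.state == "Reserved and in stock":
            self.state = "Reserved and out of loan"   # assumed transition

    def returnBook(self):
        if self.state == "Out of loan":
            self.state = "In stock"
        elif self.state == "Reserved and out of loan":
            self.state = "Reserved and in stock"      # assumed transition

    def reserveBook(self):
        if self.state == "Out of loan":
            self.state = "Reserved and out of loan"

class Library:
    def __init__(self, books):
        self._books = books

    def getBookByTitle(self, title):
        return next(b for b in self._books if b.title == title)

# The Figure 16 interaction, after a successful login():
library = Library([Book("Applying UML and Patterns")])
book = library.getBookByTitle("Applying UML and Patterns")
book.loanBook()
assert book.state == "Out of loan"
```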
{"Source-Url": "http://www.idt.mdh.se/kurser/cd5490/2011/lectures/UML-overview%202005.pdf", "len_cl100k_base": 6418, "olmocr-version": "0.1.53", "pdf-total-pages": 32, "total-fallback-pages": 0, "total-input-tokens": 48584, "total-output-tokens": 7516, "length": "2e12", "weborganizer": {"__label__adult": 0.00032782554626464844, "__label__art_design": 0.0006461143493652344, "__label__crime_law": 0.00030112266540527344, "__label__education_jobs": 0.003726959228515625, "__label__entertainment": 5.3822994232177734e-05, "__label__fashion_beauty": 0.00013530254364013672, "__label__finance_business": 0.0002112388610839844, "__label__food_dining": 0.0002627372741699219, "__label__games": 0.0004935264587402344, "__label__hardware": 0.0005202293395996094, "__label__health": 0.0003597736358642578, "__label__history": 0.00032591819763183594, "__label__home_hobbies": 7.49826431274414e-05, "__label__industrial": 0.00035452842712402344, "__label__literature": 0.00035452842712402344, "__label__politics": 0.00021088123321533203, "__label__religion": 0.0004658699035644531, "__label__science_tech": 0.0122222900390625, "__label__social_life": 8.726119995117188e-05, "__label__software": 0.00887298583984375, "__label__software_dev": 0.96923828125, "__label__sports_fitness": 0.00026726722717285156, "__label__transportation": 0.00043129920959472656, "__label__travel": 0.0002046823501586914}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30774, 0.03285]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30774, 0.84199]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30774, 0.93803]], "google_gemma-3-12b-it_contains_pii": [[0, 158, false], [158, 1184, null], [1184, 1551, null], [1551, 3205, null], [3205, 3505, null], [3505, 5181, null], [5181, 6533, null], [6533, 7973, null], [7973, 9493, null], [9493, 10863, null], [10863, 12523, null], [12523, 12523, null], [12523, 13794, null], [13794, 14529, null], [14529, 16248, null], [16248, 17205, null], [17205, 18500, null], [18500, 18717, null], [18717, 19473, null], [19473, 20854, null], [20854, 22337, null], [22337, 23665, null], [23665, 23709, null], [23709, 24944, null], [24944, 25748, null], [25748, 25953, null], [25953, 27346, null], [27346, 28223, null], [28223, 29384, null], [29384, 30201, null], [30201, 30267, null], [30267, 30774, null]], "google_gemma-3-12b-it_is_public_document": [[0, 158, true], [158, 1184, null], [1184, 1551, null], [1551, 3205, null], [3205, 3505, null], [3505, 5181, null], [5181, 6533, null], [6533, 7973, null], [7973, 9493, null], [9493, 10863, null], [10863, 12523, null], [12523, 12523, null], [12523, 13794, null], [13794, 14529, null], [14529, 16248, null], [16248, 17205, null], [17205, 18500, null], [18500, 18717, null], [18717, 19473, null], [19473, 20854, null], [20854, 22337, null], [22337, 23665, null], [23665, 23709, null], [23709, 24944, null], [24944, 25748, null], [25748, 25953, null], [25953, 27346, null], [27346, 28223, null], [28223, 29384, null], [29384, 30201, null], [30201, 30267, null], [30267, 30774, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30774, null]], 
"google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30774, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30774, null]], "pdf_page_numbers": [[0, 158, 1], [158, 1184, 2], [1184, 1551, 3], [1551, 3205, 4], [3205, 3505, 5], [3505, 5181, 6], [5181, 6533, 7], [6533, 7973, 8], [7973, 9493, 9], [9493, 10863, 10], [10863, 12523, 11], [12523, 12523, 12], [12523, 13794, 13], [13794, 14529, 14], [14529, 16248, 15], [16248, 17205, 16], [17205, 18500, 17], [18500, 18717, 18], [18717, 19473, 19], [19473, 20854, 20], [20854, 22337, 21], [22337, 23665, 22], [23665, 23709, 23], [23709, 24944, 24], [24944, 25748, 25], [25748, 25953, 26], [25953, 27346, 27], [27346, 28223, 28], [28223, 29384, 29], [29384, 30201, 30], [30201, 30267, 31], [30267, 30774, 32]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30774, 0.15278]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
6d54419c279b78c67fabb587df1de792ae28e47f
ABSTRACT
The current trend in mutation testing is to reduce the great testing effort that it involves, but this should be based on well-studied cost reduction techniques. Evolutionary Mutation Testing (EMT) aims at generating a reduced set of mutants by means of an evolutionary algorithm, which searches for potentially equivalent and difficult-to-kill mutants to help improve the test suite. However, there is little evidence of its applicability to contexts beyond WS-BPEL compositions. This study explores its performance when applied to C++ object-oriented programs, thanks to a newly developed system, GiGAn. The conducted experiments reveal that EMT shows stable behavior across all the case studies, where the best results are obtained when a low percentage of the mutants is generated. They also support previous studies of EMT when compared to random mutant selection, reinforcing its use for the goal of improving the fault detection capability of the test suite.

CCS Concepts
• Software and its engineering → Software testing and debugging; Search-based software engineering;

Keywords
mutation testing; evolutionary computation; genetic algorithm; object orientation; C++.

1. INTRODUCTION
A test suite is developed in order to reveal possible faults in a system under test. Mutation testing provides a means for measuring its ability to detect coding errors. In this technique, new versions of the code with injected faults (mutants) are used to stress the test suite. The outputs after their execution against the test suite should differ from the expected ones in all cases (the mutants should be killed). Undetected, or alive, mutants reflect weaknesses in the test suite. The goals when a system undergoes a mutation testing process are to (1) evaluate to what extent the test suite is able to identify faults and (2) improve the test suite with new test cases based on the inspection of alive mutants. Several techniques have been suggested in the past to ease the cost of applying mutation testing [11]. While most of them are useful for (1), Evolutionary Mutation Testing (EMT) [8] was recently presented with a focus on (2). EMT proposes the generation of a subset of mutants through an evolutionary algorithm. That subset should contain a high proportion of the mutants that can guide the creation of new test cases (called strong mutants): potentially equivalent mutants (currently not detected) and difficult-to-kill mutants (detected by a single test case that kills no other mutant). EMT was successfully put into practice on WS-BPEL compositions [8], but it has not been assessed in other domains since. As a result, its applicability to other contexts is an open question. In this paper we analyze the performance of this technique on object-oriented systems. To that end, GiGAn has been developed to make use of a genetic algorithm (GA) in connection with MuCPP [6], a C++ mutation tool implementing operators at the class level. The evaluation in this paper replicates existing studies, but in an object-oriented context, revealing that the technique effectively outperforms the random selection of mutants, as a smaller percentage of mutants is needed to reach a given percentage of strong mutants. Another interesting finding is that the performance of EMT varies only slightly across case studies and percentages of mutants produced. The paper is structured as follows. Section 2 describes the fundamental aspects of EMT. Section 3 explores the details of its application to C++ object-oriented applications through GiGAn.
Section 4 presents the empirical evaluation carried out and discusses the results. The last section presents conclusions and future research lines.

2. EVOLUTIONARY MUTATION TESTING
2.1 Definition
Evolutionary Mutation Testing [8] proposes the use of an evolutionary algorithm to produce only a subset of the full set of mutants in order to reduce the cost. This algorithm works under the assumption that some mutants have greater potential than others in guiding the tester to the design of new test cases with high fault detection capability; these are referred to as strong mutants. The generation of strong mutants is favored by the evolutionary search, thereby reducing the number of mutants while retaining the power to refine the test suite. Two kinds of mutants are regarded as strong mutants:
- Potentially equivalent: mutants not detected by the initial test suite. These mutants either lead to the generation of new test cases or turn out to be equivalent once they are inspected.
- Difficult to kill: mutants detected by only one test case, which detects no other mutants.
Ideally, all potentially equivalent mutants help improve the test suite with new test cases. However, some of those mutants may turn out to be equivalent, as this is an undecidable problem and they cannot be discarded automatically.

2.2 Fitness function
Although the algorithm favors the generation of strong mutants, each of the mutants receives a fitness. The fitness of a mutant decreases as (a) the number of test cases detecting the mutant increases and (b) the number of mutants killed by those test cases increases. Therefore, to calculate the fitness function, every mutant has to be executed against every test case. Equation 1 shows how the fitness of mutant $I$ is computed with respect to test suite $S$, where $M$ is the number of mutants, $T$ is the number of test cases in $S$, and $m_{ij} = 1$ when mutant $i$ is detected by test case $j$, and 0 otherwise:

$$\text{Fitness}(I, S) = M \times T - \sum_{j=1}^{T} \left( m_{Ij} \times \sum_{i=1}^{M} m_{ij} \right) \quad \text{(Equation 1)}$$

According to this fitness function, if the mutant $I$ is:
- Potentially equivalent, it receives the maximum value $(M \times T)$ because $m_{Ij} = 0$ for all $j$.
- Difficult to kill, it receives a fitness of $M \times T - 1$ because $m_{Ij} = 0$ for all $j$ except for one test case $z$, which kills no other mutants ($m_{Iz} = 1$ and $\sum_{i=1}^{M} m_{iz} = 1$).
- Weak, it receives a fitness lower than $M \times T - 1$. The more test cases kill $I$, the lower the fitness; also, the more mutants those test cases kill, the lower the fitness.
As a final remark, invalid mutants (i.e., mutants that infringe the language rules and cannot be executed) are neither assigned a fitness nor affect the fitness computation of the valid mutants.

2.3 Individuals
In EMT, the mutants are the individuals of the GA and, as such, they must be uniquely identified. To this end, each mutant is encoded with three fields: operator (identifier of the mutation operator), location (order in the code of the mutants of an operator) and attribute (variant inserted in a location). As an illustration, consider the information in Figure 1. The mutant depicted in the figure is identified as:
- Operator = 1: the first operator is applied.
- Location = 2: the second relational operator in the code is mutated.
- Attribute = 3: the relational operator is changed by the third variant in the predefined set of attributes.
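Putting Sections 2.2 and 2.3 together, the following sketch implements Equation 1 over a kill matrix and shows the three-field encoding; the kill matrix data is made up for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mutant:
    operator: int   # identifier of the mutation operator
    location: int   # order of the mutated location in the code
    attribute: int  # variant inserted at that location

def fitness(I, kill):
    """Equation 1: kill[i][j] == 1 iff valid mutant i is detected by test j."""
    M, T = len(kill), len(kill[0])
    return M * T - sum(
        kill[I][j] * sum(kill[i][j] for i in range(M))
        for j in range(T)
    )

# 3 mutants x 2 test cases (made-up data):
kill = [
    [0, 0],   # mutant 0: potentially equivalent
    [1, 0],   # mutant 1: killed only by test 0, which kills nothing else
    [0, 1],   # mutant 2: killed only by test 1
]
assert fitness(0, kill) == 6   # maximum, M x T
assert fitness(1, kill) == 5   # difficult to kill, M x T - 1
assert fitness(2, kill) == 5
```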
2.4 Genetic algorithm
The GA produces several generations of mutants during its execution, directed by the search for strong mutants through the fitness function. The algorithm performs two main steps in each generation:
1. Generation of mutants:
- First generation: mutants are generated randomly.
- Next generations: mutants are generated both randomly and with reproductive operators.
2. Execution of the mutants generated.
The fitness assigned to the generated mutants is computed with respect to:
- First generation: the mutants in that generation.
- Next generations: all the mutants generated so far. This is achieved by storing a second population with the mutants created in previous generations, which helps the fitness function to produce better estimations (see [8] for an example).
Regarding the reproductive operators, the GA can apply mutation operators and crossover operators to individuals from the previous generation to create new ones (the roulette wheel method is used to select the mutants):
- Mutation operators: one of the three fields (operator, location or attribute) is mutated.
- Crossover operators: starting from two parents (operator1, location1, attribute1) and (operator2, location2, attribute2), one of these crossover points is selected:
  - Point 1: generates (operator1, location2, attribute2) and (operator2, location1, attribute1).
  - Point 2: generates (operator1, location1, attribute2) and (operator2, location2, attribute1).
We should note that a normalization of the fields prevents invalid representations of mutants from being produced. Further details on the essentials of the technique are included in the paper by Domínguez-Jiménez et al. [8].
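The reproductive operators are straightforward to express over the three-field encoding. The sketch below follows the description above; the field ranges and the random choices are assumptions, and the normalization step mentioned in the text is omitted.

```python
import random

def mutate(ind, n_ops, n_locs, n_attrs):
    """Randomly replace one of the three fields of (operator, location, attribute)."""
    op, loc, attr = ind
    field = random.randrange(3)
    if field == 0:
        op = random.randint(1, n_ops)
    elif field == 1:
        loc = random.randint(1, n_locs)
    else:
        attr = random.randint(1, n_attrs)
    return (op, loc, attr)

def crossover(p1, p2, point):
    """Point 1 swaps everything after the operator; point 2 swaps the attribute."""
    (o1, l1, a1), (o2, l2, a2) = p1, p2
    if point == 1:
        return (o1, l2, a2), (o2, l1, a1)
    return (o1, l1, a2), (o2, l2, a1)

assert crossover((1, 2, 3), (4, 5, 6), point=1) == ((1, 5, 6), (4, 2, 3))
assert crossover((1, 2, 3), (4, 5, 6), point=2) == ((1, 2, 6), (4, 5, 3))
```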
3. GIGAN: EMT IN OBJECT-ORIENTED SYSTEMS FOR C++
3.1 Class mutation operators for C++
Most of the studies in the literature covering issues related to the cost of mutation testing have been carried out with traditional operators [2]. However, it remains unclear whether the same benefits apply to operators at the class level. Studies on class operators [14] consistently show that they exhibit different properties when compared to traditional operators: they generate fewer mutants, but a higher percentage of them are equivalent. In particular, EMT has only been applied to WS-BPEL in the past, and it is unknown to what extent the reduction achieved in that study holds in other contexts. The GA described in Section 2.4 is implemented in GAmera [7], and this tool makes use of MuBPEL to analyze, generate and execute mutants for WS-BPEL compositions. Recently, the mutation tool MuCPP [6] has been developed, including a set of class operators for C++ programs. In order to reuse the same GA, we developed a new tool, GiGAn, to connect the algorithm in GAmera and the mutation tool MuCPP. In the experiments in this paper, the same list of 31 operators shown in the study by Delgado-Pérez et al. [6] is applied.

3.2 GiGAn
Figure 2 displays how GiGAn connects MuCPP and GAmera to apply EMT to C++ object-oriented systems. As can be seen, GiGAn acts as a bridge between both tools, translating the commands that each of the tools uses and mapping mutant identifiers so that MuCPP and GAmera can work together. Moreover, GiGAn presents two main changes with respect to the original description of the technique, which can impact the results:
- Attribute: Some class operators in MuCPP [6] produce multiple mutations from a single location. The available mutations depend on the context, so the range of the attribute field is unknown in advance. As a result, the tool treats each of these mutations as its own location, and consequently all mutants have attribute = 1. Thus, we limit the reproductive operators to mutation of the operator and location fields and to point 1 crossover (see Section 2.4), as the rest of the operators would result in the same mutant being created.
- Mutants in different source files: MuCPP allows several source files of a project to be analyzed in the same execution. As a result, mutants from different files can be generated when the location field is changed to produce new individuals from previous ones (notice that in GiGAn the location of each mutation in each file is known because the files are sorted beforehand). Even though classes in a project often use a similar design pattern, it is possible that the behavior of a mutation operator varies for different classes, especially when they belong to different source files.

4. EMPIRICAL EVALUATION
4.1 Research Questions
This empirical study investigates the relative effectiveness of the use of EMT in real C++ programs using object orientation. In particular, this experiment aims to know (1) how many mutants EMT needs to generate to find different percentages of strong mutants and (2) whether this technique produces better results when compared to random mutant selection. Thus, these are the research questions to answer:
RQ1: How does EMT behave when searching for different percentages of strong mutants in C++ object-oriented systems?
RQ2: Does EMT outperform the random selection of mutants?

4.2 Case Studies
This study includes four open-source programs, which were chosen because they make use of C++ object-oriented facilities and are distributed with a test suite. As can be seen in Table 1, MuCPP generates a different number of mutants for these applications. Thanks to a previous execution of all the mutants, we also know how many mutants are strong with the current test suite (used as a ground truth to compute our results).

Table 1: Mutant distribution and number of test cases in the applications under study
<table>
<thead>
<tr> <th>Mutants</th> <th>TCL</th> <th>DPH</th> <th>TXM</th> <th>DOM</th> <th>Total</th> </tr>
</thead>
<tbody>
<tr> <td>Total</td> <td>137</td> <td>219</td> <td>614</td> <td>1,146</td> <td>2,116</td> </tr>
<tr> <td>Valid</td> <td>135</td> <td>208</td> <td>433</td> <td>681</td> <td>1,457</td> </tr>
<tr> <td>Strong (%)</td> <td>33.3%</td> <td>49.5%</td> <td>36.7%</td> <td>51.1%</td> <td>45.0%</td> </tr>
<tr> <td>Test cases</td> <td>17</td> <td>61</td> <td>57</td> <td>46</td> <td>181</td> </tr>
</tbody>
</table>
TCL = Matrix TCL, DPH = Dolphin, TXM = Tinyxml2, DOM = QtDOM

4.3 Experiment design
EMT needs to be configured with several parameters:
- Population size: the number of individuals in each generation, expressed as a percentage of the number of mutants in each program.
- Individuals generated randomly and by reproductive operators: since all the mutants in a generation are produced in one of these two ways (see Section 2.4), the two percentages must sum to 100%.
- Mutation and crossover probability: the probability that mutation or crossover operators are used when a mutant is generated through reproductive operators. As in the previous item, these must sum to 100%.

The values selected for these parameters (see Table 2) are the ones found to be optimal for this algorithm in the experiments where the technique was presented [8].

In these experiments, we want to measure the ability of EMT to find strong mutants. Thus, all mutants were generated and executed against all test cases in a previous execution to maintain a record of the strong mutants in the analyzed programs. We established several stopping conditions for the algorithm: finding 30%, 45%, 60%, 75% and 90% of the set of strong mutants. EMT was then run 30 times with different seeds for each of the five conditions, so the data shown are obtained from the results of these 30 executions. In order to answer RQ2, we use a random strategy where mutants are selected one by one until the stopping condition is reached. Again, this random technique was executed 30 times and several statistics were calculated.

4.4 Results and discussion

Table 3 contains, individually for each program, the average, median, minimum, maximum and standard deviation of the results of the 30 executions for each of the 5 stopping conditions. The numbers in this table thus represent the percentage of mutants that EMT needs to generate before finding 30%, 45%, 60%, 75% and 90% of the set of strong mutants in these applications. We can observe that, in all the programs, the percentage of necessary mutants increases as the stopping condition becomes more demanding. Figure 3 focuses on the average, allowing us to see the tendency of this increase in each application. Given that the stopping conditions were selected in 15% increments, the figure shows that the upward tendency is quite stable, not only between conditions but also among applications. Still, the increment in the percentage of mutants generated often grows slightly as the stopping condition becomes stricter. Taking TXM to illustrate this fact, on average EMT needs to produce 12.5% more mutants to find 45% of the strong mutants than to find 30%; this difference increases for the conditions 45%-60% (15.2%) and 75%-90% (19.4%).
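Among the statistics computed over these 30 executions is Vargha and Delaney's $A_{12}$ effect size, reported in Table 4 below. The following is a minimal sketch of its standard computation; pairing the two samples of 30 runs this way is an assumption about the analysis, not a detail given in the paper.

```python
def vargha_delaney_a12(xs, ys):
    """A12 effect size: the probability that a value drawn from xs is
    larger than a value drawn from ys, counting ties as one half.
    A value of 0.5 means no difference between the two samples."""
    greater = sum(1 for x in xs for y in ys if x > y)
    ties = sum(1 for x in xs for y in ys if x == y)
    return (greater + 0.5 * ties) / (len(xs) * len(ys))
```

For example, comparing the 30 percentages of mutants generated by the random strategy against those of EMT, values well above 0.5 indicate that the random strategy tends to need more mutants.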
The standard deviation does not follow a pattern and is quite low, except for TCL, where it is higher.

Figure 3: Average percentage of mutants generated in the programs to reach the stopping conditions.

Table 4: Results of the statistical tests (p-values) and Vargha and Delaney's $A_{12}$ effect size

<table>
<thead>
<tr> <th>Program</th> <th>75% p-value</th> <th>$A_{12}$</th> <th>90% p-value</th> <th>$A_{12}$</th> </tr>
</thead>
<tbody>
<tr> <td>TCL</td> <td>$2.26 \times 10^{-03}$</td> <td>0.711</td> <td>$1.24 \times 10^{-03}$</td> <td>0.734</td> </tr>
<tr> <td>DPH</td> <td>$4.55 \times 10^{-07}$</td> <td>0.848</td> <td>$2.72 \times 10^{-06}$</td> <td>0.829</td> </tr>
<tr> <td>TXM</td> <td>$7.14 \times 10^{-21}$</td> <td>0.996</td> <td>$1.65 \times 10^{-06}$</td> <td>0.937</td> </tr>
<tr> <td>DOM</td> <td>$4.31 \times 10^{-12}$</td> <td>0.962</td> <td>$1.71 \times 10^{-05}$</td> <td>0.816</td> </tr>
</tbody>
</table>

**RQ1:** How does EMT behave when searching for different percentages of strong mutants in C++ object-oriented systems? The GA behaves in a very stable way for all the tested programs. Additionally, the proportion of strong mutants found by EMT generally decreases slightly as the number of mutants generated grows.

**RQ2:** Does EMT outperform the random selection of mutants? Yes. EMT yields better results than the random strategy with high confidence. The difference between the two selection strategies is on average 6.17% and 3.80% when finding 75% and 90% of the strong mutants in the analyzed programs, respectively.

4.5 Threats to validity

The representativeness of the programs under study is a threat to the validity of the results. To counter this threat, we selected four applications in which (a) different mutation operators were applied and (b) those operators generated a different number of mutants. Moreover, the number of strong mutants varies among those programs.

The GA can show different behavior depending on its configuration. Since the best configuration for a particular program is unknown to the user in advance, we set the same parameters for all the programs. Namely, we used the configuration found to be optimal in previous studies, but other parameters could yield worse or better performance.

EMT selects and generates new individuals on a random basis. As such, we executed the technique 30 times in order to avoid results biased by a single execution.

5. RELATED WORK

There exist several techniques to alleviate the cost of mutation testing by reducing the number of mutants generated [11], such as mutant sampling [4] (which selects a subset of the mutants randomly), selective mutation [2] (which selects a subset of the mutation operators) and high order mutation (HOM) [10] (which combines more than a single fault into a mutant). EMT [8] was proposed recently for test suite improvement and assessed with 3 WS-BPEL compositions. The technique was later extended to generate HOMs [3]. In this work, EMT has been shown to be better than mutant sampling at finding strong mutants in 4 different C++ object-oriented programs.

Silva et al. [15] surveyed the studies applying search-based techniques in the scope of mutation testing. However, most of these works are devoted to test data generation, even for object-oriented software [9], and only a few to mutant generation (where EMT is classified).
Adamopoulos et al. [1] were the first to use a GA for the co-evolution of mutant and test suite populations, favoring difficult-to-kill mutants and penalizing equivalent mutants (unlike EMT), while Oliveira et al. [5] also studied this approach but described a new representation with new genetic operators. Other studies in the literature have focused on using a GA to generate interesting HOMs [10, 12]. Finally, Schwarz et al. [13] leveraged a GA to find mutations that are not detected by the test suite, have a high impact and are spread throughout the tested code.

6. CONCLUSIONS

The experiments in an object-oriented context using GiGAn confirm the promising results yielded by EMT in previous research, supporting this cost reduction technique as a useful mechanism for selecting mutants with the goal of improving the test suite. The evaluation reveals high stability among the results for the tested programs, and little variation in the percentage of strong mutants found as the number of mutants increases (the best results are obtained with low percentages of mutants generated). Additionally, this study has shown EMT to be different from random mutant selection, with better results in all case studies with high confidence. The gap between the two strategies was, however, greater in the experiments with WS-BPEL.

Future work can be divided into two different lines. Firstly, we would like to simulate a real process where the test suite is improved with new test cases. Instead of stopping when a percentage of strong mutants is found, in that experiment the algorithm would stop when a percentage of new test cases is reached. In this way, we could evaluate how much EMT really helps us improve the test suite. Secondly, studying the impact of mutations on code coverage can help isolate equivalent mutants [12]. This information could assist in lowering the probability of selecting equivalent mutants.

7. REFERENCES
{"Source-Url": "https://research.aston.ac.uk/portal/files/21523467/paper.pdf", "len_cl100k_base": 4936, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22470, "total-output-tokens": 6138, "length": "2e12", "weborganizer": {"__label__adult": 0.0003516674041748047, "__label__art_design": 0.0002419948577880859, "__label__crime_law": 0.0003490447998046875, "__label__education_jobs": 0.0004329681396484375, "__label__entertainment": 4.38690185546875e-05, "__label__fashion_beauty": 0.0001474618911743164, "__label__finance_business": 0.00015342235565185547, "__label__food_dining": 0.00029349327087402344, "__label__games": 0.00045108795166015625, "__label__hardware": 0.0006184577941894531, "__label__health": 0.0004627704620361328, "__label__history": 0.00015461444854736328, "__label__home_hobbies": 6.729364395141602e-05, "__label__industrial": 0.00028634071350097656, "__label__literature": 0.0001900196075439453, "__label__politics": 0.000202178955078125, "__label__religion": 0.00035953521728515625, "__label__science_tech": 0.006969451904296875, "__label__social_life": 7.390975952148438e-05, "__label__software": 0.00406646728515625, "__label__software_dev": 0.9833984375, "__label__sports_fitness": 0.0003039836883544922, "__label__transportation": 0.0003604888916015625, "__label__travel": 0.00017702579498291016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24473, 0.04721]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24473, 0.40644]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24473, 0.89974]], "google_gemma-3-12b-it_contains_pii": [[0, 3943, false], [3943, 8178, null], [8178, 12768, null], [12768, 16422, null], [16422, 19093, null], [19093, 24473, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3943, true], [3943, 8178, null], [8178, 12768, null], [12768, 16422, null], [16422, 19093, null], [19093, 24473, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24473, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24473, null]], "pdf_page_numbers": [[0, 3943, 1], [3943, 8178, 2], [8178, 12768, 3], [12768, 16422, 4], [16422, 19093, 5], [19093, 24473, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24473, 0.152]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
b7208e6e625fdba3b1f2ea0e86cc0f89dde30baa
Regression testing techniques in a continuous integration environment - a comparison

Emanuel Eriksson mat14ee1@student.lu.se
Keiwan Mosaddegh ke2476mo-s@student.lu.se
Max Strandberg ma2536st-s@student.lu.se
Erik Stålberg er4047st-s@student.lu.se

Abstract—Regression testing is an important part of any software project, but it can be both very costly and time consuming. This is especially true in a continuous integration (CI) development environment. This paper analyses and compares four new techniques for regression testing. In the CoDynaQ method, the dispatch queue is continuously re-prioritized based on the remaining test cases' historic co-failure distributions with the ones already executed. The RETECS method is a new method which uses reinforcement learning and neural networks to prioritize and select test cases. ROCKET is a test case prioritization approach where the test cases are prioritized by their historical failure data and execution time. Finally, the Bloom filter method improves regression testing by using the Bloom filter data structure to filter out tests that fail only once, and never again. The methods all show promise, and each of them outperforms its respective benchmarks. CoDynaQ also has the possibility to be combined with any of the others, or with yet another method. We, however, find it unlikely that any of them will be widely applied anytime soon, as these types of academic results have generally proven slow to propagate into industrial practice.

I. INTRODUCTION

The purpose of this report is to dive deep into the field of regression testing in a continuous development environment. The report examines a handful of different methods, such as Bloom filters and ROCKET, and compares the results these methods produce in the context of continuous integration.

A. Continuous integration development environment

The context of this report is that of a continuous integration (CI) development environment, a method of software development that has received a lot of attention in recent years. In general, the CI approach differs from traditional software development in that changes in software are integrated into the main code base as quickly as possible, to prevent large divergences from impacting merge stability. There are many advantages to using CI, but they are not within the scope of this paper. However, working within a CI environment is not without risk, as developers lose the ability to safely experiment in development branches separate from the main production branch. Continuous integration, especially when combined with continuous deployment, may be a risky endeavor - any issues and bugs need to be located and addressed as soon as possible. Continuously ensuring the stability and correctness of the main production branch is essentially impossible to do through manual means. Therefore, large-scale automatic test suites are a prerequisite for CI to function as intended.

As software evolves and grows over time, the need for comprehensive regression testing increases. According to Kerzazi and Khomh, as merge intervals and release cycles grow shorter or disappear entirely, testing activities are the greatest time bottleneck [2]. As a consequence, regression testing in continuous integration must be efficient and cost-effective. Simply executing all regression tests, or only tests directly impacted by a change in the code base, has proven to be a weak and unsustainable approach.
There exist a number of new regression testing techniques, discussed in this paper, which have shown promise in the context of continuous integration.

B. Regression test selection

Since regression testing can be very costly and time consuming, it is important to choose wisely which tests to run. This is especially true for CI environments, where development cycles are very short. In other contexts, regression tests may be initiated at the end of the work day and the results received in the morning. This would, however, be too disruptive to work efficiently in a CI environment. Therefore, each regression test suite has to be chosen carefully, so as to minimize the time consumed. This is called the regression test selection (RTS) process. It is partly done by removing tests with a low failure probability, but also by removing tests that overlap already selected test cases.

C. Test case prioritization

After the RTS process it is time to prioritize the tests. The purpose of test case prioritization (TCP) is to reveal failures as early as possible. This lets the tester proceed to fix the revealed bugs, or stop the execution of the regression test suite and return to the drawing board.

D. Research Question

RQ1: How do the regression testing techniques discussed in this report compare, in the context of a continuous integration development environment?

II. DESCRIPTION

1) ROCKET: The authors Marijan, Gotlieb, and Sen present the test prioritization approach ROCKET in their paper [5]. The aim of ROCKET is to effectively reduce the time required to test, while maintaining a high fault detection rate. To achieve this, the authors prioritize test cases based on their previous number of failed test executions. The historical failure data is translated into a so-called failure weight. If two test cases have the same failure weight, their respective execution times are taken into account, and the test case with the shortest execution time is prioritized.

ROCKET requires four inputs: the set of test cases to prioritize $S = \{S_1, S_2, ..., S_n\}$; their corresponding execution times; each test case's historical failure data; and the total execution time limit for the test suite $T_{\text{max}}$. Initially, all test cases receive a priority of 0. A failure matrix $MF$ is constructed from the test cases' historical failure data (equation 1).

$$MF[i,j] = \begin{cases} 1, & \text{if } S_j \text{ passed in the } (\text{current} - i) \text{ execution} \\ -1, & \text{if } S_j \text{ failed in the } (\text{current} - i) \text{ execution} \end{cases}$$ (1)

A failure that occurred $i$ executions before the current one is assigned an impact weight $w_i$ that reflects the level of impact of the failure. The value of the weight is hence based on how many executions ago a test case failed, as shown in equation 2.

$$w_i = \begin{cases} 0.7, & \text{if } i = 1 \\ 0.2, & \text{if } i = 2 \\ 0.1, & \text{if } i \geq 3 \end{cases}$$ (2)

At this point, the cumulative priority for a test case $S_j$ can be calculated by summing, over the execution history, each value from the failure matrix for that test case multiplied by the corresponding impact weight: $P_{S_j} = \sum_i w_i \cdot MF[i,j]$. As a result, every test case has a priority value, where a lower value corresponds to a higher priority. With the historical failure data taken into account, an evaluation of the execution times remains.
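Before the execution-time adjustment of equation 3 is described, the following is a minimal sketch of the history-based part of ROCKET (equations 1 and 2). The data layout and names are assumptions; `mf[i - 1][j]` holds $MF[i,j]$ as defined above.

```python
# Impact weights of equation 2: w_1 = 0.7, w_2 = 0.2, w_i = 0.1 for i >= 3.
def impact_weight(i):
    return {1: 0.7, 2: 0.2}.get(i, 0.1)

def cumulative_priorities(mf, history_len, num_tests):
    """P_{S_j} = sum_i w_i * MF[i][j]; a lower value means a higher
    priority, since failures contribute -1 to the matrix."""
    return [
        sum(impact_weight(i) * mf[i - 1][j]
            for i in range(1, history_len + 1))
        for j in range(num_tests)
    ]
```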
First, the test cases are put into different classes based on their calculated priority value. All test cases in the same class share the same priority value $t$, which is increased by 1 for each successive class. ROCKET then checks the execution time of each test case $T_{e_i}$ and assigns a new priority value as shown in equation 3:

$$P_{S_i} = \begin{cases} t + 1, & T_{e_i} \geq T_{\text{max}} \\ P_{S_i} + \frac{T_{e_i}}{T_{\text{max}}}, & \text{otherwise} \end{cases}$$ (3)

As mentioned, the lowest priority value means the highest execution priority. If a test case is put in the first class (by having the highest total impact weight) and is assigned the lowest additional priority value (by having the shortest execution time), it has the highest priority.

Fig. 1. Number of detected faults

Fig. 2. Test execution time (min)

ROCKET was derived from, and tested in, the context of an industrial video conferencing software system. The system consists of 100 test cases, with an average execution time of 30 minutes per test case. This means that a test session with no test case selection or prioritization would require a minimum of 2 days. When measuring the performance of ROCKET, the authors compared their approach against manual prioritization by test engineers. The results of the comparison are presented in figures 1 and 2. The former shows the number of detected faults compared to the manually prioritized test cases. The latter shows how the test execution times compare between the two approaches. For the first tests, 20% of the test suite was to be prioritized. The test was then repeated, increasing the size of the test suite in 20% increments.

ROCKET-prioritized test cases initially outperformed manually prioritized test cases: three more faults were detected in 40% less time. The progression in figure 2 shows how the difference in test execution time eventually reaches 0. This is expected, as the complete test suite is executed in both cases, and the act of prioritization becomes inconsequential. However, ROCKET-prioritized test cases consistently execute in less time, and overall detect more faults. Additionally, the cost-cognizant weighted Average Percentage of Faults Detected (APFDc) was used to measure the effectiveness of ROCKET, compared to manual prioritization and random ordering of test cases. APFDc rewards test case orders proportionally to their rate of units of fault severity detected per unit test cost. In the case of ROCKET, the cost is determined by the execution time of the test case. ROCKET performed best, receiving a score of 17.09, compared to 15.81 and 13.85 for manual and random ordering, respectively.

2) RETECS: RETECS is a new method used in continuous development environments to perform regression test selection and prioritization. It performs these tasks through reinforcement learning and neural networks, analyzing each test case's failure history, execution time, and most recent execution [7]. The goal of RETECS is to reduce the time it takes for developers to receive feedback after committing new code. In the article, Spieker et al. state that, compared to other prioritization algorithms, RETECS is able to adapt to situations where test cases are added or removed. It is also capable of adapting to new test prioritization rules.
The cost of running the prioritization is negligible, as RETECS does not perform any costly computations during prioritization. RETECS itself is, as stated by Spieker et al., an application of reinforcement learning as an online-learning and model-free method for the ATCS problem. Model-free means that it has no initial knowledge or concept of the environment, and online-learning means that the method is constantly learning, even during run-time. The reinforcement learning works by letting an agent interact with its environment and select an action based on attributes such as learned policies or random exploration. The agent then receives feedback in the form of a reward, which tells the agent whether the selected action did well or not. From this feedback the agent develops and adapts learning policies regarding behaviour and action choices. Spieker et al. write that, conventionally, rewards should be negative or positive to deter or promote behaviour, respectively, based on common metrics used in TCP and TCS. This does, however, require knowledge about the whole system, something which is impossible to obtain in a CI environment. Therefore RETECS only supplies its agents with positive or zero feedback.

To evaluate RETECS, Spieker et al. used it and three other methods. The first method, called Random, acts as a baseline and is a random test case prioritization method. The second method, called Sorting, is a test case prioritization method which sorts the test cases so that recently failed cases have higher priority. The third and final method, called Weighting, is what Spieker et al. define as a naive version of RETECS, as it analyses the same data but does so with a weighted summation using equal weights. These four methods were applied to three industrial data sets: Paint Control, IOF/ROL and GSDTTSR. For the Paint Control data set, the paper concludes that within 60 CI cycles, RETECS is on par with, or better than, the other methods. Similar results are seen on the other two data sets, but with a longer adaptation phase and a smaller performance difference on IOF/ROL, and a comparable performance on GSDTTSR.

3) CoDynaQ: In their article [8], Zhu et al. present a novel approach for TCP in CI environments called CoDynaQ. More specifically, the article proposes three variants of this method, called CoDynaQSingle, CoDynaQDouble and CoDynaQFlexi. Their method is based on two ideas. The first is to make use of the co-failure distributions between test cases. The second is to re-prioritize test cases already in the dispatch queue. Every test case is assigned a priority score $s$. This score is updated according to equations 4 and 5, where $t_1$ is the test case just executed and $t_2$ is the test case whose score is being updated.

$$s_{2,new} = s_{2,old} + (P\{t_2 = \text{fail}|t_1 = \text{fail}\} - 0.5)$$ (4)

$$s_{2,new} = s_{2,old} + (P\{t_2 = \text{fail}|t_1 = \text{pass}\} - 0.5)$$ (5)

If $t_1$ and $t_2$ have never been run together before, their co-failure probability is unknown and may be set to anything between 0 and 1. In their article, however, Zhu et al. choose to set it to 0.5, so that such test cases maintain their original score. After each test case execution, the scores of the remaining test cases in the dispatch queue are updated, and the queue is re-prioritized based on the new scores. The three variants differ in how they manage their queues. The simplest variant, CoDynaQSingle, has only a single queue to which test case requests are added.
After a test case has been executed, the priority scores of all remaining test cases are updated and the queue is reordered. This, however, leads to a problem called starvation: a test case is continuously pushed back in the queue, and its result is thus further and further delayed. If the tester suspects a test case will fail, they may assign it a high original priority, but if the test case has low co-failure rates it may be substantially delayed, even indefinitely, if new tests are continuously requested.

This problem of starvation is addressed by dividing the single queue into a dispatch queue and a waiting queue. The waiting queue is a FIFO queue (first in, first out) to which new requests are added. The dispatch queue, on the other hand, is continuously re-prioritized as described earlier. In the CoDynaQDouble variant, the dispatch queue is filled to capacity when empty. By contrast, in the CoDynaQFlexi variant the dispatch queue is refilled, not when empty, but when the remaining number of test cases drops below a certain threshold.

The performance of these methods was tested on a set of internal test data from Google and another from the Chrome project. The Google data set contained 11,457 change requests, resulting in 847,057 test case executions. The Chrome data set contained 235,917 change requests, resulting in 4,487,008 test case executions. As a baseline for comparison, the methods were compared with a simple FIFO prioritization (i.e. no prioritization). Furthermore, the methods were compared to a method Zhu et al. call GOOGLETCP, but which in the original article [1] is called SelectPRETests. There, at regular time intervals, the test cases in the waiting queue are given a priority of 1 if \( t_f < W_f \) or \( t_e > W_e \), where \( t_f \) and \( t_e \) are, respectively, the times since the last failure and the last execution, and \( W_f \) and \( W_e \) are the corresponding thresholds. The waiting queue is then prioritized based on these scores and added to the back of the dispatch queue. These methods were compared to the baseline FIFO model on the following metrics, on both data sets:

- Median relative time gain until the first failure is detected
- Median relative time gain until all failures are detected
- Median proportion of delayed failures, relative to FIFO

TABLE I: Median relative time gains until the first failure and until all failures are detected, and median proportion of delayed failures (relative to FIFO), for CoDynaQSingle, CoDynaQDouble, CoDynaQFlexi and SelectPRETests on both data sets.

From these results, we clearly see that CoDynaQSingle outperforms the other methods with respect to the first-failure and all-failures metrics. CoDynaQFlexi outperforms the others with respect to the delayed-failures metric; however, the starvation problem is also captured by the all-failures metric, so it is not necessarily best with respect to actual starvation.
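Putting equations 4 and 5 together with the queue handling just described, the following is a minimal sketch of the re-prioritization step. The layout of the co-failure table, the default of 0.5 for unseen pairs, and the orientation of the scores (higher score dispatched earlier) are assumptions consistent with the description above; the waiting-queue logic of the Double and Flexi variants is omitted.

```python
def update_and_reorder(scores, dispatch_queue, executed, verdict, co_fail):
    """Apply equations 4/5 after `executed` finished with `verdict`
    ('fail' or 'pass'), then re-prioritize the dispatch queue.

    co_fail[(t1, verdict)][t2] holds P{t2 = fail | t1 = verdict},
    estimated from historical co-executions."""
    conditional = co_fail.get((executed, verdict), {})
    for t in dispatch_queue:
        # Unknown pairs default to 0.5, leaving the score unchanged.
        scores[t] += conditional.get(t, 0.5) - 0.5
    # Test cases more likely to fail are dispatched earlier.
    dispatch_queue.sort(key=lambda t: scores[t], reverse=True)
```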
4) **Bloom filter method:** In a paper published at APSEC, Kwon and Ko present and discuss a new method for regression testing in continuous integration environments [3]. The authors argue that the testing technique, here called the Bloom filter method, is a combination of both logical and technical improvements on standard regression testing procedures. The method incorporates changes to how both RTS and TCP are performed, and originates from the prevalence of one-hit-wonders amongst failed test suites. The implementation is largely built around Bloom filters, a hash-based data structure with many characteristics that prove advantageous to the method's design. In an experimental comparison with industry baseline techniques, the Bloom filter method improves the average number of failures detected by a factor of 2.23, and reduces the time taken to detect a failure by up to 42.2 hours. The experiment was performed on a sanitized data set of tests from Google.

Kwon and Ko motivate the use of Bloom filters by arguing that the technology is well suited for precisely the type of fast operations their method relies on. Essentially, a Bloom filter is a data structure built around \( k \) hash functions that each map an item to a position in an \( m \)-bit array. As each hash function is probabilistically unlikely to clash with another, the \( k \) hash functions represent \( k \) element positions in the array. The purpose of the array is to mark whether an item has been added to the data structure, which allows one to quickly check for the presence of an item within the filter. A Bloom filter produces no false negatives and, if well designed, has a negligible probability of false positives. With a time complexity of \( O(1) \) for a content check operation, Bloom filters are incredibly time-efficient. Kwon and Ko also analyze whether the added overhead of using them as part of regression testing has a negative impact on testing times, and show that it does not.

The Bloom filter method is designed around the existence of test suites that fail once, but do not fail again. These test suites, also called one-hit-wonders, make up 44% of failed test suites in the pre-submit phase of testing, and 33% of failed tests in the post-submit phase. The concept of one-hit-wonders is not unique to testing, as they have been observed in many other areas of computer science. Kwon and Ko note that the idea for the Bloom filter method originates from the over-representation of one-hit-wonders amongst cached objects in a Content Delivery Network. According to Maggs and Sitaraman, approximately three-quarters of objects accessed within a CDN are requested only once, which makes caching them extremely inefficient [4]. The Bloom filter method attempts to use this observation to improve both test suite selection and prioritization.

In practice, the Bloom filter method attempts to improve on existing window-based selection and prioritization techniques. Both RTS and TCP rely on a Bloom filter combined with a failure cache. When a test suite fails, the method checks if it exists in the Bloom filter. If it does not, it is added to the filter, but not to the failure cache; this effectively marks it as a potential one-hit-wonder. If the Bloom filter already contains the test suite, it is instead added to the failure cache. Kwon and Ko argue that in the context of regression testing, test suites are analogous to test cases, and that their method could be applied at either scale.
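The following is a minimal sketch of the one-hit-wonder logic just described, with a small Bloom filter built from SHA-256. The sizes, hash construction and function names are assumptions for illustration, not Kwon and Ko's implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m=2**20, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

seen_failures = BloomFilter()
failure_cache = set()

def on_test_suite_failure(suite_id):
    """First failure: only remember it in the filter (a potential
    one-hit-wonder). A repeated failure goes into the failure cache."""
    if suite_id in seen_failures:
        failure_cache.add(suite_id)
    else:
        seen_failures.add(suite_id)
```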
For RTS, a subset of test suites is selected as candidates for execution, based on criteria such as Last Failure Time (LFT), Last Execution Time (LET), or whether they are entirely new. From these candidates, a subset is created containing only new test suites and ones from the failure cache. The authors argue that the exclusion of one-hit-wonders does not affect the overall effectiveness of regression testing if it is done in the pre-submit phase of development. As for TCP, the Bloom filter method operates on similar principles as for selection. Essentially, test suites that have failed more than once are assigned a high priority, and one-hit-wonders are given the lowest. In practice, test suites from the failure cache are given a priority of 0, as they have failed more than once. Test suites that fulfill the regular window-based criteria are assigned a 1, and suites that fulfill none of the above are assigned a 2. Test suites are then executed in ascending order.

Kwon and Ko document the results of their experiment, which show that the Bloom filter method is much faster and more precise than random TCP and RTS. The same is true when compared with baseline industry methods, improving the average number of failures detected by a factor of 2.23 and reducing execution time by between 4.9 and 42.2 hours.

III. ANALYSIS

The experiments conducted in [5] were, as mentioned, done in the context of a video conferencing software system, where the test executions required a relatively significant amount of time. There is naturally a need to ensure that the tests to be executed are likely to find faults, as each execution consumes time and resources that one wishes to avoid wasting. An interesting addition to ROCKET would be dynamic, real-time selection and prioritization. By achieving a sufficient real-time test coverage estimation of the executing tests, the ROCKET approach could stop executing test cases as soon as it considers a sufficient test coverage to have been reached. The dynamic (re)selection and prioritization adopted by CoDynaQ could, for example, be something to attempt incorporating into ROCKET.

Generally, the logic behind the ROCKET approach is fairly intuitive. There is a problem of time-consuming test case executions, and a need to be frugal with the available time. Therefore it is important to ensure that the executed test cases have a weighted history of failing, and that their execution times are as short as possible. One could consider ROCKET a fairly promising approach, but it is mainly a matter of context. Tests taking more than 30 minutes to execute are not the first thing that comes to mind when thinking about continuous integration. Then again, the fact that this is in the context of continuous integration is what makes appropriate test case selection and prioritization so important in the first place. Whether the ROCKET approach is just as effective in the context of test cases with significantly shorter execution times is unclear. Perhaps the improvements that ROCKET delivers are not as valuable when shifting from hour-long improvements in the VCS context to a couple of minutes in another context. With regard to prioritization and selection, ROCKET might find greater value in prioritization. ROCKET is derived from a context of time-consuming, 30-minute (on average) test case executions, where each executed test case has a significant impact in terms of time and resources.
Therefore, by putting the test cases in an order where the first test case is very likely to fail, one receives (relatively) quick feedback if that is the case. On the other hand, if no test case selection is present, a significant number of redundant test cases are prone to be executed.

With regard to the ROCKET method, we have a proposed improvement. Instead of storing all previous results for every test case and multiplying them by a fixed set of weights $w_i$, one could recursively update each priority score as

$$ w_i = \begin{cases} 0, & i = 0 \\ \alpha \lambda_{i-1} + (1 - \alpha) w_{i-1}, & i \geq 1 \end{cases} $$

where $\lambda_{i-1}$ is $1$ if the last execution passed, $-1$ if it failed, and $\alpha$ is the weight given to the most recent execution result; this is, in effect, an exponentially weighted moving average (a minimal sketch of this update is given below). The approach retains the method's original property that recent results are more important than older ones. Instead of having to store all previous test results, one only has to store a single priority score per test, which is updated after each test run.

RETECS aims to improve the process of test case selection and prioritization over time, as the network gains knowledge and policies regarding choice-making for the best possible results. The evaluation performed by Spieker et al. shows that RETECS is at least on par with, and often better than, the other methods: Random, Sorting, and Weighting. This difference in performance would most likely grow over time, as RETECS learns which order and selection of tests yields the fastest feedback to the developers. As the evaluation of RETECS was performed on static data sets, further testing in a live, dynamic environment should be performed to assess the actual advantages and drawbacks of the method. The limits of RETECS are hard to identify, as the neural network's parameters can be modified and adapted to the project at hand, but a point could be made that the method needs to be applied to a varied set of projects to establish what and where the limits are. Regardless of the limits, the evaluation shows that RETECS is more proficient at learning and improving the selection and prioritization if the test suite has a higher percentage of failed tests.

The CoDynaQSingle method shows a lot of promise. It significantly reduces the time until both the first failure and all failures are detected. In their article [8], Zhu et al. claim their work to be novel with regard to dynamic re-prioritization after each test run. This sets the CoDynaQ methods apart from the other discussed techniques, and is also what makes them highly interesting: test cases may be given an initial prioritization score by another algorithm, which is then re-evaluated based on their co-failure distribution. This makes CoDynaQ highly compatible with other prioritization techniques.

The Bloom filter method attempts to improve regression testing through both logical and technical improvements. The method does show some promise, even if the idea of filtering out one-hit-wonders is relatively simple compared to the other techniques discussed in this report. However, Kwon and Ko do concede that the occurrence of one-hit-wonders is not necessarily universal to all data sets, or to all parts of a continuous development process. The simplicity of the method is to its advantage though - the Bloom filter method can easily be combined with others presented in this report. The use of the Bloom filter data structure is certainly clever, as it is perfectly suited for the fast operations required for the method to function.
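Returning briefly to the recursive update proposed above for ROCKET, a minimal sketch follows; the value of $\alpha$ is a tuning knob, and the default below is only an assumption.

```python
def update_score(previous_score, passed, alpha=0.7):
    """Exponentially weighted moving average over pass/fail outcomes:
    w_i = alpha * lambda + (1 - alpha) * w_{i-1}, with w_0 = 0."""
    lam = 1 if passed else -1
    return alpha * lam + (1 - alpha) * previous_score
```

Only a single score per test case needs to be stored, and each update is O(1), regardless of how long the execution history grows.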
As for the Bloom filter data structure itself, there may very well be other data structures that work just as well - the strength of the method comes more from the logical improvement.

There are some similarities between the RETECS and ROCKET methods. It is very possible that the Weighting method used by Spieker et al. during the evaluation of RETECS is a variation of ROCKET, as they both use weighted sums. As Spieker et al. concluded that Weighting is a naive version of RETECS, and that RETECS had better results than the compared methods, this would suggest that ROCKET is a naive and inferior version of RETECS.

In the context of regression testing, it is important to discuss whether TCP or RTS is of greater importance. This can be difficult, as comparing the method results directly is essentially impossible: the methods in this report optimize for different variables and operate on different data sets. Whether to focus on TCP or RTS becomes a question of context - in some scenarios, earlier fault detection is of more value than better fault coverage. Factors like average test execution time also affect whether to focus on TCP or RTS. Sometimes, as in the case of methods like ROCKET and RETECS, the distinction between prioritization and selection starts to disappear. As CoDynaQ focuses on TCP, certain advantages gained from effective selection may be lost. Potentially, some methods could be combined, but this presupposes that companies would be willing to try out new testing techniques in the first place. According to Minhass et al., new methods from academia do not propagate well into industry [6]. This observation probably greatly reduces the possibility of method combinations.

IV. CONCLUSION

How do the regression testing techniques discussed in this report compare, in the context of a continuous integration development environment? In conclusion, all of the methods discussed could potentially improve regression testing in a continuous integration environment, since they all outperform their respective benchmark comparison methods. All methods deal with the problem of TCP, and all but CoDynaQ also deal with RTS. CoDynaQ, however, has the benefit that it could be combined with any of the others to further increase efficiency. The Bloom filter method also shows potential, not least because it too can be combined with other regression testing techniques. As regards RETECS, the performed evaluation shows that it is a promising method. It is, however, too early to draw any conclusions on whether or not it is superior to the other methods presented in this paper. To make such a statement, further testing needs to be done, preferably in a live industrial setting. Testing all four methods on the same data sets would also be a viable way to determine which performs the best test case prioritization and selection. As a final point, we suspect these new methods, even if proven effective in experimental environments, are unlikely to achieve wide use within industry. As a consequence, combinations of the techniques are also probably never going to be implemented in a real-world scenario.

V. CONTRIBUTION STATEMENT

All sections of this document have been co-written by the stated authors, with the exception of section II, the origins of which are shown below.

II-1 ROCKET Keiwan Mosaddegh
II-2 RETECS Max Strandberg
II-3 CoDynaQ Erik Stålberg
II-4 Bloom filter method Emanuel Eriksson

REFERENCES
{"Source-Url": "https://fileadmin.cs.lth.se/cs/Education/ETSN20/reports/GroupF.pdf", "len_cl100k_base": 6780, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 20240, "total-output-tokens": 7637, "length": "2e12", "weborganizer": {"__label__adult": 0.00030231475830078125, "__label__art_design": 0.00024890899658203125, "__label__crime_law": 0.0002887248992919922, "__label__education_jobs": 0.0007233619689941406, "__label__entertainment": 4.4286251068115234e-05, "__label__fashion_beauty": 0.00014078617095947266, "__label__finance_business": 0.00016415119171142578, "__label__food_dining": 0.00028896331787109375, "__label__games": 0.0003981590270996094, "__label__hardware": 0.0006022453308105469, "__label__health": 0.0004291534423828125, "__label__history": 0.00014865398406982422, "__label__home_hobbies": 6.389617919921875e-05, "__label__industrial": 0.0002758502960205078, "__label__literature": 0.00019812583923339844, "__label__politics": 0.00017404556274414062, "__label__religion": 0.0003368854522705078, "__label__science_tech": 0.00872039794921875, "__label__social_life": 7.712841033935547e-05, "__label__software": 0.005054473876953125, "__label__software_dev": 0.98046875, "__label__sports_fitness": 0.00024044513702392575, "__label__transportation": 0.0003020763397216797, "__label__travel": 0.0001569986343383789}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33290, 0.03241]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33290, 0.30861]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33290, 0.94169]], "google_gemma-3-12b-it_contains_pii": [[0, 4885, false], [4885, 9278, null], [9278, 15710, null], [15710, 22320, null], [22320, 28650, null], [28650, 33290, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4885, true], [4885, 9278, null], [9278, 15710, null], [15710, 22320, null], [22320, 28650, null], [28650, 33290, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33290, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33290, null]], "pdf_page_numbers": [[0, 4885, 1], [4885, 9278, 2], [9278, 15710, 3], [15710, 22320, 4], [22320, 28650, 5], [28650, 33290, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33290, 0.07018]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
b72e36cda78150c38901e3cfc020bd30cbca0be0
Security Lab Manager: Virtual Security Training for Universities

Engineer: Simon Owens
Advisor/Sponsor: Mark Randall
Computer Science
University of Evansville
May 2, 2019

ABSTRACT

The Security Lab Manager is a web application that manages vulnerable virtual machines for users to practice cyber security on. Users only need to log in to the website to get started - no setting up environments or downloading software. Each exercise has unique answers for each student, so answer sharing is not viable. Administrators can create, edit, and view classes, exercises, and users. Grades can be emailed out automatically.

LIST OF FIGURES AND TABLES

Figure 1 – Host Architecture
Figure 2 – Login Page
Figure 3 – Student View
Figure 4 – Administrator View
Figure 5 – Development and Deployment Process
Figure 6 – Weak Authentication
Figure 7 – Vulnerable Exercise
Figure 8 – GUI Flow
Table 1 – User
Table 2 – Administrator
Table 3 – Class
Table 4 – Exercise
Table 5 – Tool Comparison

INTRODUCTION

Learning modern security practices is difficult and time consuming. Much of this time can be spent setting up safe environments in which to practice and reproduce vulnerabilities. Some security exercises may not work because of differences in operating system protections, configuration settings, permissions, and patch versions. The Security Lab Manager eliminates these problems by offering a variety of exercises that can be started in seconds. The application has a convenient interface that allows users to start, stop, and restart exercises, and to submit answers when they are complete. Users can practice exploiting a variety of web and desktop C++/Java applications. For each exercise, there are guides on how to code securely and prevent the vulnerabilities just exploited. Administrators can create multiple choice questions to introduce certain topics before hands-on exercises. Administrators can also create their own exercises or import a vulnerable machine.

Static analysis, vulnerability scanning, CI/CD, and test-driven development were used in the creation of this application. This makes the application production ready by ensuring few vulnerabilities and a secure design. It is easily updatable, using a Jenkins pipeline to ensure updates are frequent and do not break functionality. It is easily deployed, because Docker containers allow for easy installation on a desktop, on a server, or on a cloud computing platform. See Appendix for more information.

PROBLEM STATEMENT AND BACKGROUND

Universities want to teach software security because of the industry demand for secure coding and security engineers. The best way to prepare students is with hands-on experience seeing, exploiting, and patching vulnerabilities. Setting up a practice area for students with multiple computers is expensive and requires management. The typical setup depends on class size and available funds: running vulnerable virtual machines might not support a class of hundreds of students, or resources could be wasted if too much infrastructure were allocated. Services would have to be set up, systems updated, and users added and removed. Students will frequently crash their target computers, which requires constant troubleshooting and resetting. If students are given full permissions on the infrastructure to troubleshoot their own problems, they could do nefarious things or even break the infrastructure.
There are a couple of organizations devoted to creating practice environments - the most popular one being Offensive Security [9]. Most of their courses have limited lab access time and typically cost well over one thousand dollars per student. It is also not possible to translate their material and labs into course grades for university students.

There are a variety of problems when students are responsible for setting up their own environment and exercises. These exercises require virtualization software to run on, because of software compatibility issues and the risk of harming the student's computer. Students could host their own virtual machines, but this takes time away from class, requires computing power, and does not give students unique answers to submit. Setting up a victim and an attack virtual machine takes several hours and does not directly help students learn security. Just getting one exercise to work might require installation and configuration of an operating system patch, a DLL, a library, an application, networking, firewall settings, registry settings, and anti-virus rules. This configuration usually requires 4 GB of RAM and a couple of CPU cores on top of the student's host OS, which may be impossible for some students or result in an extremely slow experience for others. Vulnerable machines from the Internet also do not have unique answers, so one student could do the exercise, email the result to the rest of the class, and the instructor would have no way of knowing who did the exercise. Even if all of these efforts were planned, supported, and managed, there are no existing solutions that translate student exercises into grades for professors. Professors could take the time to create many exercises and vulnerable virtual machines, but there are already hundreds of great resources available on the Internet.

This is where this project comes in - the Security Lab Manager. It takes the vulnerable exercises others have already made and manages them so students can attack, destroy, and reset them. Professors can view how long students spent on their exercises, and all of the commands they performed. If the class is not ready for hands-on exercises, the instructor can easily create multiple choice exercises for students to complete. Hosting this application takes minimal resources and can scale easily to the class size. The GUI and exercises work seamlessly for all class sizes. Professors have a convenient interface to view completed student exercises and can be notified if any students cheated.

REQUIREMENTS AND SPECIFICATIONS

These requirements and specifications deliver the functionality that professors and students need in order to learn security at a rapid pace. The main goal of this application is to deliver a secure portal through which professors and students interact with virtual machines.

1. **GUI interface for students to log in, launch exercises, revert machines, and submit answers**
This GUI will have two main components: a grading page for professors and a page for students to interact with their exercises.

2. **GUI interface for teachers to log in and view answers of students**
This interface should display which students have submitted answers and whether any of their answers match each other. Since each student should have a unique answer, this will catch cheating. Professors should be able to assign grades within seconds.
3. **Students should be able to start, stop, cancel, and revert their security exercises**
Students should always know what action is currently processing, and have the ability to cancel it.

4. **The application should allow a student to launch only one exercise at a time**
This limits the resources the application consumes. Students should work on only one exercise at a time, so they should be restricted by the application.

5. **The application should be multi-threaded**
Users should never have to wait for a server-side action to complete before issuing other actions. This makes the application feel smooth.

6. **There must be at least 3 web security exercises**
This allows users to immediately start practicing upon download; no additional configuration needs to be done in order to start learning. Web security is extremely relevant in today's industry.

7. **There must be at least 3 desktop application security exercises**
This allows users to immediately start practicing upon download; no additional configuration needs to be done in order to start learning. Desktop application security is still relevant, but less common in security jobs.

8. **The application must be developed securely with a static analyzer and must undergo scanning by a web application scanning tool**
Students who learn more about security may be tempted to attack this application for fun, or even to change their grade. OWASP ZAP will help detect vulnerabilities during each build.

9. **The application should be extremely easy to set up and update, and should have documentation**
Administrators should only have to download and run one command to install the application. Administrators get reports on any issues, vulnerabilities, and code quality on the download page.

10. **Each exercise must be uniquely identified for each student**
This helps translate security exercises into grades for students, and helps prove that each student did their own work.

11. **There must be a proxy in front of the application for scalability**
Some environments may have hundreds of students, which could make the web application slow. Using Nginx allows static files to be delivered faster, and allows administrators to spin up more application instances to meet the number of users.

DESIGN

Overview

The web server is a Django project that interfaces with Docker to launch virtual machines. Figure 1 shows the architecture for how users log in and reach exercises.

*Figure 1 – Host Architecture*

The Security Lab Manager is a collection of Docker services working together to virtualize this environment: a proxy, a web front-end, a back-end, and a database. Docker containers are used because they eliminate installation compatibility issues, scale well, and are lighter weight than other virtualization software. An administrator can download the project and install the application with one click on either Windows or CentOS 7 running Docker - the installer only needs to enter the master password for the application. The administrator can then visit the IP of the host computer via HTTPS to log in and start creating users. Once students log in, they will be able to view various exercises and start them. Starting an exercise launches a lightweight Docker container. This container will have a unique hash in its root directory, based on the instructor's password, the student's name, and the exercise name. The goal is for students to find the vulnerability, exploit it, and then find the unique hash in the exercise.
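The following is a minimal sketch of how such a per-student answer hash could be generated and injected when the exercise container is launched, using the Docker SDK for Python. The hash derivation, the environment-variable injection, and all names are assumptions for illustration; the report states that the application places the hash in a file in the container's root directory.

```python
import hashlib

import docker  # Docker SDK for Python

def answer_hash(instructor_password, student_name, exercise_name):
    """Derive a unique answer from exactly the three inputs named above."""
    material = f"{instructor_password}:{student_name}:{exercise_name}"
    return hashlib.sha256(material.encode()).hexdigest()

def launch_exercise(image, instructor_password, student_name, exercise_name):
    """Start a per-student exercise container carrying its unique hash."""
    token = answer_hash(instructor_password, student_name, exercise_name)
    client = docker.from_env()
    # Students must exploit the exercise to recover the token and submit it.
    return client.containers.run(
        image,
        detach=True,
        environment={"ANSWER_HASH": token},
    )
```

Because the hash is a deterministic function of the student and the exercise, two students can never share a valid answer.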
Once students complete the exercise, they can submit their unique hash to the application. If students crash the virtual machine, they can simply restart it with one click. Instructors can then view students' progress and be alerted if any submitted hashes are identical. If instructors wish to add new exercises, they can create a multiple-choice question or create their own vulnerable virtual image. They can then import that vulnerable image into the application by entering the exercise's name, where it should be grouped, and the Docker image name. Below are potential issues for users:

- Students will be sending malicious traffic across the network at this Security Lab Manager. This could potentially violate University policies.
- This application can launch Docker containers with full permissions. If the main application were compromised, the attacker could use the resources of the host machine and pivot onto other targets.
- The Security Lab Manager must be centrally hosted and have enough computing power to support the class size.

*Graphical Interface*

The graphical interface is constructed using HTML5, CSS, JavaScript, and jQuery. The login page, shown in Figure 2, is the same for users and administrators.

*Figure 2 – Login Page*

Users are directed to the page shown in Figure 3, where they can launch exercises and submit their answers. Users can see all of the different sections, with all of the exercises associated with them.

*Figure 3 – Student View*

Instructors have an entirely different view, shown in Figure 4, where they can manage the performance of the application and see which students have completed their exercises. They can easily scale the application, add users, and send out grades to students.

*Database*

Information for users, administrators, classes, and exercises is stored in a PostgreSQL database, chosen because of my familiarity with it, its easy integration with Django, its speed, and its low learning curve. Information for users is stored as shown in Table 1.

<table>
<thead>
<tr> <th>DB Attribute</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Name</td> <td>Identifies in human readable way</td> </tr>
<tr> <td>Password hash</td> <td>For login and unique hash in exercise</td> </tr>
<tr> <td>Email</td> <td>Unique and allows for communication</td> </tr>
<tr> <td>Classes&lt;Array&gt;</td> <td>The classes the user has access to</td> </tr>
</tbody>
</table>

Table 1 – User

Administrators have a different table, since they have access to all exercises, as shown in Table 2.

<table>
<thead>
<tr> <th>DB Attribute</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Name</td> <td>Identifies in human readable way</td> </tr>
<tr> <td>Password Hash</td> <td>For login and unique hash in exercise</td> </tr>
<tr> <td>Email</td> <td>Unique and allows for communication</td> </tr>
</tbody>
</table>

Table 2 – Administrator

Each class is comprised of various exercises, as shown in Table 3.

<table>
<thead>
<tr> <th>DB Attribute</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Class Name</td> <td>Identifies in human readable way</td> </tr>
<tr> <td>Exercises&lt;Array&gt;</td> <td>The list of exercises belonging to a class</td> </tr>
</tbody>
</table>

Table 3 – Class

Each exercise has a unique hash for every user, as shown in Table 4.
*Vulnerable Exercises*

Three custom web and desktop exercises have been created. There are also instructions on how to create and import new exercises into the application. One of the web exercises is a weak authentication exercise. This exercise is built on top of the Ubuntu Docker image. The example is a JavaScript web page containing the code in Figure 6.

```javascript
if (usr.value == "simon" && pass.value == "password") {
```

Figure 6 – Weak Authentication

Below you can see the GUI generated for this exercise.

*Figure 7 – Vulnerable Exercise*

Users can simply view the source of the page to discover the credential check and where the login page directs to.

DEVELOPMENT

*Continuous Integration and Continuous Delivery*

This project is developed using CI/CD via Jenkins, a popular open-source CI/CD tool. This allows the application to easily manage dependencies and vulnerabilities, and enables easy contribution. Figure 5 diagrams how the application is developed and deployed.

**Figure 5 – Development and Deployment Process**

- Using Docker as the virtualization layer allows users to easily add new security exercises. I do not need to spend time making new exercises, since other professionals already maintain images like WebGoat, Bricks, and Damn Vulnerable Web Application, found on the OWASP site.
- Using an Nginx proxy and Docker containers allows the administrator to scale the application's performance easily. This application could support anywhere from 5 to hundreds of users via load balancing and redundancy.
- The continuous integration Jenkins build will detect if a base container breaks functionality upon any update. A failed build on the development branch will not be pushed to production, so stable releases can always be used. Before any code can be added to production, all tests must pass and there must not be any Sonarqube vulnerabilities, code smells, or bugs. Snyk and Dependabot scan the project and its dependencies for common vulnerabilities.
- All requests to the web application front-end come through Nginx via HTTPS, so attackers cannot snoop on traffic; Nginx's strong security program also makes remote exploits harder.
- A web vulnerability scan is run against the system on every build to ensure none of the OWASP Top 10 weaknesses exist in the web application.

RESULTS

The application met all requirements, is much more efficient than using virtual machines, and helps students learn security in a hands-on way. This project was an incredible learning experience because I used modern security and CI/CD tools while acting as a full-stack developer. Docker was much more complex than I anticipated; I am still learning its CLI and software development kit. Docker containers are extremely powerful for virtualizing applications because of their efficiency: creating a unique Docker container can take seconds, whereas creating a new virtual machine generally takes half an hour. Another success was using the various tools throughout project development:

- Jenkins: this allowed me to start automated vulnerability scans and deploys on every commit. I would have had to do a great deal of manual work if not for this tool.
- Sonarqube: this caught several vulnerabilities in my project. It also kept my code closer to best practices.

The project seems to be secure and stable, and it creates unique exercises for students. Below is a table comparing downloading vulnerable software on your own computer (BYOD), setting up virtual machines, and using the Security Lab Manager.

<table>
<thead>
<tr>
<th>Features</th>
<th>BYOD</th>
<th>Virtual Machines</th>
<th>Security Lab Manager</th>
</tr>
</thead>
<tbody>
<tr>
<td>Configured vulnerable exercises</td>
<td>✗</td>
<td>Some</td>
<td>✓</td>
</tr>
<tr>
<td>Downloaded/Configured debugging tools</td>
<td>✗</td>
<td>Some</td>
<td>✓</td>
</tr>
<tr>
<td>Cross-platform</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td>Exercise completion results in grade</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td>Unique answers to exercises</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td>Access Control</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td>Low computing resource requirements</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td>Requires only an internet browser</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td>Manage and email grades</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
</tr>
</tbody>
</table>

Table 5 – Tool Comparison

This application has the potential to be extremely useful to universities and businesses trying to teach security, because it is efficient and automates the process of assigning grades to students for their efforts. Below are some important features:

- Automatic grading/emailing
- Configure all data via GUI
- Secure web portal
- Three full virtual exercises
- Scales to performance needs

*Figure 8 – GUI Flow*

Figure 8 shows the general flow that users and instructors can navigate on the site. Users only need to log in to the website to get started: no setting up environments or downloading software. Each exercise has unique answers for students, so answer sharing is not viable. Instructors can check all of the submissions for duplicate hashes; a duplicate would alert them that cheating has occurred (a sketch of this check appears at the end of this section). Administrators can create, edit, and view classes, exercises, and users. Grades can be emailed out automatically. The site settings can also be changed to limit or increase the CPU/RAM used by this application, to keep up with the class size.

The main failure of my project was not creating enough exercises: I planned on creating 6 virtual exercises but was only able to create 3. Creating exercises in Docker containers is not that hard, but it does take a decent amount of effort. I have the ability to create several more exercises, but I ran out of time as the project ended. Several more features would significantly enhance the value of the Security Lab Manager. All of these features have been logged on Kanban boards in case I decide to further develop this application after graduation.
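The duplicate-hash check mentioned above is straightforward to express; a minimal sketch with invented names, not the application's actual code:

```python
from collections import defaultdict


def find_duplicate_submissions(submissions):
    """Flag any hash submitted by more than one student for the same exercise.

    submissions: iterable of (student, exercise, submitted_hash) tuples.
    """
    by_hash = defaultdict(set)
    for student, exercise, submitted in submissions:
        by_hash[(exercise, submitted)].add(student)
    return {key: students for key, students in by_hash.items() if len(students) > 1}


suspicious = find_duplicate_submissions([
    ("alice", "weak-auth", "ab12"),
    ("bob",   "weak-auth", "ab12"),  # same hash as alice -> flagged
    ("carol", "weak-auth", "9f3c"),
])
print(suspicious)  # {('weak-auth', 'ab12'): {'alice', 'bob'}}
```

Because every valid hash is unique per (student, exercise) pair, any collision is strong evidence of answer sharing.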
CONCLUSION

This application was developed in a way that maximizes satisfaction from the product owner, by presenting constant demos and gathering feedback. Modern development practices like test-driven development, static analysis, vulnerability scanning, and CI/CD enhanced the application's security and stability. This project enhanced my knowledge as a developer and security engineer. Careful planning, adjusting plans, using CI/CD, and injecting static analysis into application development make a significant difference in the product created. The Security Lab Manager is a great tool for learning security safely in a classroom setting.

REFERENCES

BIOGRAPHY

Simon Owens graduated from the University of Evansville in May 2019 at the age of twenty-two. He grew up in Evansville and now works for Raytheon in Indianapolis as a Cybersecurity Engineer. Simon specializes in offensive security testing for Raytheon and works with developers on how to develop securely and integrate testing into their daily workflow. His open source projects can be found at: https://github.com/so87 and https://gitlab.io/simonowens157.

Appendix

Static Analysis

Static analysis is the method of analyzing the syntax of a programming language for improper style and flaws without the code being run. The analyzer will attempt to analyze logical paths and look for logic that could be exploited by certain input. Sonarqube was chosen as the static analyzer because of its popularity, free usage, IDE plugin, and support of Python, JavaScript, HTML, and CSS. Sonarqube will display errors in the IDE in real time while you code, and will give you project metrics like total vulnerabilities, total code smells, test coverage, and total lines of code. Static analyzers are used during development to decrease technical debt, because vulnerabilities are much cheaper to fix the sooner they are found. Sonarqube will not catch all vulnerabilities, which is why vulnerability scanners are also used. To learn more about Sonarqube, please visit their website [1].

Vulnerability Scanning

Vulnerability scanners send various inputs to a target and analyze the corresponding output for known vulnerabilities. There are vulnerability scanners for operating systems, Docker containers, C++ applications, web applications, and so on. Since this project is a web application and uses various Docker containers, OWASP ZAP and Anchore Engine are used. OWASP ZAP is a well-known web application vulnerability scanner that looks for weaknesses like cross-site scripting, SQL injection, authentication bypass, and other common weaknesses. Each time a build is performed on this project, OWASP ZAP will begin a scan and save the result for later analysis. This allows the developer to see what an attacker would see when scanning the application for vulnerabilities. Anchore Engine analyzes the contents of a Docker container for misconfigurations and vulnerable libraries/tools. For example, some versions of SQL servers contain race-condition vulnerabilities which can be exploited. If a Docker container were running such an old version, Anchore Engine would report this information after a scan. To learn more about OWASP ZAP [2] and Anchore Engine [3], please visit their websites.

Continuous Integration and Continuous Delivery

Continuous Integration is the process of merging all approved code into a source-controlled repository. Code is approved for integration if automated tests pass. This way all developers can improve the production code base without breaking functionality or injecting bugs. Continuous Delivery is the process of automatically deploying the integrated software changes to the production environment. This allows a developer to see their changes in production the same day, rather than updating the application every quarter or year. Jenkins is the Continuous Integration and Continuous Delivery (CI/CD) tool used for this project. Every time the development branch is updated, static analysis, vulnerability scans, and tests are run.
If all of those tests pass and a certain level of quality is met, those changes are merged to the production branch and then deployed on my local server. To learn more about CI/CD or Jenkins, visit Jenkins' website [4].

Test Driven Development

Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, and then the software is improved just enough to pass the new tests. This is opposed to software development that allows code to be added without proof that it meets requirements. Tests are written first, and each test should fail; code is written to attempt to pass the test; once the test passes, the code is reviewed; and this process is repeated for every requirement (a small sketch of this cycle appears at the end of this appendix). This helps developers focus on meeting requirements and creating better tests that alert the developer when a change breaks functionality. Selenium and Mocha are used as the primary testing tools. Selenium allows for easy functional testing: a web browser is started, navigates to a certain page, and then looks for a specific result. Mocha tests are used for basic unit testing and to check for basic security configurations like redirects and key strength. To learn more about Selenium [5] and Mocha [6], please visit their websites.

Docker Virtualization

Docker is a virtualization technology that puts applications and operating systems into what it calls a container. A container is a slimmed-down version of a virtual machine: only the libraries and tools required to run an application are included. This generally results in a smaller and more efficient virtual machine. Docker has the ability to easily create multiple containers to scale to a developer's needs. Docker hosts images on its site [8] which are preconfigured for different applications. For more information about Docker, I recommend watching this video.

Django Framework

Django is a framework for creating websites. It is based on Python and open source. It already has several built-in features that make authentication, scaling, and database interaction easy and secure. This is how websites, even simple ones designed by a single person, can still include advanced functionality like authentication support, management and administrator panels, comment boxes, file upload support, contact forms, and more. Python, JavaScript, HTML, and CSS are used in conjunction to create web pages for this project. To learn more about Django, please visit their website [7].
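To make the red-green TDD cycle described above concrete, here is a minimal hypothetical pytest example; the requirement, module, and function names are invented for illustration and are not taken from the project:

```python
# Step 1 (red): write the test first; it fails because exercise_hash doesn't exist yet.
# Requirement: two different students must never receive the same answer hash.
from grading import exercise_hash  # hypothetical module under test


def test_hashes_are_unique_per_student():
    a = exercise_hash("secret", "alice", "weak-auth")
    b = exercise_hash("secret", "bob", "weak-auth")
    assert a != b

# Step 2 (green): write just enough code in grading.py to pass, e.g.
#
#     import hashlib
#
#     def exercise_hash(instructor_password, student, exercise):
#         material = f"{instructor_password}:{student}:{exercise}"
#         return hashlib.sha256(material.encode()).hexdigest()
#
# Step 3: rerun pytest, review the code, and repeat for the next requirement.
```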
{"Source-Url": "https://www.evansville.edu/majors/eecs/downloads/projects2019/Security-Lab-Manager-Report.pdf", "len_cl100k_base": 5484, "olmocr-version": "0.1.53", "pdf-total-pages": 27, "total-fallback-pages": 0, "total-input-tokens": 45204, "total-output-tokens": 6382, "length": "2e12", "weborganizer": {"__label__adult": 0.0004696846008300781, "__label__art_design": 0.000514984130859375, "__label__crime_law": 0.0009007453918457032, "__label__education_jobs": 0.0230255126953125, "__label__entertainment": 8.225440979003906e-05, "__label__fashion_beauty": 0.00024306774139404297, "__label__finance_business": 0.0004651546478271485, "__label__food_dining": 0.0004591941833496094, "__label__games": 0.00064849853515625, "__label__hardware": 0.0013818740844726562, "__label__health": 0.0005230903625488281, "__label__history": 0.0002582073211669922, "__label__home_hobbies": 0.00019156932830810547, "__label__industrial": 0.00058746337890625, "__label__literature": 0.0002579689025878906, "__label__politics": 0.00026297569274902344, "__label__religion": 0.0004963874816894531, "__label__science_tech": 0.0130157470703125, "__label__social_life": 0.00024628639221191406, "__label__software": 0.01509857177734375, "__label__software_dev": 0.93994140625, "__label__sports_fitness": 0.0003418922424316406, "__label__transportation": 0.000576019287109375, "__label__travel": 0.0002465248107910156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27896, 0.0112]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27896, 0.36678]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27896, 0.92068]], "google_gemma-3-12b-it_contains_pii": [[0, 623, false], [623, 996, null], [996, 2483, null], [2483, 4571, null], [4571, 6013, null], [6013, 7514, null], [7514, 8937, null], [8937, 9172, null], [9172, 9385, null], [9385, 11161, null], [11161, 11688, null], [11688, 12104, null], [12104, 12925, null], [12925, 14300, null], [14300, 14935, null], [14935, 15645, null], [15645, 16650, null], [16650, 18845, null], [18845, 19452, null], [19452, 20488, null], [20488, 21135, null], [21135, 21927, null], [21927, 22423, null], [22423, 24273, null], [24273, 26157, null], [26157, 27762, null], [27762, 27896, null]], "google_gemma-3-12b-it_is_public_document": [[0, 623, true], [623, 996, null], [996, 2483, null], [2483, 4571, null], [4571, 6013, null], [6013, 7514, null], [7514, 8937, null], [8937, 9172, null], [9172, 9385, null], [9385, 11161, null], [11161, 11688, null], [11688, 12104, null], [12104, 12925, null], [12925, 14300, null], [14300, 14935, null], [14935, 15645, null], [15645, 16650, null], [16650, 18845, null], [18845, 19452, null], [19452, 20488, null], [20488, 21135, null], [21135, 21927, null], [21927, 22423, null], [22423, 24273, null], [24273, 26157, null], [26157, 27762, null], [27762, 27896, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, 
false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27896, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27896, null]], "pdf_page_numbers": [[0, 623, 1], [623, 996, 2], [996, 2483, 3], [2483, 4571, 4], [4571, 6013, 5], [6013, 7514, 6], [7514, 8937, 7], [8937, 9172, 8], [9172, 9385, 9], [9385, 11161, 10], [11161, 11688, 11], [11688, 12104, 12], [12104, 12925, 13], [12925, 14300, 14], [14300, 14935, 15], [14935, 15645, 16], [15645, 16650, 17], [16650, 18845, 18], [18845, 19452, 19], [19452, 20488, 20], [20488, 21135, 21], [21135, 21927, 22], [21927, 22423, 23], [22423, 24273, 24], [24273, 26157, 25], [26157, 27762, 26], [27762, 27896, 27]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27896, 0.16848]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
93759e5a8b861db1f7e0647b4e07a8b4bc90f55f
[REMOVED]
{"len_cl100k_base": 6718, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 28449, "total-output-tokens": 8503, "length": "2e12", "weborganizer": {"__label__adult": 0.0004570484161376953, "__label__art_design": 0.0017385482788085938, "__label__crime_law": 0.0005412101745605469, "__label__education_jobs": 0.00466156005859375, "__label__entertainment": 0.0003001689910888672, "__label__fashion_beauty": 0.00024235248565673828, "__label__finance_business": 0.00032520294189453125, "__label__food_dining": 0.0005984306335449219, "__label__games": 0.002361297607421875, "__label__hardware": 0.0011110305786132812, "__label__health": 0.000965595245361328, "__label__history": 0.0006232261657714844, "__label__home_hobbies": 0.00013208389282226562, "__label__industrial": 0.0005545616149902344, "__label__literature": 0.0010805130004882812, "__label__politics": 0.0004405975341796875, "__label__religion": 0.0006251335144042969, "__label__science_tech": 0.211669921875, "__label__social_life": 0.00021827220916748047, "__label__software": 0.0197296142578125, "__label__software_dev": 0.75048828125, "__label__sports_fitness": 0.0003254413604736328, "__label__transportation": 0.0006103515625, "__label__travel": 0.00030159950256347656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36655, 0.01971]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36655, 0.57352]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36655, 0.90581]], "google_gemma-3-12b-it_contains_pii": [[0, 1051, false], [1051, 6464, null], [6464, 12590, null], [12590, 18339, null], [18339, 24035, null], [24035, 29092, null], [29092, 36655, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1051, true], [1051, 6464, null], [6464, 12590, null], [12590, 18339, null], [18339, 24035, null], [24035, 29092, null], [29092, 36655, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36655, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36655, null]], "pdf_page_numbers": [[0, 1051, 1], [1051, 6464, 2], [6464, 12590, 3], [12590, 18339, 4], [18339, 24035, 5], [24035, 29092, 6], [29092, 36655, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36655, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
d2e3dfc98d9715ed53db125e2593e63eecbc82aa
Automating Data Reuse in High-Level Synthesis

Wim Meeus, Imec and Ghent University, Gent, Belgium. Email: Wim.Meeus@UGent.be
Dirk Stroobandt, Ghent University, Gent, Belgium. Email: Dirk.Stroobandt@UGent.be

Abstract—Current High-Level Synthesis (HLS) tools perform excellently for the synthesis of computation kernels, but they often don't optimize memory bandwidth. As memory access is a bottleneck in many algorithms, the performance of the generated circuit will benefit substantially from memory access optimization. In this paper we present an automated method and a toolchain to detect reuse of array data in loop nests and to build hardware that exploits this data reuse. This saves memory bandwidth and improves circuit performance. We make use of the polyhedral representation of the source program, which makes our method computationally easy. Our software complements the existing HLS flows. Starting from a loop nest written in C, our tool generates a reuse buffer and a loop controller, and preprocesses the loop body for synthesis with an existing HLS tool. Our automated tool produces designs from unoptimized source code that are as efficient as those generated by a commercial HLS tool from manually-optimized source code.

I. INTRODUCTION

High-level Synthesis (HLS) is a recent step in the design flow of a digital electronic circuit, moving the design effort to higher abstraction levels [1], [2]. HLS translates a behavioral description of an algorithm at the algorithmic abstraction level into a design at the Register Transfer Level (RTL). Ref. [3] shows that some of these tools generate excellent RTL designs for computation kernels, sometimes outperforming handcrafted RTL design in terms of speed, chip area and power consumption. Most existing HLS tools don't optimize memory accesses though, leaving a lot of potential for circuit performance optimization untapped. This is particularly unfortunate because the communication between processor cores and memory is a well-known bottleneck that limits performance.

We present an automatic method for generating data reuse buffers during HLS. This should lead to a more optimal design than a general-purpose memory system with caches and scratchpad memories. We focus on array accesses in loop nests, because that is where the most gain can be obtained. We ensure that every array location is read and/or written only once during the execution of the loop nest. This is known as communication coalescing and leads to a reduction of the data traffic to and from memory, improving circuit performance and alleviating the memory bottleneck. Our tool produces RTL code for reuse buffers and the loop controller, and preprocesses C code of the loop body for the generation of a datapath using an existing HLS tool. Experiments reveal that the performance of designs generated using our automated flow with unoptimized source code matches that of traditional HLS using hand-optimized code.

The main contributions of this work are:

- An automated method which harnesses the polyhedral representation of the source program to discover data reuse in an algorithm.
- A method for efficiently exploiting data reuse using data reuse buffers, alleviating the memory bottleneck. The use of the polyhedral representation leads to an elegant design with linear memory access patterns.
- An automated design flow to implement said reuse buffers in hardware in the form of an RTL design, together with a controller that synchronizes memory accesses, the reuse buffer and the datapath.
II. THE POLYHEDRAL MODEL

The polyhedral model is an instrument to represent the execution of a computer program in a geometric way. Its computational simplicity makes it very suitable for automatic optimization in compilers [4], [5]. Each statement of the program is characterized by its iteration domain, the access functions of written and read data, and its schedule. Paramount for our work is that while access functions in the polyhedral model express array indices as functions of loop variables and parameters, they also map the points of the iteration domain to array elements, i.e. they define a transformation from the iteration domain to the data domain. These transformations are affine projections. The access functions can be split into two parts: the loop-variable-dependent part (called the motion vector) and the constant part (called the offset).

III. POLYHEDRAL REUSE BUFFER DESIGN FLOW

Our design flow is depicted in Fig. 1. Starting from a loop nest, we analyze array references for data reuse (section III-A). We organize array references that share data into reuse chains (section III-B), which will be mapped onto a chain of FIFO buffers. For each reuse chain we calculate the fetch domain (section III-C) and the buffer size (section III-D). Finally, an RTL design is generated (section III-E). For polyhedral calculations, we use Jolylib [6] and the Barvinok library [7].

We illustrate our flow using Sobel edge detection, an image processing algorithm that detects edges in images from the horizontal and vertical brightness gradients in a 3x3 window. Pseudocode is shown in Fig. 2. From each 3x3 window, 8 pixels are used in the calculation. Conversely, pixel values are used 8 times (or reused 7 times) in the calculations. Array references and corresponding access functions are shown in columns 1 and 2 of table I. Rows in the access functions represent index dimensions. The left two columns of the matrix represent the motion vector, the right column the offsets.

Figure 1. Polyhedral reuse buffer design flow

    unsigned char pixel_in[cols][rows]
    for (r : 1..rows-1)
      for (c : 1..cols-1)
        gradX = pixel_in[c-1,r-1] + 2*pixel_in[c-1,r] + pixel_in[c-1,r+1]
              - pixel_in[c+1,r-1] - 2*pixel_in[c+1,r] - pixel_in[c+1,r+1]
        gradY = pixel_in[c-1,r-1] + 2*pixel_in[c,r-1] + pixel_in[c+1,r-1]
              - pixel_in[c-1,r+1] - 2*pixel_in[c,r+1] - pixel_in[c+1,r+1]
        grad = abs(gradX) + abs(gradY)
        if (grad > 255) grad = 255
        pixel_out[c,r] = 255 - grad

Figure 2. Sobel edge detection: pseudocode

Our method works on a loop nest with references to array data. For the polyhedral representation to be applicable, loop bounds and array indices need to be affine expressions of loop variables and parameters. Benchmarks studied in [8] show that these categories comprise between 83% and 100% of the array indices. Support for additional categories is future work.

A. Reuse analysis

Reuse of array elements may appear in 3 different ways (a small illustration follows the list):

1) When two array references in the loop body have the same access function. In our example, this is the case for, among others, pixel_in[c-1,r-1], which occurs twice.
2) When an array subscript is invariant for a certain combination of loop variables. E.g. A[i-j] will access the same element of A whenever i - j attains the same value. A special case is when an access function is independent of a loop variable.
3) When two different access functions attain the same value for different combinations of loop variables. E.g. A[i+1] and A[i] will access the same array element in iterations i = n and i = n+1, respectively.
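To make the notation concrete: an access function is a matrix (the motion vector part) applied to the loop-variable vector, plus a constant offset. The following sketch illustrates the idea in Python with numpy; it is an illustration only, not the authors' tooling (which uses Jolylib and the Barvinok library).

```python
import numpy as np

# Access function for pixel_in[c-1][r-1] in the Sobel loop nest:
F = np.array([[1, 0],
              [0, 1]])        # motion vector: indices move together with (c, r)
offset = np.array([-1, -1])   # constant part of the access function


def access(iteration):
    """Map a point of the iteration domain to the array element it touches."""
    return F @ np.asarray(iteration) + offset


print(access((5, 3)))  # iteration (c=5, r=3) reads pixel_in[4][2]

# Because pixel_in[c+1][r+1] has the same motion vector but offset (+1, +1),
# the element it fetches at one iteration is read again by pixel_in[c-1][r-1]
# two iterations later in each index dimension: case 3 reuse.
```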
This work focuses on cases 1 and 3, i.e. reuse between different array references. At this stage, support is limited to rectangular data domains and array indices that depend on one loop variable only.

The access functions define a transformation from the iteration domain to the array elements. For each array reference, we calculate its data domain by transforming the iteration domain of the statement in which the array reference occurs with the transformation defined by the access function. The intersection of two such data domains is the set of array elements that are accessed by both array references. In other words, reuse occurs if the data domains of different array references overlap.

Table I. Access functions, data domains and reuse distances

<table>
<thead>
<tr>
<th>Index expression</th>
<th>Access function</th>
<th>Data domain</th>
<th>Reuse buffer size</th>
</tr>
</thead>
<tbody>
<tr>
<td>pixel_in[c-1][r-1]</td>
<td>1 0 -1</td>
<td>1 ≤ c ≤ cols - 2</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>0 1 -1</td>
<td>1 ≤ r ≤ rows - 2</td>
<td>1</td>
</tr>
<tr>
<td>pixel_in[c][r-1]</td>
<td>1 0 0</td>
<td>2 ≤ c ≤ cols - 1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>0 1 -1</td>
<td>1 ≤ r ≤ rows - 2</td>
<td>1</td>
</tr>
<tr>
<td>pixel_in[c+1][r-1]</td>
<td>1 0 1</td>
<td>3 ≤ c ≤ cols</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>0 1 -1</td>
<td>1 ≤ r ≤ rows - 2</td>
<td>2</td>
</tr>
<tr>
<td>pixel_in[c][r]</td>
<td>1 0 -1</td>
<td>1 ≤ c ≤ cols - 2</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>0 1 0</td>
<td>2 ≤ r ≤ rows - 1</td>
<td>2</td>
</tr>
<tr>
<td>pixel_in[c+1][r]</td>
<td>1 0 1</td>
<td>3 ≤ c ≤ cols</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>0 1 1</td>
<td>3 ≤ r ≤ rows</td>
<td>1</td>
</tr>
<tr>
<td>pixel_in[c][r+1]</td>
<td>1 0 0</td>
<td>2 ≤ c ≤ cols - 1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>0 1 1</td>
<td>3 ≤ r ≤ rows</td>
<td>1</td>
</tr>
<tr>
<td>pixel_in[c+1][r+1]</td>
<td>1 0 1</td>
<td>3 ≤ c ≤ cols</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>0 1 1</td>
<td>3 ≤ r ≤ rows</td>
<td>1</td>
</tr>
</tbody>
</table>

The set of references to an array can be partitioned into subsets that have (fully or partly) overlapping data domains. We call these subsets reuse sets. The data domain of a reuse set is the union of the data domains of the array references in the reuse set. The third column of table I shows the data domain of each array reference in the Sobel edge example. In this case, all array accesses form a single reuse set.

B. Reuse chain

We now describe how to order these reuse sets into reuse chains. This ordering is based on the access functions. At this stage, our work is limited to reuse sets of array references that have the same motion vector, corresponding to sliding-window access patterns; other patterns can be added (future work).

We order the array references into a reuse chain according to the time (iteration) in which they access the array elements (see the sketch below). The array reference at the head of the reuse chain is the first one to access array elements. If it is a read, the array element has to be fetched from memory. The array element is then passed to the next references in the reuse chain, where it is accessed in a later iteration, until at the end of the chain the element is no longer used. If there is a write operation in the chain, the array element needs to be written back to memory after the last write access in the chain.
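For references that share a motion vector, the access time of a given element is determined by the offsets alone, so the chain order can be computed by sorting offsets against the scan order. A small Python sketch of this idea for the Sobel window (an illustration, not the paper's Jolylib/Barvinok-based implementation):

```python
# Offsets (dc, dr) of the pixel_in references in the 3x3 Sobel window.
refs = {
    "pixel_in[c-1][r-1]": (-1, -1), "pixel_in[c][r-1]": (0, -1),
    "pixel_in[c+1][r-1]": (1, -1),  "pixel_in[c-1][r]": (-1, 0),
    "pixel_in[c+1][r]": (1, 0),     "pixel_in[c-1][r+1]": (-1, 1),
    "pixel_in[c][r+1]": (0, 1),     "pixel_in[c+1][r+1]": (1, 1),
}

# A reference with offset (dc, dr) touches element (x, y) at iteration
# (c, r) = (x - dc, y - dr); with r as the outer loop, larger (dr, dc)
# means an earlier access. Sort head (earliest) to tail (latest).
chain = sorted(refs, key=lambda k: (-refs[k][1], -refs[k][0]))
print("head:", chain[0])   # pixel_in[c+1][r+1], first to access an element
print("tail:", chain[-1])  # pixel_in[c-1][r-1], last to access it
```

This reproduces the chain head reported for the Sobel example, pixel_in[c+1][r+1].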
For the Sobel edge detection example, the rows of table I are ordered in reuse chain order from tail to head.

Reuse chains can be used to create reuse buffers. Between read accesses, or between a write and successive reads, the reuse buffer is a FIFO. Between any operation and a successive write, no buffering is needed, as the existing data will be overwritten. If the head of the reuse chain is a read access, the array element needs to be fetched from memory.

    for (I in extended iteration domain) {
      if (I in fetch domain)
        FETCH_ARRAY_DATA_INTO_REUSE_BUFFER(I);
      if (I in original iteration domain)
        EXEC_LOOP_BODY_WITH_REUSE_BUFFER(I);
      if (I in store domain)
        STORE_ARRAY_DATA_INTO_MAIN_MEMORY(I);
    }

Figure 3. Code fragment with reuse buffers

C. Fetch, execute and store domains

Using a reuse buffer, the loop nest looks as in Fig. 3: new data is fetched into the reuse buffer, the loop body is executed with data in the reuse buffer, and data is stored back into memory. Generally, only one new array element at the head of the reuse buffer needs to be fetched for each iteration. The rest is taken from the reuse buffer. Additional data fetches are needed to fetch all necessary data, e.g. to pre-fill the reuse buffer before the first execution of the loop body. We solve this by extending the iteration domain with fetch-only iterations. We call the iteration domain of all fetch operations the fetch domain. All fetches use the same array indices, namely those of the head of the reuse chain.

Now we calculate the fetch domain. The data domain of an array reference is the transformation of its iteration domain by the access function. Inverting these transformations is trivial as they are bijective. Transforming the data domain of a reuse chain back to the iteration domain, a new iteration domain is obtained. Choosing the inverse access function of the head of the reuse chain for the back transformation, we obtain the fetch domain. Similarly, the store domain can be found using the array reference at the end of the reuse chain. With the original iteration domain of the loop and the fetch and store domains, we can build a new loop nest as in Fig. 3. The extended iteration domain is the union of the original iteration domain of the loop and the fetch and store domains.

In Sobel edge detection, we find the data domain of the reuse chain as the union of the data domains of all accesses of array pixel_in (see table I), i.e. \( 1 \leq i_1 \leq \text{cols},\ 1 \leq i_2 \leq \text{rows} \). The head of the reuse chain is \( \text{pixel\_in}[c+1][r+1] \). The fetch domain is the data domain transformed back by the inverse access function of the reuse chain head: \( 0 \leq c \leq \text{cols} - 1,\ 0 \leq r \leq \text{rows} - 1 \).
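The effect of the fetch/execute/store structure of Fig. 3 can be demonstrated with a tiny simulation. The following 1-D sketch is invented for illustration (it is not the generated RTL): two references A[i] and A[i+1] share a FIFO, and every array element is fetched from memory exactly once.

```python
from collections import deque

A = [10, 20, 30, 40, 50]   # array in "main memory"
fetches = []               # record of memory reads
fifo = deque()             # reuse buffer between head A[i+1] and tail A[i]

# The extended iteration domain adds a fetch-only iteration (i = -1)
# that pre-fills the buffer before the first execution of the loop body.
for i in range(-1, len(A) - 1):
    # Fetch domain: the head reference A[i+1] reads each element once.
    if 0 <= i + 1 < len(A):
        fetches.append(i + 1)
        fifo.append(A[i + 1])
    # Execute domain: original iterations use buffered data only.
    if i >= 0:
        a_next = fifo[-1]        # value of A[i+1], fetched this iteration
        a_curr = fifo.popleft()  # value of A[i], fetched one iteration ago
        print(f"i={i}: A[i]={a_curr}, A[i+1]={a_next}")

assert fetches == [0, 1, 2, 3, 4]  # each element fetched exactly once
```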
D. Reuse buffer size

For each pair of successive array references in the reuse chain, we calculate how many array elements the data domain contains between them. Consider a point of the iteration domain representing a present iteration. It is trivial to split up the iteration space into present (the chosen point), past and future. Fig. 4a shows a graphical interpretation. Consider two successive array references of the reuse chain, A[index1] and A[index2], with the latter accessing data from the former. The reuse buffer needs to store the array elements that are accessed after the former and before the latter array access.

Transforming the past iteration space with the first access function and intersecting the result with the data domain of the reuse chain, we find the data points that have been accessed in the past by the first array reference (Fig. 4b). Similarly, transforming the future iteration domain with the second access function leads to the data points that will be accessed in the future by the second array reference (Fig. 4c). The intersection (Fig. 4d) is the set of data points between both array references. The reuse buffer needs to store one additional data element, namely the present element. Using this method, the buffer sizes for Sobel edge detection are as shown in column 4 of table I.

In case of a long reuse distance, the size of the reuse buffer may exceed available on-chip resources. In such a case, the excessively long buffer should be left out, splitting the reuse chain in two separate chains and requiring an additional fetch from memory.

E. Building the RTL design

With the elements from the previous sections, we automatically generate an RTL design that exploits the data reuse. The RTL design consists of 4 parts, as in Fig. 5. The datapath implements the statements of the loop body. It gets array data from and writes array data back to the reuse buffer(s). The loop controller controls the execution of the loop nest, firing datapath and reuse buffer operations and calculating loop variables and array indices. The memory interface handles the communication between the memory and the reuse buffer(s).

The datapath can be generated with any HLS tool. To this end, our tool rewrites the loop body, replacing array references with variables that are the ports of the reuse buffer, as shown in Fig. 6. Generating the reuse buffer from the reuse chain and reuse buffer sizes is straightforward. Between each pair of accesses, a FIFO is instantiated with the appropriate length. So far our tool generates the RTL code of reuse buffers with only read operations. The loop controller generator builds on [9], with extensions for array index calculation and firing of memory operations. The memory interface is a generic piece of RTL that asserts read / write signals as required. The VHDL code for the top level was written by hand. Correct operation of the reuse buffer was verified in simulation.

IV. EXPERIMENTAL RESULTS

We compare our approach to two alternatives that both use HLS to synthesize the Sobel edge detector: one using the same unoptimized code as with our tool, and one in which we have hand-optimized the source code so that HLS would produce a reuse buffer. Calypto's Catapult C was used in all cases. The image size was 100 × 100 pixels. The bandwidth of the input and output image buffers was 1 pixel per clock cycle, i.e. at least 10,000 cycles are needed to transfer all data. RTL synthesis was run for area estimation, targeting a Xilinx Virtex-5 FPGA. The results are in Table II. The area is given in lookup table, flip-flop and RAM counts.

Without pipelining, our generated circuit greatly outperforms the circuit generated using HLS from the same source code in terms of latency, and is slightly better than the one with HLS and hand-optimized code. The area is in the same range for all circuits. The latency is still considerably worse than the theoretical optimum of slightly more than 10,000 cycles. With pipelining, which our tool doesn't do yet, HLS gets rather close to this figure. From these experiments, we conclude that our automated method performs equally well as manual optimization combined with traditional HLS.
Adding pipelining to our method will further improve latency.

V. RELATED WORK

ROCCC is one of the few HLS tools that does memory access optimization. ROCCC introduces the concept of smart buffers [10] for input data reuse. Unfortunately, ROCCC has very stringent requirements on the input C code, and the generated RTL doesn't scale well with the reuse distance [3].

Two papers discuss the generation of an application-specific memory architecture similar to ours. Ref. [11] presents methods to optimize local data storage and transfers to main memory. The authors tackle both the problem of optimizing the loop nest for the available memory resources and the generation of fitting cyclic reuse buffers through HLS. Their method for reuse buffer length calculation is more complex than ours. In [12], a method is presented to use on-chip buffers for data reuse. The authors use the polyhedral model as well as the transformation aspect of access functions. However, they don't use the reverse transformation as we do, resulting in a more complex algorithm for code generation. Their work is also limited to data reuse in consecutive iterations, which is not a limitation of our work. In both papers, the reuse buffers are introduced in C, and RTL generation is left to the HLS tool. This makes the integration of the reuse buffer and datapath easier, at the expense of giving the designer less control over the reuse buffer hardware design.

VI. CONCLUSION

In this paper we have presented a method to automate the generation of data reuse buffers for HLS. It uses the polyhedral model and, more specifically, the fact that array access functions are affine transformations between the iteration and the data space. This leads to a powerful method to analyze potential data reuse, as well as an elegant means of streamlining data fetching, processing and storing in a loop nest. Though not fully integrated yet, our toolflow is able to generate all major parts for building a functional circuit. Our automated method and flow produce circuits that perform equally well as circuits generated with HLS from manually optimized C code.

REFERENCES
{"Source-Url": "https://www.date-conference.com/files/proceedings/2014/pdffiles/10.7_5_ip5-7.pdf", "len_cl100k_base": 4594, "olmocr-version": "0.1.50", "pdf-total-pages": 4, "total-fallback-pages": 0, "total-input-tokens": 14730, "total-output-tokens": 5480, "length": "2e12", "weborganizer": {"__label__adult": 0.0008597373962402344, "__label__art_design": 0.001003265380859375, "__label__crime_law": 0.0007162094116210938, "__label__education_jobs": 0.0007233619689941406, "__label__entertainment": 0.0001933574676513672, "__label__fashion_beauty": 0.0004067420959472656, "__label__finance_business": 0.0004138946533203125, "__label__food_dining": 0.0006818771362304688, "__label__games": 0.0010976791381835938, "__label__hardware": 0.034332275390625, "__label__health": 0.0012636184692382812, "__label__history": 0.0005960464477539062, "__label__home_hobbies": 0.0003063678741455078, "__label__industrial": 0.0022792816162109375, "__label__literature": 0.0002846717834472656, "__label__politics": 0.0005898475646972656, "__label__religion": 0.0012369155883789062, "__label__science_tech": 0.38134765625, "__label__social_life": 0.00010567903518676758, "__label__software": 0.0068359375, "__label__software_dev": 0.5615234375, "__label__sports_fitness": 0.0006814002990722656, "__label__transportation": 0.00209808349609375, "__label__travel": 0.0003764629364013672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21916, 0.03112]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21916, 0.55132]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21916, 0.85643]], "google_gemma-3-12b-it_contains_pii": [[0, 5538, false], [5538, 10805, null], [10805, 15828, null], [15828, 21916, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5538, true], [5538, 10805, null], [10805, 15828, null], [15828, 21916, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21916, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21916, null]], "pdf_page_numbers": [[0, 5538, 1], [5538, 10805, 2], [10805, 15828, 3], [15828, 21916, 4]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21916, 0.14159]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
e1677ad57f0e328e07a4a2f586989344177e625f
Graphs: Breadth First Search

**Graph Traversals:** There are a number of approaches used for solving problems on graphs. One of the most important approaches is based on the notion of systematically visiting all the vertices and edges of a graph. The reason for this is that these traversals impose a type of tree structure (or generally a forest) on the graph, and trees are usually much easier to reason about than general graphs.

**Breadth-first search:** Given a graph $G = (V, E)$, breadth-first search starts at some source vertex $s$ and "discovers" which vertices are reachable from $s$. Define the distance between a vertex $v$ and $s$ to be the minimum number of edges on a path from $s$ to $v$. Breadth-first search discovers vertices in increasing order of distance, and hence can be used as an algorithm for computing shortest paths. At any given time there is a "frontier" of vertices that have been discovered, but not yet processed. Breadth-first search is named because it visits vertices across the entire "breadth" of this frontier.

Initially all vertices (except the source) are colored white, meaning that they are undiscovered. When a vertex has first been discovered, it is colored gray (and is part of the frontier). When a gray vertex is processed, it becomes black. The search makes use of a queue, a first-in first-out list, where elements are removed in the same order they are inserted. The first item in the queue (the next to be removed) is called the head of the queue.

We will also maintain arrays $\text{color}[u]$, which holds the color of vertex $u$ (either white, gray or black), $\text{pred}[u]$, which points to the predecessor of $u$ (i.e. the vertex that first discovered $u$), and $d[u]$, the distance from $s$ to $u$. Only the color is really needed for the search (in fact it is only necessary to know whether a node is nonwhite). We include all this information because some applications of BFS use this additional information.

Observe that the predecessor pointers of the BFS search define an inverted tree (an acyclic directed graph in which the source is the root, and every other node has a unique path to the root). If we reverse these edges we get a rooted unordered tree called a BFS tree for $G$. (Note that there are many potential BFS trees for a given graph, depending on where the search starts, and in what order vertices are placed on the queue.) These edges of $G$ are called tree edges and the remaining edges of $G$ are called cross edges. It is not hard to prove that if $G$ is an undirected graph, then cross edges always go between two nodes that are at most one level apart in the BFS tree. (Can you see why this must be true?)

Below is a sketch of a proof that on termination, $d[v]$ is equal to the distance from $s$ to $v$. (See CLRS for a detailed proof.)

**Theorem:** Let $\delta(s, v)$ denote the length (number of edges) of the shortest path from $s$ to $v$. Then, on termination of the BFS procedure, $d[v] = \delta(s, v)$.

**Proof:** (Sketch) The proof is by induction on the length of the shortest path. Let $u$ be the predecessor of $v$ on some shortest path from $s$ to $v$, and among all such vertices the first to be processed by the BFS. Thus, $\delta(s, v) = \delta(s, u) + 1$. When $u$ is processed, we have (by induction) $d[u] = \delta(s, u)$. Since $v$ is a neighbor of $u$, we set $d[v] = d[u] + 1$.
Thus we have

$$d[v] = d[u] + 1 = \delta(s, u) + 1 = \delta(s, v),$$

as desired.

    BFS(G,s) {
      for each u in V {            // initialization
        color[u] = white
        d[u] = infinity
        pred[u] = null
      }
      color[s] = gray              // initialize source s
      d[s] = 0
      Q = {s}                      // put s in the queue
      while (Q is nonempty) {
        u = Q.Dequeue()            // u is the next to visit
        for each v in Adj[u] {
          if (color[v] == white) { // if neighbor v undiscovered
            color[v] = gray        // ...mark it discovered
            d[v] = d[u]+1          // ...set its distance
            pred[v] = u            // ...and its predecessor
            Q.Enqueue(v)           // ...put it in the queue
          }
        }
        color[u] = black           // we are done with u
      }
    }

Fig. 1: Breadth-first search: Example.

**Analysis:** The running time analysis of BFS is similar to the running time analysis of many graph traversal algorithms. As in CLRS, we write \( V = |V| \) and \( E = |E| \). Observe that the initialization portion requires \( \Theta(V) \) time. The real meat is in the traversal loop. Since we never visit a vertex twice, the number of times we go through the while loop is at most \( V \) (exactly \( V \) assuming each vertex is reachable from the source). The number of iterations through the inner for loop is proportional to \( \text{deg}(u) + 1 \). (The +1 is because even if \( \text{deg}(u) = 0 \), we need to spend a constant amount of time to set up the loop.) Summing up over all vertices we have the running time

\[ T(V) = V + \sum_{u \in V} (\text{deg}(u) + 1) = V + \sum_{u \in V} \text{deg}(u) + V = 2V + 2E \in \Theta(V + E). \]

The analysis is essentially the same for directed graphs.
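For reference, the pseudocode above transcribes directly into runnable Python; a sketch whose names follow the notes:

```python
from collections import deque

WHITE, GRAY, BLACK = 0, 1, 2


def bfs(adj, s):
    """adj: dict mapping each vertex to a list of neighbors; s: source vertex."""
    color = {u: WHITE for u in adj}
    d = {u: float("inf") for u in adj}
    pred = {u: None for u in adj}
    color[s], d[s] = GRAY, 0
    q = deque([s])                      # put s in the queue
    while q:
        u = q.popleft()                 # u is the next vertex to visit
        for v in adj[u]:
            if color[v] == WHITE:       # neighbor v is undiscovered
                color[v] = GRAY
                d[v] = d[u] + 1
                pred[v] = u
                q.append(v)
        color[u] = BLACK                # done with u
    return d, pred


adj = {"s": ["a", "b"], "a": ["s", "c"], "b": ["s", "c"], "c": ["a", "b"]}
print(bfs(adj, "s")[0])  # {'s': 0, 'a': 1, 'b': 1, 'c': 2}
```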
**Depth-First Search**

**Depth-First Search:** The next traversal algorithm that we will study is called depth-first search, and it has the nice property that nontree edges have a good deal of mathematical structure.

Consider the problem of searching a castle for treasure. To solve it you might use the following strategy. As you enter a room of the castle, paint some graffiti on the wall to remind yourself that you were already there. Successively travel from room to room as long as you come to a place you haven't already been. When you return to the same room, try a different door leaving the room (assuming it goes somewhere you haven't already been). When all doors have been tried in a given room, then backtrack. Notice that this algorithm is described recursively. In particular, when you enter a new room, you are beginning a new search. This is the general idea behind depth-first search.

**Depth-First Search Algorithm:** We assume we are given a directed graph \( G = (V, E) \). The same algorithm works for undirected graphs (but the resulting structure imposed on the graph is different). We use four auxiliary arrays. As before we maintain a color for each vertex: white means undiscovered, gray means discovered but not finished processing, and black means finished. As before we also store predecessor pointers, pointing back to the vertex that discovered a given vertex. We will also associate two numbers with each vertex. These are time stamps. When we first discover a vertex \( u \) we store a counter in \( d[u] \), and when we are finished processing a vertex we store a counter in \( f[u] \). The purpose of the time stamps will be explained later. (Note: Do not confuse the discovery time \( d[u] \) with the distance \( d[u] \) from BFS.) The algorithm is shown in the code block below, and illustrated in Fig. 2. As with BFS, DFS induces a tree structure. We will discuss this tree structure further below.

    DFS(G) {                        // main program
      for each u in V {             // initialization
        color[u] = white;
        pred[u] = null;
      }
      time = 0;
      for each u in V
        if (color[u] == white)      // found an undiscovered vertex
          DFSVisit(u);              // start a new search here
    }

    DFSVisit(u) {                   // start a search at u
      color[u] = gray;              // mark u visited
      d[u] = ++time;
      for each v in Adj(u) do
        if (color[v] == white) {    // if neighbor v undiscovered
          pred[v] = u;              // ...set predecessor pointer
          DFSVisit(v);              // ...visit v
        }
      color[u] = black;             // we're done with u
      f[u] = ++time;
    }

Fig. 2: Depth-First search tree.

**Analysis:** The running time of DFS is \( \Theta(V + E) \). This is somewhat harder to see than the BFS analysis, because the recursive nature of the algorithm obscures things. Normally, recurrences are good ways to analyze recursively defined algorithms, but it is not true here, because there is no good notion of "size" that we can attach to each recursive call.

First observe that if we ignore the time spent in the recursive calls, the main DFS procedure runs in \( O(V) \) time. Observe that each vertex is visited exactly once in the search, and hence the call \( \text{DFSVisit()} \) is made exactly once for each vertex. We can just analyze each one individually and add up their running times. Ignoring the time spent in the recursive calls, we can see that each vertex \( u \) can be processed in \( O(1 + \text{outdeg}(u)) \) time. Thus the total time used in the procedure is

\[ T(V) = V + \sum_{u \in V} (\text{outdeg}(u) + 1) = V + \sum_{u \in V} \text{outdeg}(u) + V = 2V + E \in \Theta(V + E). \]

A similar analysis holds if we consider DFS for undirected graphs.

**Tree structure:** DFS naturally imposes a tree structure (actually a collection of trees, or a forest) on the structure of the graph. This is just the recursion tree, where the edge \((u, v)\) arises when, while processing vertex \(u\), we call \(\text{DFSVisit}(v)\) for some neighbor \(v\). For directed graphs the other edges of the graph can be classified as follows:

Back edges: \((u, v)\) where \(v\) is a (not necessarily proper) ancestor of \(u\) in the tree. (Thus, a self-loop is considered to be a back edge.)

Forward edges: \((u, v)\) where \(v\) is a proper descendant of \(u\) in the tree.

Cross edges: \((u, v)\) where \(u\) and \(v\) are not ancestors or descendants of one another (in fact, the edge may go between different trees of the forest).

It is not difficult to classify the edges of a DFS tree by analyzing the values of the colors of the vertices and/or considering the time stamps. This is left as an exercise.

With undirected graphs, there are some important differences in the structure of the DFS tree. First, there is really no distinction between forward and back edges. So, by convention, they are all called back edges. Furthermore, it can be shown that there can be no cross edges. (Can you see why not?)

**Time-stamp structure:** There is also a nice structure to the time stamps. In CLRS this is referred to as the parenthesis structure. In particular, the following are easy to observe.

**Lemma:** (Parenthesis Lemma) Given a digraph \(G = (V, E)\), any DFS tree for \(G\), and any two vertices \(u, v \in V\):

- \(u\) is a descendant of \(v\) if and only if \([d[u], f[u]] \subseteq [d[v], f[v]]\).
- \(u\) is an ancestor of \(v\) if and only if \([d[u], f[u]] \supseteq [d[v], f[v]]\).
- \(u\) is unrelated to \(v\) if and only if \([d[u], f[u]]\) and \([d[v], f[v]]\) are disjoint.
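A runnable Python version of DFS with the d/f time stamps makes the parenthesis structure easy to inspect; a sketch in the spirit of the pseudocode above:

```python
WHITE, GRAY, BLACK = 0, 1, 2


def dfs(adj):
    """adj: dict mapping each vertex to a list of out-neighbors."""
    color = {u: WHITE for u in adj}
    pred = {u: None for u in adj}
    d, f = {}, {}              # discovery / finish time stamps
    time = [0]                 # boxed counter shared by the nested function

    def visit(u):
        color[u] = GRAY        # mark u discovered
        time[0] += 1
        d[u] = time[0]
        for v in adj[u]:
            if color[v] == WHITE:
                pred[v] = u
                visit(v)
        color[u] = BLACK       # done with u
        time[0] += 1
        f[u] = time[0]

    for u in adj:
        if color[u] == WHITE:  # start a new tree of the DFS forest
            visit(u)
    return d, f, pred


d, f, _ = dfs({"a": ["b", "c"], "b": ["c"], "c": []})
# c's interval [3, 4] nests inside b's [2, 5], which nests inside a's [1, 6].
print(d, f)  # {'a': 1, 'b': 2, 'c': 3} {'c': 4, 'b': 5, 'a': 6}
```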
**Cycles:** The time stamps given by DFS allow us to determine a number of things about a graph or digraph. For example, suppose you are given a graph or digraph and you run DFS. You can determine whether the graph contains any cycles very easily. We do this with the help of the following two lemmas.

**Lemma:** Given a digraph \(G = (V, E)\), consider any DFS forest of \(G\), and consider any edge \((u, v) \in E\). If this edge is a tree, forward, or cross edge, then \(f[u] > f[v]\). If the edge is a back edge, then \(f[u] \leq f[v]\).

**Proof:** For tree, forward, and back edges, the proof follows directly from the parenthesis lemma. (E.g. for a forward edge \((u, v)\), \(v\) is a descendant of \(u\), and so \(v\)'s start-finish interval is contained within \(u\)'s, implying that \(v\) has an earlier finish time.) For a cross edge \((u, v)\) we know that the two time intervals are disjoint. When we were processing \(u\), \(v\) was not white (otherwise \((u, v)\) would be a tree edge), implying that \(v\) was started before \(u\). Because the intervals are disjoint, \(v\) must have also finished before \(u\).

**Lemma:** Consider a digraph \( G = (V, E) \) and any DFS forest for \( G \). \( G \) has a cycle if and only if the DFS forest has a back edge.

**Proof:** (\( \Leftarrow \)) If there is a back edge \((u, v)\), then \( v \) is an ancestor of \( u \), and by following tree edges from \( v \) to \( u \) we get a cycle.

(\( \Rightarrow \)) We show the contrapositive. Suppose there are no back edges. By the lemma above, each of the remaining types of edges (tree, forward, and cross) has the property that it goes from a vertex with higher finishing time to a vertex with lower finishing time. Thus along any path, finish times decrease monotonically, implying there can be no cycle.

Beware: No back edges means no cycles. But you should not infer that there is some simple relationship between the number of back edges and the number of cycles. For example, a DFS tree may have only a single back edge, and there may be anywhere from one up to an exponential number of simple cycles in the graph. A similar theorem applies to undirected graphs, and is not hard to prove.
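The back-edge characterization turns directly into a cycle test: run DFS and report a cycle whenever an edge leads to a gray (still in progress) vertex. A minimal sketch:

```python
WHITE, GRAY, BLACK = 0, 1, 2


def has_cycle(adj):
    """Return True iff the digraph adj (vertex -> out-neighbors) has a cycle."""
    color = {u: WHITE for u in adj}

    def visit(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:    # back edge: v is a gray ancestor of u
                return True
            if color[v] == WHITE and visit(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and visit(u) for u in adj)


print(has_cycle({"a": ["b"], "b": ["c"], "c": ["a"]}))  # True  (a -> b -> c -> a)
print(has_cycle({"a": ["b"], "b": ["c"], "c": []}))     # False (a DAG)
```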
```plaintext
TopSort(G) {
    for each (u in V) color[u] = white;     // initialize
    L = new linked_list;                    // L is an empty linked list
    for each (u in V)
        if (color[u] == white)
            TopVisit(u);
    return L;                               // L gives final order
}

TopVisit(u) {                               // start a search at u
    color[u] = gray;                        // mark u visited
    for each (v in Adj(u))
        if (color[v] == white)
            TopVisit(v);
    Append u to the front of L;             // on finishing u add to list
}
```

This is a typical example of how DFS is used in applications. Observe that the structure is essentially the same as the basic DFS procedure, but we include only the elements of DFS that are needed for this application.

As an example we consider the DAG presented in CLRS for Professor Bumstead's order of dressing. Bumstead lists the precedences in the order in which he puts on his clothes in the morning. We do our depth-first search in a different order from the one given in CLRS, and so we get a different final ordering. However, both orderings are legitimate, given the precedence constraints. As with depth-first search, the running time of topological sort is $\Theta(V + E)$.

**Final order: socks, shirt, tie, shorts, pants, shoes, belt, jacket**

**Fig. 4: Topological sort.**

**Strong Components:** Next we consider a very important connectivity problem for digraphs. When digraphs are used in communication and transportation networks, people want to know that their networks are complete in the sense that from any location it is possible to reach any other location in the digraph. A digraph is strongly connected if for every pair of vertices, $u, v \in V$, $u$ can reach $v$ and vice versa.

We would like to write an algorithm that determines whether a digraph is strongly connected. In fact, we will solve a generalization of this problem, of computing the strongly connected components (or strong components for short) of a digraph. In particular, we partition the vertices of the digraph into subsets such that the induced subgraph of each subset is strongly connected. (These subsets should be as large as possible, and still have this property.) More formally, we say that two vertices $u$ and $v$ are mutually reachable if $u$ can reach $v$ and vice versa. It is easy to see that mutual reachability is an equivalence relation. This equivalence relation partitions the vertices into equivalence classes of mutually reachable vertices, and these are the strong components.

Observe that if we merge the vertices in each strong component into a single supervertex, and join two supervertices $A$ and $B$ if and only if there are vertices $u \in A$ and $v \in B$ such that $(u, v) \in E$, then the resulting digraph, called the component digraph, is necessarily acyclic. (Can you see why?) Thus, we may accurately refer to it as the component DAG.

**Fig. 5: Strong Components.**

The algorithm that we will present is an algorithm designer's "dream" (and an algorithm student's nightmare). It is amazingly simple and efficient, but it is so clever that it is very difficult to even see how it works. We will give some of the intuition that leads to the algorithm, but will not prove the algorithm's correctness formally. See CLRS for a formal proof.

**Strong Components and DFS:** By way of motivation, consider the DFS of the digraph shown in the following figure (left). By definition of DFS, when you enter a strong component, every vertex in the component is reachable, so the DFS does not terminate until all the vertices in the component have been visited.
Thus all the vertices in a strong component must appear in the same tree of the DFS forest. Observe that in the figure each strong component is just a subtree of the DFS forest. Is this always true for any DFS? Unfortunately the answer is no. In general, many strong components may appear in the same DFS tree. (See the DFS on the right for a counterexample.) Does there always exist a way to order the DFS so that it is true? Fortunately, the answer is yes.

Suppose that you knew the component DAG in advance. (This is ridiculous, because you would need to know the strong components, and that is the problem we are trying to solve. But humor me for a moment.) Further suppose that you computed a reversed topological order on the component digraph. That is, if \((u, v)\) is an edge in the component digraph, then \(v\) comes before \(u\) in this reversed order (not after, as it would in a normal topological ordering). Now, run DFS, but every time you need a new vertex to start the search from, select the next available vertex according to this reverse topological order of the component digraph.

Here is an informal justification. Clearly once the DFS starts within a given strong component, it must visit every vertex within the component (and possibly some others) before finishing. If we do not start in reverse topological order, then the search may "leak out" into other strong components, and put them in the same DFS tree. For example, in the figure below (right), when the search is started at vertex \(a\), not only does it visit its component with \(b\) and \(c\), but it also visits the other components as well. However, by visiting components in reverse topological order of the component tree, each search cannot "leak out" into other components, because the other components would already have been visited earlier in the search.

Fig. 6: Two depth-first searches.

This leaves us with the intuition that if we could somehow order the DFS so that it hits the strong components according to a reverse topological order, then we would have an easy algorithm for computing strong components. However, we do not know what the component DAG looks like. (After all, we are trying to solve the strong component problem in the first place.) The "trick" behind the strong component algorithm is that we can find an ordering of the vertices that has essentially the necessary property, without actually computing the component DAG.

The Plumber's Algorithm: I call this algorithm the plumber's algorithm (because it avoids leaks). Unfortunately it is quite difficult to understand why this algorithm works. I will present the algorithm, and refer you to CLRS for the complete proof. First recall that $G^R$ (what CLRS calls $G^T$) is the digraph with the same vertex set as $G$ but in which all edges have been reversed in direction. Given an adjacency list for $G$, it is possible to compute $G^R$ in $\Theta(V + E)$ time. (I'll leave this as an exercise; a sketch is given below.) Observe that the strongly connected components are not affected by reversing all the digraph's edges. If $u$ and $v$ are mutually reachable in $G$, then certainly this is still true in $G^R$. All that changes is that the component DAG is completely reversed.

The ordering trick is to order the vertices of $G$ according to their finish times in a DFS, and then visit the nodes of $G^R$ in decreasing order of finish times. All the steps of the algorithm are quite easy to implement, and all operate in $\Theta(V + E)$ time.
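For the reversal exercise just mentioned, here is a hedged sketch. The Graph and AdjNode types are minimal array-of-lists structures invented for illustration; they are not taken from the notes. Each vertex and each edge is touched once, giving the claimed \(\Theta(V + E)\) bound.

```c
#include <stdlib.h>

typedef struct AdjNode {
    int v;                      /* head of the edge */
    struct AdjNode *next;
} AdjNode;

typedef struct {
    int nverts;
    AdjNode **adj;              /* adj[u] = list of edges leaving u */
} Graph;

/* Build G^R: every edge (u,v) of g becomes (v,u) in the result. */
Graph *reverse_graph(const Graph *g)
{
    Graph *r = malloc(sizeof *r);
    r->nverts = g->nverts;
    r->adj = calloc(g->nverts, sizeof *r->adj);   /* all lists start empty */
    for (int u = 0; u < g->nverts; u++) {
        for (AdjNode *p = g->adj[u]; p != NULL; p = p->next) {
            AdjNode *n = malloc(sizeof *n);       /* (error checks omitted) */
            n->v = u;
            n->next = r->adj[p->v];               /* prepend (v,u) to r->adj[v] */
            r->adj[p->v] = n;
        }
    }
    return r;
}
```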
Here is the algorithm.

```c
StrongComp(G) {
    Run DFS(G), computing finish times f[u] for each vertex u;
    Compute R = Reverse(G), reversing all edges of G;
    Sort the vertices of R (by CountingSort) in decreasing order of f[u];
    Run DFS(R) using this order;
    Each DFS tree is a strong component;
}
```

Correctness: Why visit vertices in decreasing order of finish times? Why use the reversal digraph? It is difficult to justify these elements formally. Here is some intuition, though. Recall that the main intent is to visit the strong components in a reverse topological order. The question is how to order the vertices so that this is true. Recall from the topological sorting algorithm that in a DAG, finish times occur in reverse topological order (i.e., the first vertex in the topological order is the one with the highest finish time). So, if we wanted to visit the components in reverse topological order, this suggests that we should visit the vertices in increasing order of finish time, starting with the lowest finishing time.

This is a good starting idea, but it turns out that it doesn't work. The reason is that there are many vertices in each strong component, and they all have different finish times. For example, in the figure above observe that in the first DFS (on the left) the lowest finish time (of 4) is achieved by vertex \(e\), and its strong component is first, not last, in topological order.

It is tempting to give up in frustration at this point. But there is something to notice about the finish times. If we consider the maximum finish time in each component, then these are related to the topological order of the component DAG. In particular, given any strong component \(C\), define \(f(C)\) to be the maximum finish time among all vertices in this component:

\[ f(C) = \max_{u \in C} f[u]. \]

**Lemma:** Consider a digraph \( G = (V, E) \) and let \( C \) and \( C' \) be two distinct strong components. If there is an edge \((u, v)\) of \( G \) such that \( u \in C \) and \( v \in C' \), then \( f(C) > f(C') \).

See the book for a complete proof. Here is a quick sketch. If the DFS visits \( C \) first, then the DFS will leak into \( C' \) (along edge \((u, v)\) or some other edge), and will visit everything in \( C' \) before finally returning to \( C \). Thus, some vertex of \( C \) will finish later than every vertex of \( C' \). On the other hand, suppose that \( C' \) is visited first. Because there is an edge from \( C \) to \( C' \), we know from the definition of the component DAG that there cannot be a path from \( C' \) to \( C \). So \( C' \) will completely finish before we even start \( C \). Thus all the finish times of \( C \) will be larger than the finish times of \( C' \).

For example, in the previous figure, the maximum finish times for each component are 18 (for \( \{a, b, c\} \)), 17 (for \( \{d, e\} \)), and 12 (for \( \{f, g, h, i\} \)). The order \( 18, 17, 12 \) is a valid topological order for the component digraph.

This is a big help. It tells us that if we run DFS and compute finish times, and then run a new DFS in decreasing order of finish times, we will visit the components in topological order. The problem is that this is not what we wanted. We wanted a reverse topological order for the component DAG. So, the final trick is to reverse the digraph, by forming \( G^R \). This does not change the strong components, but it reverses the edges of the component graph, and so reverses the topological order, which is exactly what we wanted.
In conclusion we have:

**Theorem:** Consider a digraph \( G \) on which DFS has been run. Sort the vertices by decreasing order of finish time. Then a DFS of the reversed digraph \( G^R \) visits the strong components according to a reversed topological order of the component DAG of \( G^R \).
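In other words, each tree of the second search spans exactly one strong component. To report the components, it is enough to label vertices during that second DFS: every vertex reached from the same root gets the same label. The sketch below is an assumed helper, not code from the notes; it reuses the Graph and AdjNode types from the reversal sketch and the same color convention as the classification sketch.

```c
enum Color { WHITE, GRAY, BLACK };   /* same convention as the classification sketch */

extern enum Color color[];           /* reset to WHITE before the second DFS */
extern int comp[];                   /* output: component label per vertex   */

/* Second-pass visit on G^R: everything reachable from the root of this
   tree (through white vertices) lies in the same strong component. */
void label_visit(const Graph *gr, int u, int label)
{
    color[u] = GRAY;
    comp[u] = label;
    for (AdjNode *p = gr->adj[u]; p != NULL; p = p->next)
        if (color[p->v] == WHITE)
            label_visit(gr, p->v, label);
    color[u] = BLACK;
}
```

The outer loop scans the vertices in decreasing order of \(f[u]\) and calls label_visit with a fresh label each time it finds a white vertex.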
{"Source-Url": "http://www.cs.umd.edu/class/fall2006/cmsc451/BFS-DFS.pdf", "len_cl100k_base": 6181, "olmocr-version": "0.1.51", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 30798, "total-output-tokens": 6786, "length": "2e12", "weborganizer": {"__label__adult": 0.0004949569702148438, "__label__art_design": 0.0004601478576660156, "__label__crime_law": 0.0006270408630371094, "__label__education_jobs": 0.0023956298828125, "__label__entertainment": 0.00017368793487548828, "__label__fashion_beauty": 0.00024962425231933594, "__label__finance_business": 0.00031685829162597656, "__label__food_dining": 0.0005950927734375, "__label__games": 0.0017719268798828125, "__label__hardware": 0.002132415771484375, "__label__health": 0.0011425018310546875, "__label__history": 0.0006961822509765625, "__label__home_hobbies": 0.0002703666687011719, "__label__industrial": 0.0008978843688964844, "__label__literature": 0.000720977783203125, "__label__politics": 0.0003654956817626953, "__label__religion": 0.0007596015930175781, "__label__science_tech": 0.271728515625, "__label__social_life": 0.00016355514526367188, "__label__software": 0.0092620849609375, "__label__software_dev": 0.7021484375, "__label__sports_fitness": 0.0006742477416992188, "__label__transportation": 0.0016689300537109375, "__label__travel": 0.0004010200500488281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25099, 0.00322]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25099, 0.72133]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25099, 0.92283]], "google_gemma-3-12b-it_contains_pii": [[0, 3471, false], [3471, 4204, null], [4204, 8145, null], [8145, 8770, null], [8770, 11725, null], [11725, 14837, null], [14837, 17188, null], [17188, 20212, null], [20212, 23025, null], [23025, 25099, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3471, true], [3471, 4204, null], [4204, 8145, null], [8145, 8770, null], [8770, 11725, null], [11725, 14837, null], [14837, 17188, null], [17188, 20212, null], [20212, 23025, null], [23025, 25099, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25099, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25099, null]], "pdf_page_numbers": [[0, 3471, 1], [3471, 4204, 2], [4204, 8145, 3], [8145, 8770, 4], [8770, 11725, 5], [11725, 14837, 6], [14837, 17188, 7], [17188, 20212, 8], [20212, 23025, 9], [23025, 25099, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25099, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
8b31c35004d1bdb3252b806570a0aa951571481f
[REMOVED]
{"Source-Url": "http://poseidon.csd.auth.gr/papers/PUBLISHED/CONFERENCE/pdf/Kovasznai2003_DEXA.pdf", "len_cl100k_base": 4886, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 25044, "total-output-tokens": 5598, "length": "2e12", "weborganizer": {"__label__adult": 0.0004506111145019531, "__label__art_design": 0.0008511543273925781, "__label__crime_law": 0.0004258155822753906, "__label__education_jobs": 0.0008969306945800781, "__label__entertainment": 0.0002188682556152344, "__label__fashion_beauty": 0.00018143653869628904, "__label__finance_business": 0.0002129077911376953, "__label__food_dining": 0.0003440380096435547, "__label__games": 0.000675201416015625, "__label__hardware": 0.0012044906616210938, "__label__health": 0.0006041526794433594, "__label__history": 0.000286102294921875, "__label__home_hobbies": 7.009506225585938e-05, "__label__industrial": 0.0004825592041015625, "__label__literature": 0.0007414817810058594, "__label__politics": 0.00031065940856933594, "__label__religion": 0.0005426406860351562, "__label__science_tech": 0.1014404296875, "__label__social_life": 0.00012576580047607422, "__label__software": 0.019744873046875, "__label__software_dev": 0.869140625, "__label__sports_fitness": 0.0002512931823730469, "__label__transportation": 0.0005125999450683594, "__label__travel": 0.00017631053924560547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23059, 0.01344]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23059, 0.67581]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23059, 0.8622]], "google_gemma-3-12b-it_contains_pii": [[0, 2338, false], [2338, 4306, null], [4306, 7338, null], [7338, 8998, null], [8998, 12353, null], [12353, 13022, null], [13022, 14574, null], [14574, 16777, null], [16777, 19106, null], [19106, 22071, null], [22071, 23059, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2338, true], [2338, 4306, null], [4306, 7338, null], [7338, 8998, null], [8998, 12353, null], [12353, 13022, null], [13022, 14574, null], [14574, 16777, null], [16777, 19106, null], [19106, 22071, null], [22071, 23059, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23059, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23059, null]], "pdf_page_numbers": [[0, 2338, 1], [2338, 4306, 2], [4306, 7338, 3], [7338, 8998, 4], [8998, 12353, 5], [12353, 13022, 6], [13022, 14574, 7], [14574, 16777, 8], [16777, 19106, 9], [19106, 22071, 10], [22071, 23059, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23059, 0.04348]]}
olmocr_science_pdfs
2024-12-05
2024-12-05
010d09ad62b2149a2536c3457912e88a4e2e451c
PART 4 Process Management: Scheduling, Context Switching, Process Suspension, Process Resumption, And Process Creation

Terminology
- The term *process management* has been used for decades to encompass the part of an operating system that manages concurrent execution, including both processes and threads within them.
- The term *thread management* is newer, but sometimes leads to confusion because it appears to exclude Processes.
- The best approach is to be aware of the controversy but not worry about it.

Location Of Scheduling In The Hierarchy

Concurrent Processing
- Unit of computation
- Abstraction of a processor
- Known only to operating system
- Not known by hardware

The Operating System View
- All computation must be done by some process
- No execution by the operating system
- No execution "outside" of a process
- Key idea
  - A process must be running at all times

Concurrency Models
• Many variations have been used
  – Job
  – Task
  – Thread
  – Process
• Differences in
  – Address space and sharing
  – Coordination and communication mechanisms
  – Longevity
  – Dynamic or static definition

Thread Of Execution
- Single "execution"
- Sometimes called a *lightweight process*
- Can share data (data and bss segments) with other threads
- Must have private stack segment for
  - Local variables
  - Procedure calls

Process Abstraction
- Written with uppercase "P" to distinguish from generic notion
- Address space in which multiple threads can execute
- One data segment per Process
- One bss segment per Process
- Multiple threads per Process
- Each thread
  - Bound to a single Process
  - Cannot move to another Process

Illustration Of Two Processes And Their Threads
- Threads within a Process share *text*, *data*, and *bss*
- No sharing between Processes
- Threads within a Process cannot share stacks

Terminology
- Distinction between *process* and *Process* can be confusing
- For this course, assume generic use ("process") unless
  - Used in context of specific OS
  - Speaker indicates otherwise

Maintaining Processes Or Threads
- Process or thread
  - OS abstraction
  - Unknown to hardware
  - Created dynamically
  - Pertinent information kept by OS
- OS keeps information in a central data structure
  - Called *process table* or *thread table*
  - Part of OS address space

Information Kept In A Process Table
• For each process
  – Unique *process identifier*
  – Owner (a user)
  – Scheduling priority
  – Location of code and data (stack)
  – Status of computation
  – Current program counter
  – Current values of registers

Information Kept In A Process Table (continued)
- If a Process contains multiple threads, keep for each thread
  - Owning Process
  - Thread's scheduling priority
  - Location of stack
  - Status of computation
  - Current program counter
  - Current values of registers

Xinu Model
- Simplest possible scheme
- Single-user system (no ownership)
- One global context
- One global address space
- No boundary between OS and applications
- Note: a Xinu "process" is technically a "thread"

## Example Items In A Xinu Process Table

<table>
<thead>
<tr> <th>Field</th> <th>Purpose</th> </tr>
</thead>
<tbody>
<tr> <td>prstate</td> <td>The current status of the process (e.g., whether the process is currently executing or waiting)</td> </tr>
<tr> <td>prprio</td> <td>The scheduling priority of the process</td> </tr>
<tr> <td>prstkptr</td> <td>The saved value of the process's stack pointer when the process is not executing</td> </tr>
<tr> <td>prstkbase</td> <td>The address of the base of the process's stack</td> </tr>
<tr> <td>prstklen</td> <td>A limit on the maximum size to which the process's stack can grow</td> </tr>
<tr> <td>prname</td> <td>A name assigned to the process that humans use to identify the process's purpose</td> </tr>
</tbody>
</table>
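As a rough illustration, the fields above could be declared as follows. This is a condensed sketch modeled on Xinu's struct procent, not the full declaration from process.h, and the typedefs stand in for those in Xinu's kernel.h; the NPROC and PNMLEN values are assumptions.

```c
typedef unsigned short uint16;   /* stand-ins for Xinu's kernel.h typedefs */
typedef short          pri16;
typedef unsigned int   uint32;

#define NPROC  8                 /* assumed number of table entries  */
#define PNMLEN 16                /* assumed length of a process name */

struct procent {                 /* one entry per process */
    uint16 prstate;              /* current status (PR_CURR, PR_READY, ...) */
    pri16  prprio;               /* scheduling priority                     */
    char  *prstkptr;             /* saved stack pointer when not executing  */
    char  *prstkbase;            /* address of the base of the stack        */
    uint32 prstklen;             /* limit on stack size, in bytes           */
    char   prname[PNMLEN];       /* human-readable name for the process     */
};

extern struct procent proctab[NPROC];   /* the process table itself */
```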
Process State
- Used by OS to manage processes
- Set by OS whenever process changes status (e.g., waits for I/O)
- Small integer value stored in the process table
- Tested by OS to determine
  - Whether a requested operation is valid
  - The meaning of an operation

Process States
- One "state" per activity
- Value updated in process table when activity changes
- Example values
  - *Current* (process is currently executing)
  - *Ready* (process is ready to execute)
  - *Waiting* (process is waiting on semaphore)
  - *Receiving* (process is waiting to receive a message)
  - *Sleeping* (process is delayed for specified time)
  - *Suspended* (process is not permitted to execute)

Example Declaration For Process States In Xinu

```c
/* Process state constants */

#define PR_FREE    0  /* process table entry is unused     */
#define PR_CURR    1  /* process is currently running      */
#define PR_READY   2  /* process is on ready queue         */
#define PR_RECV    3  /* process waiting for message       */
#define PR_SLEEP   4  /* process is sleeping               */
#define PR_SUSP    5  /* process is suspended              */
#define PR_WAIT    6  /* process is on semaphore queue     */
#define PR_RECTIM  7  /* process is receiving with timeout */
```

• States are defined when a system is constructed
• We will understand the purpose of each state as we consider the system design

Scheduling And Context Switching

Scheduling
- Fundamental part of process management
- Performed by OS
- Three steps
  - Examine computations eligible for execution
  - Select one
  - Switch CPU to selected process
- Three-level scheduling possible
  - Select user
  - Select Process owned by user
  - Select thread within Process

Implementation Of Scheduling
- Need a *scheduling policy* that specifies which process to select
- Build a scheduling function that
  - Selects a process according to the policy
  - Updates the process table for the current and selected processes
  - Calls *context switch* to switch from the current to the selected process

Scheduling Policy
• Fundamental part of OS
• Determines when process is selected for execution
• May depend on
  – User
  – How many processes a user owns
  – Whether each process contains multiple threads
  – Time a given process waits
  – Priority of process (or of threads)
• Note: hierarchical or flat scheduling can be used

Example Scheduling Policy In Xinu
- Each process assigned a *priority*
  - Non-negative integer value
  - Initialized when process created
  - Can be changed at any time
- Scheduler chooses a process with highest priority
- Policy implemented by a system-wide invariant

The Xinu Scheduling Invariant

At any time, the CPU must run the highest priority eligible process. Among processes with equal priority, scheduling is round robin.
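Xinu enforces this invariant cheaply by keeping eligible processes on a priority-ordered ready list, manipulated by its queue functions (insert, firstkey, dequeue), as the slides below describe. As a rough stand-alone illustration of the idea, and not the actual Xinu queue code, an ordered insert might look like this; placing a newcomer after entries of equal key yields FIFO order among equal priorities, which is what makes round-robin scheduling work.

```c
/* Illustration only: a priority-ordered singly linked ready list.
   Xinu's real implementation uses a shared array-based queue table. */
struct qnode {
    int pid;                  /* process identifier  */
    int key;                  /* scheduling priority */
    struct qnode *next;
};

static struct qnode *readyhead = 0;

/* Insert so the list stays sorted by descending key; among equal
   keys the newcomer goes last, giving FIFO (round-robin) order. */
void ready_insert(struct qnode *n)
{
    struct qnode **pp = &readyhead;
    while (*pp != 0 && (*pp)->key >= n->key)
        pp = &(*pp)->next;
    n->next = *pp;
    *pp = n;
}

/* Priority of the highest priority ready process (the list head). */
int ready_firstkey(void)
{
    return readyhead->key;    /* never empty: the null process is always ready */
}
```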
- Invariant must be enforced whenever
  - The set of eligible processes changes
  - The priority of any eligible process changes
- Such changes only happen during a system call or an interrupt

Implementation Of Scheduling
- Process is eligible if state is ready or current
- To avoid searching process table during scheduling
  - Keep ready processes on linked list called ready list
  - Order ready list by process priority
- Selection of highest-priority process performed in constant time

Forcing Round-Robin Scheduling
- Operating system uses timer
- Whenever timer interrupts, if another equal-priority process is eligible for the CPU, switch to the other process
- We will consider details later

High-Speed Scheduling
- Compare priority of current process to priority of first process on ready list
- If current process priority higher, do nothing
- Otherwise, call context switch to make process on ready list current

Xinu Scheduler Details
- Before calling the scheduler
  - Global variable `currpid` gives ID of process that is executing
  - `proctab[currpid].prstate` must be set to desired next state for the process
- If current process remains eligible and has highest priority, scheduler does nothing (i.e., merely returns)
- Otherwise, swap current process and highest priority ready process

Scheduling And Equal Priority Processes
- Calling scheduler harmless if current process
  - Remains eligible, and
  - Has uniquely highest priority
- Scheduler causes context switch if current process
  - No longer eligible
  - Has priority less than or equal to highest priority ready process
- Later, we will see why switching when a waiting process has equal priority is important

Example Scheduler Code (resched part 1)

```c
/* excerpt from resched.c - resched */

extern void ctxsw(void *, void *);

/*------------------------------------------------------------------
 * resched - Reschedule processor to highest priority eligible process
 *------------------------------------------------------------------
 */
int32 resched(void)           /* assumes interrupts are disabled */
{
    struct procent *ptold;    /* ptr to table entry for old process */
    struct procent *ptnew;    /* ptr to table entry for new process */

    ptold = &proctab[currpid];        /* current process' table entry */

    if (ptold->prstate == PR_CURR) {
        if (ptold->prprio > firstkey(readylist)) {
            return OK;
        }

        /* old process will no longer remain current */

        ptold->prstate = PR_READY;
        insert(currpid, readylist, ptold->prprio);
    }
```

Example Scheduler Code (resched part 2)

```c
    /* force context switch to highest priority ready process */

    currpid = dequeue(readylist);
    ptnew = &proctab[currpid];
    ptnew->prstate = PR_CURR;
    preempt = QUANTUM;        /* reset time slice for process */
    ctxsw(&ptold->prstkptr, &ptnew->prstkptr);

    /* old process returns here when resumed */

    return OK;
}
```

Process State Transitions
• Recall each process has a "state"
• State determines
  – Whether an operation is valid
  – Semantics of each operation
• Transition diagram documents valid operations

Illustration Of State Transition Between Current And Ready
- Single function (resched) moves a process in either direction between the two states

Context Switch
• Basic facility in OS
  – Low-level (manipulates hardware state)
  – Written in assembly language
• Called by scheduler
• Moves CPU from one process to another

Context Switch Operation
- Given a "new" process, \( N \), and "old" process, \( O \)
- Save copy of all information pertinent to \( O \) in process table and/or on stack
  - Machine registers
  - Program counter
  - Privilege level
  - Memory maps
- Load information for \( N \)

Example Context Switch Code (MIPS part 1)
```
/* ctxsw.s - ctxsw */

        .align 4
        .globl ctxsw

/*------------------------------------------------------------------
 * ctxsw - Switch from one process context to another
 *------------------------------------------------------------------
 */
        .ent ctxsw
ctxsw:

        /* build context record on the current process' stack */

        addiu   sp, sp, -CONTEXT
        sw      ra, CONTEXT-4(sp)
        sw      ra, CONTEXT-8(sp)

        /* save callee-save (non-volatile) registers */

        sw      s0, S0_CON(sp)
        sw      s1, S1_CON(sp)
        sw      s2, S2_CON(sp)
        sw      s3, S3_CON(sp)
        sw      s4, S4_CON(sp)
        sw      s5, S5_CON(sp)
        sw      s6, S6_CON(sp)
        sw      s7, S7_CON(sp)
```

Example Context Switch Code (MIPS part 2)

```
        sw      s8, S8_CON(sp)
        sw      s9, S9_CON(sp)

        /* save outgoing process' stack pointer */

        sw      sp, 0(a0)

        /* load incoming process' stack pointer */

        lw      sp, 0(a1)

        /* At this point, we have switched from the run-time stack */
        /* of the outgoing process to the incoming process         */

        /* restore callee-save (non-volatile) registers from new stack */

        lw      s0, S0_CON(sp)
        lw      s1, S1_CON(sp)
        lw      s2, S2_CON(sp)
        lw      s3, S3_CON(sp)
        lw      s4, S4_CON(sp)
        lw      s5, S5_CON(sp)
        lw      s6, S6_CON(sp)
        lw      s7, S7_CON(sp)
        lw      s8, S8_CON(sp)
        lw      s9, S9_CON(sp)
```

Example Context Switch Code (MIPS part 3)

```
        /* restore argument registers for the new process */

        lw      a0, CONTEXT(sp)
        lw      a1, CONTEXT+4(sp)
        lw      a2, CONTEXT+8(sp)
        lw      a3, CONTEXT+12(sp)

        /* remove context record from the new process' stack */

        lw      v0, CONTEXT-4(sp)
        lw      ra, CONTEXT-8(sp)
        addiu   sp, sp, CONTEXT

        /* If this is a newly created process, ensure */
        /* it starts with interrupts enabled          */

        beq     v0, ra, ctxdone
        mfc0    v1, CP0_STATUS
        ori     v1, v1, STATUS_IE
        mtc0    v1, CP0_STATUS
ctxdone:
        jr      v0
        .end ctxsw
```

Example Context Switch Code (X86 part 1)

```
/* ctxsw.s - ctxsw */

        .text
        .globl  ctxsw
newmask: .word  0

/* excerpt from ctxsw on an X86 architecture.  */
/* args: &oldsp, &oldmask, &newsp, &newmask    */

ctxsw:
        pushl   %ebp
        movl    %esp,%ebp
        pushl   12(%ebp)
        call    disable
        movl    20(%ebp),%eax
        movw    (%eax),%dx
        movw    %dx,newmask
        pushfl                  /* save flags */
        pushal                  /* save general regs */
        /* save segment registers here, if multiple allowed */
        movl    8(%ebp),%eax
        movl    %esp,(%eax)     /* save old SP */
```

Example Context Switch Code (X86 part 2)

```
        /* restore new segment registers here, if multiple allowed */
        popal                   /* restore general registers */
        popfl                   /* restore flags */
        pushl   $newmask
        call    restore
        leave
        ret
```

Puzzle #1
- Invariant says that at any time, one process must be executing
- Context switch code moves from one process to another
- Question: which process executes the context switch code?

Solution To Puzzle #1
- "Old" process
  - Executes first half of context switch
  - Is suspended
- "New" process
  - Continues executing where previously suspended
  - Usually runs second half of context switch

Puzzle #2
- Invariant says that at any time, one process must be executing
- All user processes may be idle (e.g., applications all wait for input)
- Which process executes?
Solution To Puzzle #2
- OS needs an extra process
  - Called *null process*
  - Never terminates
  - Cannot make a system call that takes it out of ready or current state
- Typically an infinite loop

Null Process
• Does not compute anything useful
• Is present merely to ensure that at least one process remains ready at all times
• Simplifies scheduling (no special cases)

Null Process Code
- Typical null process

      while(1)
          ;

- May not be optimal

Puzzle #3
- Null process must always remain ready to execute
- Null process should avoid using bus because doing so "steals" cycles from I/O activity
- Instructions reside in memory, so merely fetching instructions uses the bus
- How can a null process avoid using the bus?

Two Solutions To Puzzle #3
• Solution #1
  – Halt the CPU until interrupt occurs
  – Special hardware instruction required
• Solution #2
  – Install an instruction cache
  – Processor fetches instructions from cache when possible
  – Avoids using bus when executing tight loop

More Process Management

Process Manipulation
- A process does not exist forever and does not perform computation continuously
- Need to invent ways to control processes
- Example operations
  - Suspension
  - Resumption
  - Creation
  - Termination
- State variable in process table records activity

Process Suspension
- Temporarily "stop" a process
  - Prohibit from using the CPU
  - To allow later resumption
- Process table entry retained
- Complete state of computation saved

Example Suspension Code (suspend)

```c
/* excerpt from suspend.c - suspend */

/*------------------------------------------------------------------
 * suspend - Suspend a process, placing it in hibernation
 *------------------------------------------------------------------
 */
syscall suspend(
          pid32  pid                /* ID of process to suspend */
        )
{
    intmask mask;                   /* saved interrupt mask */
    struct procent *prptr;          /* ptr to process' table entry */
    pri16 prio;                     /* priority to return */

    mask = disable();
    if (isbadpid(pid) || (pid == NULLPROC)) {
        restore(mask);
        return SYSERR;
    }

    /* Only suspend a process that is current or ready */

    prptr = &proctab[pid];
    if ((prptr->prstate != PR_CURR) && (prptr->prstate != PR_READY)) {
        restore(mask);
        return SYSERR;
    }
    if (prptr->prstate == PR_READY) {
        getitem(pid);               /* remove a ready process   */
                                    /*   from the ready list    */
        prptr->prstate = PR_SUSP;
    } else {
        prptr->prstate = PR_SUSP;   /* mark the current process */
        resched();                  /*   suspended and reschedule */
    }
    prio = prptr->prprio;
    restore(mask);
    return prio;
}
```

Process Resumption
- Resume execution of previously suspended process
- Method
  - Make process eligible for CPU
  - Re-establish scheduling invariant
- Note: resumption does not guarantee instantaneous execution

Example Resumption Code

```c
/* resume.c - resume */

#include <xinu.h>

/*------------------------------------------------------------------
 * resume - Unsuspend a process, making it ready
 *------------------------------------------------------------------
 */
pri16 resume(
          pid32  pid                /* ID of process to unsuspend */
        )
{
    intmask mask;                   /* saved interrupt mask */
    struct procent *prptr;          /* ptr to process' table entry */
    pri16 prio;                     /* priority to return */

    mask = disable();
    prptr = &proctab[pid];
    if (isbadpid(pid) || (prptr->prstate != PR_SUSP)) {
        restore(mask);
        return (pri16)SYSERR;
    }
    prio = prptr->prprio;           /* record priority to return */
    ready(pid, RESCHED_YES);
    restore(mask);
    return prio;
}
```
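As a small hypothetical usage example (not from the slides), a process might suspend a peer, do some work while the peer is in hibernation, and then make it eligible again; the peer actually runs only when the scheduler next selects it.

```c
#include <xinu.h>   /* assumes the Xinu environment of the examples above */

/* Hypothetical helper: temporarily take 'pid' out of competition
   for the CPU, then make it eligible again. */
void pause_and_resume(pid32 pid)
{
    if (suspend(pid) == SYSERR) {   /* fails unless pid is current or ready */
        return;
    }

    /* pid is now PR_SUSP: its table entry and saved state persist */

    resume(pid);    /* back to PR_READY; runs when the scheduler picks it */
}
```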
Example Code Make A Process Ready (part 1)

```c
/* ready.c - ready */

#include <xinu.h>

qid16 readylist;                    /* index of ready list */

/*------------------------------------------------------------------
 * ready - Make a process eligible for CPU service
 *------------------------------------------------------------------
 */
status ready(
          pid32  pid,               /* ID of process to make ready */
          bool8  resch              /* reschedule afterward? */
        )
{
    register struct procent *prptr;

    if (isbadpid(pid)) {
        return(SYSERR);
    }
```

Example Code Make A Process Ready (part 2)

```c
    /* Set process state to indicate ready and add to ready list */

    prptr = &proctab[pid];
    prptr->prstate = PR_READY;
    insert(pid, readylist, prptr->prprio);
    if (resch == RESCHED_YES) {
        resched();
    }
    return(OK);
}
```

• Note: ready assumes that interrupts are disabled

Process Termination
- Final and permanent
- Record of the process is expunged
- Process table entry becomes available for reuse
- Known as *process exit* if initiated by the thread itself
- We will see more about termination later

Process Creation
- Processes are dynamic; process *creation* refers to starting a new process
- Performed by *create* procedure in Xinu
- Method
  - Find free entry in process table
  - Fill in entry
  - Place new process in *suspended* state
- We will see more about creation later

Illustration Of State Transitions For Additional Process Management Functions
- States: READY, CURRENT, SUSPENDED
- Transitions: resched, suspend, resume, create

System Calls
- Define interface from applications to OS
- Define OS characteristics
- Conceptually like procedure calls
  - Transfer to kernel address space
- Note: for 503 version of Xinu
  - System calls are procedure calls
  - syscall clarifies intent

At one time, process scheduling was the primary research topic in operating systems. Why did the topic fade? Was the problem completely solved?

Summary
- Process management is a fundamental part of OS
- Information about processes kept in process table
- A state variable associated with each process records the process's activity
  - Currently executing
  - Ready, but not executing
  - Suspended
  - Waiting on a semaphore
  - Receiving a message

Summary (continued)
• Scheduler
  – Key part of the process manager
  – Chooses next process to execute
  – Implements a scheduling policy
  – Changes information in the process table
  – Calls context switch to change from one process to another
  – Usually optimized for high speed

Summary (continued)
- Context switch
  - Low-level piece of a process manager
  - Moves processor from one process to another
- At any time a process must be executing
- Processes can be suspended, resumed, created, and terminated
- Special process known as *null process* remains ready to run at all times
{"Source-Url": "https://www.cs.purdue.edu/homes/dxu/cs503/notes/part4.pdf", "len_cl100k_base": 4782, "olmocr-version": "0.1.49", "pdf-total-pages": 65, "total-fallback-pages": 0, "total-input-tokens": 94033, "total-output-tokens": 7196, "length": "2e12", "weborganizer": {"__label__adult": 0.00035643577575683594, "__label__art_design": 0.0003597736358642578, "__label__crime_law": 0.00038743019104003906, "__label__education_jobs": 0.0017271041870117188, "__label__entertainment": 9.179115295410156e-05, "__label__fashion_beauty": 0.00014078617095947266, "__label__finance_business": 0.0002658367156982422, "__label__food_dining": 0.0003786087036132813, "__label__games": 0.0012054443359375, "__label__hardware": 0.007465362548828125, "__label__health": 0.00049591064453125, "__label__history": 0.000347137451171875, "__label__home_hobbies": 0.0001977682113647461, "__label__industrial": 0.0009775161743164062, "__label__literature": 0.00035262107849121094, "__label__politics": 0.0002884864807128906, "__label__religion": 0.0005893707275390625, "__label__science_tech": 0.1395263671875, "__label__social_life": 9.638071060180664e-05, "__label__software": 0.01546478271484375, "__label__software_dev": 0.828125, "__label__sports_fitness": 0.00030112266540527344, "__label__transportation": 0.0007567405700683594, "__label__travel": 0.00016987323760986328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19690, 0.01045]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19690, 0.37483]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19690, 0.78707]], "google_gemma-3-12b-it_contains_pii": [[0, 120, false], [120, 514, null], [514, 554, null], [554, 689, null], [689, 899, null], [899, 1133, null], [1133, 1356, null], [1356, 1666, null], [1666, 1852, null], [1852, 2052, null], [2052, 2334, null], [2334, 2589, null], [2589, 2861, null], [2861, 3074, null], [3074, 3847, null], [3847, 4114, null], [4114, 4533, null], [4533, 5186, null], [5186, 5219, null], [5219, 5518, null], [5518, 5825, null], [5825, 6155, null], [6155, 6428, null], [6428, 6786, null], [6786, 7088, null], [7088, 7299, null], [7299, 7527, null], [7527, 7911, null], [7911, 8296, null], [8296, 9075, null], [9075, 9427, null], [9427, 9620, null], [9620, 9767, null], [9767, 9944, null], [9944, 10225, null], [10225, 10890, null], [10890, 11497, null], [11497, 12036, null], [12036, 12562, null], [12562, 12786, null], [12786, 12978, null], [12978, 13191, null], [13191, 13366, null], [13366, 13569, null], [13569, 13746, null], [13746, 13830, null], [13830, 14104, null], [14104, 14383, null], [14383, 14407, null], [14407, 14684, null], [14684, 14866, null], [14866, 15467, null], [15467, 15961, null], [15961, 16175, null], [16175, 16919, null], [16919, 17407, null], [17407, 17715, null], [17715, 17947, null], [17947, 18232, null], [18232, 18393, null], [18393, 18647, null], [18647, 18791, null], [18791, 19098, null], [19098, 19383, null], [19383, 19690, null]], "google_gemma-3-12b-it_is_public_document": [[0, 120, true], [120, 514, null], [514, 554, null], [554, 689, null], [689, 899, null], [899, 1133, null], [1133, 1356, null], [1356, 1666, null], [1666, 1852, null], [1852, 2052, null], [2052, 2334, null], [2334, 2589, null], [2589, 2861, null], [2861, 3074, null], [3074, 3847, null], [3847, 4114, null], [4114, 4533, null], [4533, 5186, null], [5186, 5219, null], [5219, 5518, null], [5518, 5825, null], [5825, 6155, null], [6155, 
6428, null], [6428, 6786, null], [6786, 7088, null], [7088, 7299, null], [7299, 7527, null], [7527, 7911, null], [7911, 8296, null], [8296, 9075, null], [9075, 9427, null], [9427, 9620, null], [9620, 9767, null], [9767, 9944, null], [9944, 10225, null], [10225, 10890, null], [10890, 11497, null], [11497, 12036, null], [12036, 12562, null], [12562, 12786, null], [12786, 12978, null], [12978, 13191, null], [13191, 13366, null], [13366, 13569, null], [13569, 13746, null], [13746, 13830, null], [13830, 14104, null], [14104, 14383, null], [14383, 14407, null], [14407, 14684, null], [14684, 14866, null], [14866, 15467, null], [15467, 15961, null], [15961, 16175, null], [16175, 16919, null], [16919, 17407, null], [17407, 17715, null], [17715, 17947, null], [17947, 18232, null], [18232, 18393, null], [18393, 18647, null], [18647, 18791, null], [18791, 19098, null], [19098, 19383, null], [19383, 19690, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19690, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 19690, null]], "pdf_page_numbers": [[0, 120, 1], [120, 514, 2], [514, 554, 3], [554, 689, 4], [689, 899, 5], [899, 1133, 6], [1133, 1356, 7], [1356, 1666, 8], [1666, 1852, 9], [1852, 2052, 10], [2052, 2334, 11], [2334, 2589, 12], [2589, 2861, 13], [2861, 3074, 14], [3074, 3847, 15], [3847, 4114, 16], [4114, 4533, 17], [4533, 5186, 18], [5186, 5219, 19], [5219, 5518, 20], [5518, 5825, 21], [5825, 6155, 22], [6155, 6428, 23], [6428, 6786, 24], [6786, 7088, 25], [7088, 7299, 26], [7299, 7527, 27], [7527, 7911, 28], [7911, 8296, 29], [8296, 9075, 30], [9075, 9427, 31], [9427, 9620, 32], [9620, 9767, 33], [9767, 9944, 34], [9944, 10225, 35], [10225, 10890, 36], [10890, 11497, 37], [11497, 12036, 38], [12036, 12562, 39], [12562, 12786, 40], [12786, 12978, 41], [12978, 13191, 42], [13191, 13366, 43], [13366, 13569, 44], [13569, 13746, 45], [13746, 13830, 46], [13830, 14104, 47], [14104, 14383, 48], [14383, 14407, 49], [14407, 14684, 50], [14684, 14866, 51], [14866, 15467, 52], [15467, 15961, 53], [15961, 16175, 54], [16175, 16919, 55], [16919, 17407, 56], [17407, 17715, 57], [17715, 17947, 58], [17947, 18232, 59], [18232, 18393, 60], [18393, 18647, 61], [18647, 18791, 62], [18791, 19098, 63], [19098, 19383, 64], [19383, 19690, 65]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19690, 0.01444]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
ae4795e3d409b07fe1bd27cc229bcd0f6206d6e8
Automating the process of choosing among highly correlated covariates for multivariable logistic regression

Michael C. Doherty, i3DrugSafety, Waltham, MA

ABSTRACT

In observational studies, there can be significant differences between the characteristics of a treatment and a control group. To reduce the potential for confounding, controls are matched to members of the treatment group using propensity scores estimated by multivariable logistic regression analyses. Propensity score modeling can involve the inclusion of hundreds of covariates, including patient diagnoses, medical procedures, and medication exposures; highly correlated variables can complicate the multivariable logistic regression. This paper describes a statistical method to remove the variables automatically, with little input from the programmer. By utilizing the R statistic output from PROC CORR, followed by a macro to select which variables to keep and which ones to remove from the model, the programmer can save time in selecting the covariates to be used in the model statement in PROC REG.

INTRODUCTION

In observational studies, there can be substantial differences between the characteristics of a treatment and a control group. Propensity score matching is a multivariable technique that can achieve a high degree of balance between the comparison groups, producing groups that have very similar patterns across a large number of key variables, and thus reducing the potential for confounding. Propensity score modeling can involve the inclusion of highly correlated variables, which can complicate the multivariable logistic regression model. Instead of removing the highly correlated variables by hand, a statistical method has been developed to remove those variables automatically, with little input from the programmer.

Describe Example

In this example, the covariates under consideration are 0/1 flags indicating whether a subject had a particular diagnosis, procedure, or pharmacy dispensing during the baseline period. When two variables are highly correlated, it is often better to remove the covariate which occurs less frequently. The program described below selects which variable to retain in the regression model by choosing the factor with a larger absolute value of the R statistic and a higher prevalence in the study population.

Describe Method

The programmer's task is twofold: first, identify the variables that are highly correlated; second, remove the offending covariates using an iterative procedure. To conceptualize the process, the table below shows the highly correlated covariates in descending order of their R statistic.

<table>
<thead>
<tr> <th>Covariate 1</th> <th>Covariate 2</th> <th>R Statistic</th> </tr>
</thead>
<tbody>
<tr> <td>VarA</td> <td>VarB</td> <td>0.967</td> </tr>
<tr> <td>VarC</td> <td>VarD</td> <td>0.945</td> </tr>
<tr> <td>VarB</td> <td>VarE</td> <td>0.931</td> </tr>
<tr> <td>VarA</td> <td>VarF</td> <td>0.903</td> </tr>
<tr> <td>...</td> <td>...</td> <td>...</td> </tr>
<tr> <td>VarZ</td> <td>VarB</td> <td>0.715</td> </tr>
</tbody>
</table>

In this example, let's assume we wish to keep VarA as a covariate in the model. Since VarA and VarB are so highly correlated, we would like to remove VarB from consideration. We would also like to remove VarF, since it is also highly correlated with VarA. Note how VarB is highly correlated with VarE and VarZ. Since VarB is being removed from consideration, we do not necessarily wish to remove either VarE or VarZ, unless they are also highly correlated with VarA.
After removing all variables that are highly correlated with VarA, we then move on to VarC, find any variables that are highly correlated with it, and remove them. The selection macro will loop through the list until it has worked its way through the entire list of variables and has removed the offending highly correlated variables.

The program creates a list of variable pairs in order from the highest to lowest R value. However, we also need to choose which variable should be in the left hand column (Covariate 1) and which variable should be in the right hand column (Covariate 2). Since these covariates are indicators, the selection macro chooses those variables that occur more often in our sample to be in the left hand column, and thus they are more likely to be retained. For instance, in the example above, VarA is kept while VarB is removed because VarA occurs more often.

Now that we know what we want to do, we can begin setting up our program to do it. Our first step is to create some global macro variables. In this example, we set up a macro variable for the dataset containing the covariates (dt), the lower limit of the R statistic we are interested in (HighCorr), and the variables we wish to exclude from consideration (exvar), including any continuous variables (contv).

```sas
%let dt=outcomes;
%let HighCorr=0.7;
%let exvar=indv_id cohort i;
%let contv=scnddiabdxdt scnddysldxdt diabrxdt dyslrxdt diaboutdt dysloutdt;
```

Since we are assessing hundreds of covariates, we use PROC CONTENTS to create our variable list. As a precaution, we select only those variables where type = 1 (i.e., numeric). Create a dataset (varlabel) with the variables and their labels for later use, using PROC SORT.

```sas
proc contents data=in.&dt(drop=&exvar &contv) noprint
              out=dtvname(keep=name label nobs type where=(type=1));
run;

proc sort data=dtvname(rename=(name=vname))
          out=varlabel(keep=vname label);
  by vname;
run;
```

Next create macro variables for the total number of variables (totvar) and the total number of observations (totobs).

```sas
data _null_;
  set dtvname end=last;
  call symput('var'||left(put(_n_,4.)), trim(left(name)));
  if last then do;
    call symput('totvar', left(put(_n_,4.)));
    call symput('totobs', left(put(nobs,12.)));
  end;
run;
```

Now create a macro that will generate the list of variables (varlst), and a macro (MLst) that generates assignments of string variables v1 ... vN (where N is the total number of variables under consideration) to the corresponding variable names.

```sas
%macro varlst;
  %do i=1 %to &totvar;
    &&var&i
  %end;
%mend varlst;

%macro MLst;
  %do i=1 %to &totvar;
    v&i="&&var&i"
  %end;
%mend;
```

Use PROC CORR to obtain the R statistic for each pairing. The output should keep only the R statistics (i.e., drop the N, MEAN, and STD observations from the output dataset).

```sas
proc corr noprint data=finaldt
          out=temp(where=(_type_ not in('MEAN','STD','N')));
  var %varlst;
run;
```

Now create a table of pairings and sort by the R statistic in descending order. Remove any pairs whose R value is below our lower limit (highcorr), as well as any pairing with an R value of one (e.g., the R statistic of VarA with itself equals one). Only three variables are retained: the variable _name_ is renamed CorrVar1, CorrVar2 is set to the name of the variable that is highly correlated with CorrVar1, and CorrVal is the estimate of the R statistic.
```sas
data CorrTemp(keep=CorrVar1 CorrVar2 CorrVal);
  set temp(drop=_type_ rename=(_name_=CorrVar1));
  %mlst;
  array ChkCorr(&totvar) %varlst;
  array VN(&totvar) $ v1-v&totvar;
  do i=1 to dim(chkcorr);
    if (abs(chkcorr(i)) >= &highcorr) and (chkcorr(i) ne 1) then do;
      CorrVar2=vn(i);
      CorrVal=chkcorr(i);
      output;
    end;
  end;
run;

proc sort data=corrtemp;
  by descending corrval;
run;
```

At this point, we have duplicates in our dataset. The dataset CorrTemp looks like the following:

<table>
<thead>
<tr> <th>CorrVar1</th> <th>CorrVar2</th> <th>CorrVal</th> </tr>
</thead>
<tbody>
<tr> <td>VarA</td> <td>VarB</td> <td>R1</td> </tr>
<tr> <td>VarB</td> <td>VarA</td> <td>R1</td> </tr>
<tr> <td>VarC</td> <td>VarD</td> <td>R2</td> </tr>
<tr> <td>VarD</td> <td>VarC</td> <td>R2</td> </tr>
<tr> <td>...</td> <td>...</td> <td>...</td> </tr>
</tbody>
</table>

Note how we have the correlation between VarA and VarB in observation one and the correlation between VarB and VarA in observation two. The same holds for VarC and VarD in observations three and four. Remove those duplicates using the LAG function. Note that the retained pairs will have increasing odd values of the pairs variable (1, 3, 5, etc.).

```sas
data ChkDup(keep=CorrVar1 CorrVar2 CorrVal pairs);
  set corrtemp;
  Name1=lag(corrvar1);
  Name2=lag(corrvar2);
  pairs=_n_;
  if name1=corrvar2 and name2=corrvar1 then delete;
run;
```

Create a frequency count for each indicator flag for use in the selection criteria. Transpose the resulting dataset for ease of merging.

```sas
proc summary data=finaldt;
  var %varlst;
  output out=sumdt(drop=_freq_ _type_) sum=;
run;

proc transpose data=sumdt out=transdt;
run;
```

Split up the pairs and merge the counts onto the two resulting datasets (dt1 and dt2). We will use the 'pairs' variable to match our correlated pairings later on. Set the two datasets (dt1 and dt2) together to create our working dataset (one).

```sas
proc sql;
  create table dt1 as
  select a.corrvar1 as vname, a.corrval, b.col1 as Count, a.pairs
  from chkdup a, transdt b
  where upcase(a.corrvar1)=upcase(b._name_);
quit;

proc sql;
  create table dt2 as
  select a.corrvar2 as vname, a.corrval, b.col1 as Count, a.pairs
  from chkdup a, transdt b
  where upcase(a.corrvar2)=upcase(b._name_);
quit;

data one;
  set dt1 dt2;
run;
```

The macro 'chkrcd' selects which variables to keep and which ones to remove from consideration in the modeling process. Any highly correlated pairs are ordered by their frequency of occurrence; the covariate which occurs more often is selected to be retained, and the other is set to be deleted.
```sas
%macro chkrcd;
  %let i=1;
  %let N=1;

  proc datasets;
    delete basedt deletedt;
  run;

  %do %while (&N > 0);

    * Sort in descending order by R statistic, pairs, and frequency;
    proc sort data=one;
      by descending corrval pairs descending count;
    run;

    * Set PreNM to previous name and PrePair to previous pair using lag;
    data one;
      set one;
      PreNM=lag(vname);
      PrePair=lag(pairs);
    run;

    * During actual runs, you probably want to comment out print statements;
    proc print data=one;
      title "Dataset at Loop &i";
    run;

    * CREATE TWO DATASETS: KEEP (kdt&i) AND DELETE (ddt&i);
    data kdt&i(keep=rcd rename=(rcd=vname))
         ddt&i(keep=rcd rename=(rcd=vname));
      set one;
      length str krcd drcd $ 200 kbase dbase $ 30;
      retain krcd drcd kbase dbase;

      * 1ST RECORD IS ALWAYS KEPT.
        kvar will contain the list of variables to keep;
      if _n_=1 then do;
        kbase=vname;
        call symput('kvar', left(trim(upcase(vname))));
        krcd=symget('kvar');
        rcd=vname;
        output kdt&i;
      end;
      else if _n_=2 then do;
        * 2ND RECORD IS ALWAYS DELETED.
          dvar will contain the list of variables to delete;
        dbase=vname;
        call symput('dvar', left(trim(upcase(vname))));
        drcd=symget('dvar');
        rcd=vname;
        output ddt&i;
      end;
      else do;
        * For _n_ ge 3, we work only on lines where pairs=prepair. We search
          for the variable we are keeping (krcd). If vname matches krcd then
          set rcd to prenm, or if prenm matches krcd then set rcd to vname;
        if (indexw(krcd,upcase(vname)) > 0 or indexw(krcd,upcase(prenm)) > 0)
           and pairs=prepair then do;
          * Check that the variable we are about to delete should not be kept;
          if indexw(krcd,upcase(prenm)) > 0 and vname ne kbase then rcd=vname;
          if indexw(krcd,upcase(vname)) > 0 and prenm ne kbase then rcd=prenm;

          * ADD NEW DELETING VARIABLE INTO MACRO VARIABLE LIST;
          str=symget('dvar')||' '||left(trim(upcase(rcd)));
          call symput('dvar', left(trim(str)));

          * REMOVE THIS VARIABLE FROM KEEPING VARIABLE LIST;
          str=symget('kvar');
          str=tranwrd(str,upcase(rcd),'');
          call symput('kvar', left(trim(str)));
          drcd=symget('dvar');

          * Output to deleted dataset (ddt&i) if rcd ne kbase;
          if rcd ne kbase then do;
            output ddt&i;
          end;
        end;
        * Look for drcd (the deleted variable from observation 2);
        else if (indexw(drcd,upcase(prenm)) > 0 or indexw(drcd,upcase(vname)) > 0)
                and pairs=prepair then do;
          if indexw(drcd,upcase(prenm)) > 0 and indexw(krcd,upcase(vname))=0 then rcd=vname;
          else if indexw(drcd,upcase(vname)) > 0 and indexw(krcd,upcase(prenm))=0 then rcd=prenm;
          str=symget('dvar');

          * ADD NEW KEEPING VARIABLE TO LIST IF IT IS NOT IN THE DELETING LIST;
          if indexw(str,upcase(rcd))=0 then do;
            str=symget('kvar')||' '||left(trim(upcase(rcd)));
            call symput('kvar', left(trim(str)));
            krcd=symget('kvar');
            if rcd ne dbase then do;
              output kdt&i;
            end;
          end;
        end;
      end;
    run;

    * Sort dataset one by vname and make sure there are no duplicates or
      blanks in the keeper dataset (kdt&i);
    proc sort data=one;
      by vname;
    run;

    proc sort nodupkey data=kdt&i;
      where vname ne '';
      by vname;
    run;

    * You may want to comment out print statements after debugging;
    proc print data=kdt&i;
      title "Keep Records from Loop &i";
    run;

    * Make sure there are no duplicates or blanks in deleted dataset (ddt&i);
    proc sort nodupkey data=ddt&i;
      where vname ne '';
      by vname;
    run;

    * You may want to comment out print statements after debugging;
    proc print data=ddt&i;
      title "Delete Records from Loop &i";
    run;

    * UPDATE THE CHECKING FILE: REMOVE THE RECORDS IN KEEP/DELETE FILES.
      CREATE A SELECTED VARIABLE FILE;
    data cdt one;
      merge kdt&i(in=in1)
ddt&i(in=in2) one(in=in3); by vname; * Put into cdt if variable is a keeper, but not in deleted dataset.; if in1=1 and in2=0 then output cdt; * If no determination has been made (i.e. not in keeper or deleted dataset, output to dataset one; else if in1=0 and in2=0 and in3=1 then output one; run; * Again, make sure there are no duplicates in dataset to keep; proc sort data=cdt(keep=vname) nodupkey; by vname; run; * Append list of variables to keep onto dataset called basedt; proc append base=basedt data=cdt; run; * Append list of variables to delete onto dataset called deletedt; proc append base=deletedt data = ddt; run; * Make sure dataset one is not empty. If it is, then stop loop; data _null_; call symput('N', left(put(num,8.))); if not 0 then set one nobs=num; stop; run; /* increment counter; %let i=%eval(&i+1); %end; %mend; %chkrcd; The ‘basedt’ dataset contains our list of variables to keep. Make sure to remove any duplicates and reattach the labels to complete the process. proc sort nodupkey data=basedt; by vname; run; data sel_var_vs; merge basedt(in=in1) varlabel; by vname; if in1; proc print data=sel_var_vs; title "***** High (&highcorr) Correlation Variables Listing *****"; run; The dataset ‘deletedt’ contains the list of variables we are removing. Remove any duplicates and reattach labels to complete the process. proc sort nodupkey data=deletedt; by vname; run; data del_var_vs; merge deletedt(in=in1) varlabel; by vname; if in1; proc print data=del_var_vs; title "***** High (&highcorr) Correlation Variables Deleted *****"; run; CONCLUSION Propensity score modeling can use hundreds of covariates; however, not all of them are necessary. By eliminating the highly correlated variables from the logistic regression model, the process can run more efficiently. Selecting which variable to include between the highly correlated pairs is done by listing the correlated covariates by descending order of their R statistic and then choosing the variable which occurs more frequently. Often the programmer will want to examine the pairs that are retained and excluded, and thus, all covariates are listed in the output. The macro presented can be used to automate the process of removing the highly correlated covariates from the dataset before running the model and may increase efficiency. CONTACT INFORMATION Your comments and questions are valued and encouraged. Contact the author at: Michael Doherty i3 Drug Safety 950 Winter St Waltham, MA 02451 Work Phone: (781)472-8454 E-mail: Michael.Doherty@i3drugsafety.com SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. © indicates USA registration. Other brand and product names are trademarks of their respective companies.
Building an Online Entry Form with WebAF® (And a Little Java®) Frederick Pratter, Eastern Oregon University Introduction The AppDev Studio® software suite from SAS® is a comprehensive set of tools for developing web applications. This paper is intended as a quick introduction to webAF, the included IDE (Integrated Development Environment) for building dynamic web content based on SAS data. Once the initial system configuration has been established, it is possible to provide dynamic access to SAS datasets in real-time for thin clients that do not have any SAS software available. This is accomplished using the JavaServer Pages® technology from Sun Microsystems®, along with some server-side SAS resources. Obviously, there is far more material than could be covered in a single conference paper, but the general steps in the procedure can be summarized as follows: 1. Start a SAS job spawner on a SAS/Connect® server 2. Start a Tomcat® session on a web server 3. Create a web application base directory on the server 4. Register a connection to the spawner 5. Create a new webAF project 6. Add JSP content to the project 7. Stir well and serve. The following example is based on an online examination application that was developed to demonstrate how to build an entry form using webAF. Hopefully, the examples provided will give a sense of how easy it can be to create complex web forms using SAS (and a little Java). Preliminaries: How to Hit the Ground Running Before starting webAF, it is important to perform several preliminary housekeeping tasks, summarized as steps 1-3 in the list above. Explaining what these do and why they are necessary can be a little complicated. Step 1: Remote Data Services using SAS/Connect Using SAS/Connect requires that a server program be started on the remote system where the data resides. This program is called a Remote Host Spawner. The spawner program listens for TCP connections to the host, just like a web server but on a different port. Starting the spawner is handled differently on Windows® and UNIX®, although the principle is the same on both platforms. On Windows, the spawner program is stored in the SAS root directory. Just run spawner.exe from a DOS command window, or use the handy SAS menu selection Start>Programs>AppDev Studio>Services>SAS V8>Start SAS Connect Spawner, if AppDev Studio has been installed on the server. Under Windows 2000 and XP it is possible to install the spawner as a service, so that it will run as a background process whenever the server is started. For a complete list of the available startup options, see *Starting and Stopping SAS/Connect* in the SAS System Help contents under *Help on SAS Software Products>SAS/Connect Software*. On UNIX, the spawner program is installed by default under the SAS root directory in `utilities/bin/sastcpd` and can be started with the following command: ``` sastcpd -service spawner -shell & ``` The `sastcpd` program runs by default as a daemon, so it is not necessary to use `nohup`. Again, if you want it to restart when the server reboots, install the script under the `init.d` directory. In either case, the job spawner must be running in order to allow remote connections to the SAS data on the server. **Step 2: Tomcat Java Servlet Engine** In order to make use of JavaServer Pages, it is necessary first to understand how servlets work (see [java.sun.com/products/servlet](http://java.sun.com/products/servlet)). Java servlets are the server-side equivalent of Java client-side applets. 
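Before continuing, it may help to see the shape of a servlet. The following minimal sketch is illustrative only (it is not from the paper; the class name and output text are invented) and uses the javax.servlet API of that era:

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal servlet: the servlet engine loads this class into the JVM
// and invokes doGet() for each matching HTTP GET request.
public class HelloServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h1>Hello from a servlet</h1></body></html>");
    }
}
```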
Like JavaScript® and applets, servlets cannot be run as standalone programs; there is no main method as in Java console applications. Servlets require both a web server and a servlet engine. The function of this engine is to load the servlet `.class` file into the *Java Virtual Machine (JVM)* running on the server. The engine can then run the servlet. (The `.class` file is not reloaded into the JVM again after the first time; usually, it is necessary to restart the server when the `.class` file changes, but most servlet engines now include options to reload `.class` files automatically when they are updated.)

The most widely available servlet engine is *Tomcat* from the Apache Software Foundation's Jakarta Project (see [jakarta.apache.org/tomcat](http://jakarta.apache.org/tomcat)). This engine is included in AppDev Studio and must be started before any JavaServer Pages can be viewed. Note that Tomcat is used instead of the regular web server. In the default SAS configuration shown in the following examples, the engine is started on port 8082 instead of port 80, as would be the case for the Apache or IIS web servers.

In order to start the SAS-supplied servlet engine, AppDev Studio must be installed on the web server. If the web server is not a Windows-based platform, an appropriate servlet engine from the Apache organization can be used instead. The details of this are beyond the scope of this paper, but are discussed in the references listed at the end. For the sake of this discussion, it will be assumed that the default AppDev Studio engine is available. To start the servlet engine, select **Start>Programs>AppDev Studio>Services>Start Java Web Server**. On a properly configured system, this will result in a command window that displays the ongoing status of the server. It is also possible to start the servlet engine from within the webAF development environment, as shall be seen.

**Step 3: Web Application Base Directory**

SAS webAF, like Microsoft's Visual Studio® and many other development environments, depends on the concept of a software Project. This is simply a collection of files relating to some specific application. SAS uses a specific project directory containing a project file with the extension .afx to keep track of this collection. The default directory will be something like the following:

C:\AppDevStudio\WebAF\Projects\MyProject

In addition to the project directory, however, webAF will put all of your JSPs, servlets and any other web application components in a project-specific Web application base directory. Basically, you have two choices here: the default directory and the one SAS recommends. (There is probably some good reason why these are different.) The default value is to use the same folder as the .afx project file. The recommended one is a sub-folder of the AppDev Studio webapps directory; for example

C:\AppDevStudio\WebAppDev\webapps\MyProject

That is, each project has two associated directories: the Project folder, including the information that AppDev Studio needs to manage the project, and the Web application base directory, which contains the actual project files. If you decide to use this recommended approach, you have a little preliminary work to do first. The SAS web application templates directory

D:\AppDevStudio\WebAppDev\templates

contains a couple of web application "starter" directories, called empty and sasads. Of course empty is not really empty.
It has the following structure:

- empty\WEB-INF
- empty\META-INF
- empty\WEB-INF\classes
- empty\WEB-INF\lib

along with a few cleverly structured files that assume you have used the default layout as recommended in the previous section. The sasads directory has a similar organization, but also includes a folder called sasads\assets that contains a number of handy images in GIF format. The sasads folder is used to access the SAS custom tag library; for example, it contains the tag library descriptor file sasads.tld. SAS webAF uses Java custom tags as controls when building pages; see the references at the end of the paper for more details about tag libraries.

In order to create a SAS web application base directory, you need to copy one of these two folders (you can't go wrong using sasads) into webapps. Now rename it, using the same name as your project. Assuming you want to call your project examples, the new directory will be something like

D:\AppDevStudio\WebAppDev\webapps\examples

Be sure to create this folder first, before you create the project!

Diving into webAF

Step 4. Register Connections

The first step to using remote data services in webAF is registering one or more connections to a SAS job spawner. From the webAF main menu, select **Tools>Register Connections**. (You don't need to have a project open; a connection can apply to any webAF project.) Something like the following list of "Persisted Connections" should appear:

**Figure 1. Register Connections**

When you start webAF for the first time, you will only see one connection: the default one. To create another connection, click on **New**. This will display the **New Connection** window below:

**Figure 2. Create New Connection**

For this example, check the box marked SAS server and web server are the same. That is, the spawner is running on the local host. Obviously, it is possible to connect to a remote host using the Connection Wizard, but for the sake of simplicity we will assume that the data are on the local system. Fill in a valid user name and password to connect to the host. That's it; the wizard will do the rest.

Click on the Test tab, and then on Check Connection. The system will attempt to connect to localhost, that is, to itself. The connection test window displays the following message:

Threaded connection test starting...
SAS session instantiation information

<table>
<thead>
<tr>
<th>Prompt</th>
<th>Timeout</th>
<th>Response</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hello&gt;</td>
<td>120</td>
<td>sas -dmr -comamid tcp -noterminal -cleanup</td>
</tr>
<tr>
<td>PORT=</td>
<td>60</td>
<td>// Connection to SAS established at this point</td>
</tr>
</tbody>
</table>

Host: localhost
Port: 2323

Connection properties differing from the default:

<table>
<thead>
<tr>
<th>Property</th>
<th>Default</th>
<th>Modified</th>
</tr>
</thead>
<tbody>
<tr>
<td>debugTelnetConnectClient</td>
<td>true</td>
<td></td>
</tr>
<tr>
<td>initialStatement</td>
<td></td>
<td></td>
</tr>
<tr>
<td>isAppletCodebaseRelative</td>
<td>false</td>
<td>true</td>
</tr>
<tr>
<td>logTrap</td>
<td>false</td>
<td>true</td>
</tr>
<tr>
<td>persistedName</td>
<td>&lt;Custom Connection&gt; localhost</td>
<td></td>
</tr>
<tr>
<td>serverArchitecture</td>
<td>PC</td>
<td></td>
</tr>
</tbody>
</table>

Connection failed: java.lang.Exception: Connect.C75.ex.txt: Cannot connect to telnet session.
com.sas.net.connect.TelnetClientException: Connection refused: no further information
Connection refused: no further information
java.net.ConnectException: Connection refused: no further information.
>>> Connection expecting Connect Spawner; you may need to start it.

Figure 3. Failed Connection to local host

Oops. You forgot to start the spawner on the local system! So just start the spawner as described in Step 1 above. Now you should see the following message:

Threaded connection test starting...
Telnet session established on Mon Jun 16 17:51:47 PDT 2003
Telnet client: com.sas.net.connect.SASTelnetClient
Host: ASTERIX
Port: 2323
Looking for message from host containing one of the following
Hello>
Received: Hello>
Sent: sas -dmr -comamid tcp -noterminal -cleanup
Looking for message from host containing one of the following
SESSION ESTABLISHED
Received: SESSION ESTABLISHED
NOTE: Copyright (c) 1999-2001 by SAS Institute Inc., Cary, NC, USA.
NOTE: SAS (r) Proprietary Software Release 8.2 (TS2M0)
NOTE: This session is executing on the WIN_PRO platform.
NOTE: SAS initialization used:
      real time 0.44 seconds
      cpu time 0.14 seconds
1 %put RemoteSASInfoStart &SYSVER RemoteSASInfoEnd;
RemoteSASInfoStart 8.2 RemoteSASInfoEnd
NOTE: PROCEDURE PRINTTO used:
      real time 0.01 seconds
      cpu time 0.01 seconds
NOTE: SAS Server: Authorization commencing...
NOTE: SAS Server: Client LOGON
NOTE: NEW task=3 factory=8387 oid=8425 class=sashelp.prdauthuserinfo.class
NOTE: NEW task=3 factory=8387 oid=8505 class=SASHELP.RSASMOD.SRVINFO.CLASS
NOTE: Ofactory : _term
NOTE: TERM task=3 factory=8387 oid=8505
NOTE: TERM task=3 factory=8387 oid=8425
NOTE: SAS Server: Client LOGOFF
NOTE: Stopping task taskid=3 curtask=1
Success!!

Figure 4. Successful Connection to local host

You now have a connection to your local system. To create a connection to a remote PC, just change the value in the Host Name text box from "localhost" to the name of your remote system and supply a valid user ID and password. The way TCP works, as long as both ends of the connection are talking over the same port number, one end can be on the same system as the other end (the "local" host) or on a computer in Australia (assuming you are not in Australia when you're reading this).

Step 5. Creating a JSP Project

Now you are ready to actually create the web application. First, however, as noted above, it is necessary to create a new project to hold the JSP pages. In webAF, select File>New. From the Projects tab, select JavaServer Pages Project. You should see something like the resulting screen:

Figure 5. Creating a New JSP Project

Three text boxes show up: Project name, Java package and Location. The only one of these that you have to specify is the project name: in this case, Examples. Leave the second text box blank. The result is that the Project Wizard does not create a hierarchy of directories to store Java class files (see the webAF Help topic "Assigning a Package Name" for more details). The Location field will automatically fill in with the default AppDev Studio Project directory name; again, this is probably what you want to do. Selecting OK should result in a screen that looks something like the following example.

The Web application base directory is specified as described in the discussion above. The default value is the project location from Figure 5.
You need to change this value to the new folder you created by copying and renaming the template:

```
D:\AppDevStudio\WebAppDev\webapps\examples
```

Checking the box labeled Invoke JSP using the following URL offers two choices: the Default Web server and the WebAppDev Web Server. In this example the second choice is selected. As we saw in Step 2, be sure this server is started before you try to run any applications from within webAF.

The third text box, Invoke JSP using the following URL, also requires some additional explanation. The string

```
http://localhost:${WebPort}/${ProjectName}/index.jsp
```

is composed of a series of environment variables of the form ${varname}. The selection arrow at the right-hand side of the text brings up a list of additional field values that can be automatically inserted into the text. In this case, the generated URL is

```
http://localhost:8082/Examples/index.jsp
```

requesting the page `index.jsp` from the WebAppDev Web Server. The full pathname to this page is

```
d:\AppDevStudio\WebAppDev\webapps\Examples\index.jsp
```

Unlike most Windows commands, however, URLs are case sensitive. Unless you are sure that the true pathnames agree completely with the constructed ones, it is almost certainly better not to use the environment values in the URL string, but instead just hard-code the actual path to the web application directory.

The fourth text box is the name of the initial JSP page in the new project; the default is `index.jsp`, which is probably what you want to call it. If all of the parameters have been specified as illustrated, selecting Next should result in the following confirmation page. Press Finish to create the project.

Figure 7. Project Wizard Summary

Step 6. Creating a new JavaServer Page

For the online examination example, the first screen prompts the student to select a test from the list available. This list is stored as a SAS dataset on the server.

<table>
<thead>
<tr>
<th>Test Number</th>
<th>Subject</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Astronomy</td>
</tr>
<tr>
<td>2</td>
<td>Chemistry</td>
</tr>
<tr>
<td>3</td>
<td>Geology</td>
</tr>
<tr>
<td>4</td>
<td>Meteorology</td>
</tr>
<tr>
<td>5</td>
<td>Physics</td>
</tr>
<tr>
<td>6</td>
<td>Mathematics</td>
</tr>
<tr>
<td>7</td>
<td>Computer Science</td>
</tr>
</tbody>
</table>

Figure 8. Sample Data: EXAMS.TESTS

A new JSP page can be readily constructed using webAF, simply by dropping widgets onto the page. To create a new page in an open project, just select **File>New** from the main menu. The following screen should appear:

Figure 9. Creating a New JavaServer Page

Enter a new **File name**, in this case **index.jsp**, since this is the name of the start page specified above in the previous step (see Figure 6). The **JSP/Servlet** window should open with a new blank page. Note that the first line says

```jsp
<%@taglib uri="http://www.sas.com/taglib/sasads" prefix="sasads"%>
```

This is the link to the `sasads` tag library which, as we will see below, is used by webAF to create form elements. Note that the uri (Uniform Resource Identifier) looks a lot like an anchor tag. You do not need to be connected to the internet in order to display a page containing this tag; the local Java web server understands this reference not as a link, but instead as an identifier. This is just the name of the `sasads` resource, not its location. The rest of this discussion is based on the finished `index.jsp` page shown in Figure 10 below.
Each of the components is discussed in turn, but it is useful to view the entire page at once in order to understand where the process is leading.

```jsp
<%@taglib uri="http://www.sas.com/taglib/sasads" prefix="sasads"%>
<%@ page import="com.sas.collection.StringCollection" %>

<sasads:Connection id="connection1" scope="session"
    initialStatement="libname EXAMS 'd:\\exams';" />
<sasads:DataSet id="table1" connection="connection1"
    dataSet="EXAMS.TESTS" />

<%
   // Get unique test names and codes from data set,
   // add collections to page context
   pageContext.setAttribute("codes",
      new StringCollection(table1.getFormattedColumn(1)));
   pageContext.setAttribute("labels",
      new StringCollection(table1.getFormattedColumn(2)));
%>

<%-- begin HTML --%>
<html>
<head>
<title>JSP Examples</title>
</head>
<body>
<h1 style="color: blue; text-align: center">On-line Exam Demo</h1>
<sasads:Form method="get" action="page1.jsp" style="text-align: center;">
<p><sasads:Choicebox id="test" prolog="<strong>Select Test: </strong>"
      model="codes" descriptionModel="labels" />
<sasads:PushButton text="Begin" /></p>
</sasads:Form>
</body>
</html>
```

Figure 10. Using DataSetInterface to Populate a Listbox Control

Adding a Connection Object

Selecting the SAS tab in the webAF IDE brings up a set of 9 controls. Dropping a Connection control on the Source page results in the following tag:

```jsp
<sasads:Connection id="connection1" scope="session" />
```

Note the sasads custom namespace; the Java code for this control is available from the template library copied in Step 3. This default template has to be modified for the specific connection required. If you know the attribute values for the connection, you can just type them into the source window. It is probably easier, however, to modify the connection properties from the Components tab of the Project Navigator window. Right-click on the selected component in the left window (here sasads:Connection - connection1) and a menu appears that can be used to start the Customizer or change component properties individually. In general, if a component customizer is available, it supports editing of the available component properties. The Customizer brings up a connection editor. In either case (editing the properties directly or using the Customizer), the JSP code in the Source tab is rewritten with the new attributes.

In this example you also need to add an initialStatement attribute allocating a libname on the local host:

```jsp
initialStatement="libname EXAMS 'd:\\exams';" />
```

Note that two backslashes are required: since the interface uses the backslash character as an escape, '\\' gets translated into '\' when it is passed to SAS. Also, use single quotes around the libname directory, since the attribute itself is enclosed in double quotes.

Adding a Dataset Object

Once the connection has been instantiated, the DatasetInterface control must be added to reference the specific EXAMS.TESTS table. As Figure 12 shows, in order to populate the page, it is necessary to include a short JSP scriptlet to set the page attributes.

Figure 12. Java Scriptlet

Note that column 1 in the data set shown in Figure 8 is the test number and column 2 is the name of the test. The data are passed to the page context as Java StringCollection objects. In this instance, the records are provided by the `getFormattedColumn()` method of the dataset class. (As might be expected, there are also methods such as `getFormattedRows()` for the other table components.)
The data for the first variable are added to the page as a collection called `codes`, while the second variable is added as the collection `labels`. These two collections are used below on the HTML form.

**HTML Static Text**

A JavaServer Page is just an HTML page with some special tags added. Figure 13 shows the static web page code. Note that a CSS (Cascading Style Sheet) `style` tag is used to format the heading.

Adding a Form

In order to display dynamic data in HTML, a form element is usually required. SAS has supplied a custom tag to support this action. The form control is available from the Form elements tab of the IDE, as are the Choicebox and PushButton tags. These are just standard HTML elements, with the expected functions. Pressing the "Begin" button submits the page and calls the first page of the selected exam: page1.jsp. This page can be constructed in a similar fashion, just by putting together simple elements using the IDE. The other three pages of this application, page1.jsp, page2.jsp and finish.jsp, are attached as an appendix for the interested reader. Hopefully this discussion has provided enough information to puzzle out how these were constructed and why they work.

Step 7. Putting it All Together

There are three different ways to start the Tomcat engine from within webAF.

1. From the main webAF menu, select **Tools>Service>Start Java Web Server**.
2. Press the F5 function key.
3. Click the **Go** button on the toolbar.

Whichever method is chosen, a window appears at the bottom of the display with something like the following text.

```
Starting tomcat. Check logs/tomcat.log for error messages
2003-06-16 07:13:15 - ContextManager: Adding context Ctx( /examples )
2003-06-16 07:13:15 - ContextManager: Adding context Ctx( )
2003-06-16 07:13:15 - ContextManager: Adding context Ctx( /ServletExample )
2003-06-16 07:13:16 - ContextManager: JspClassDebugInfo: Enabling inclusion of class debugging information in JSP servlets for context "/examples".
2003-06-16 07:13:16 - Scratch dir for the JSP engine is: D:\AppDevStudio\WebAppDev\work\localhost_8082%2Fexamples
2003-06-16 07:13:16 - IMPORTANT: Do not modify the generated servlets
2003-06-16 07:13:16 - ContextManager: JspClassDebugInfo: Enabling inclusion of class debugging information in JSP servlets for context "".
2003-06-16 07:13:16 - ContextManager: JspClassDebugInfo: Enabling inclusion of class debugging information in JSP servlets for context "/ServletExample".
2003-06-16 07:13:17 - PoolTcpConnector: Starting HttpConnectionHandler on 8082
2003-06-16 07:13:17 - PoolTcpConnector: Starting Ajp12ConnectionHandler on 8083
```

Figure 15. Java Web Server Messages

Note that the Java web server is started on port 8082. (Do not forget to start the **SAS Job Spawner** as well!) Selecting the "Execute in Browser" button from the toolbar results in the page shown in Figure 17. Clicking the **Begin** button on the form passes the test number of the desired exam subject as a parameter to `page1.jsp`, which selects the appropriate questions from another table in the same SAS data library.

Conclusion

As should be obvious from this brief introduction, SAS/AppDev Studio is an extremely powerful and flexible collection of tools for web development. It also has a lot of moving parts, and new functionality is being added on an ongoing basis.
The goal of this paper is to try to put all of the pieces together in a systematic overview, so that both novice and experienced web programmers can find the information and examples they need to get started using these tools effectively.

Acknowledgements

SAS® and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are registered trademarks or trademarks of their respective companies.

Appendices

page1.jsp - Display the next question

```jsp
<%@taglib uri="http://www.sas.com/taglib/sasads" prefix="sasads"%>
<%@ page import="com.sas.collection.StringCollection" %>

<sasads:Connection id="conn" scope="session"
    initialStatement="libname EXAMS 'd:\\exams';"/>
<sasads:DataSet id="table1" connection="conn" dataSet="EXAMS.TESTS" />
<sasads:DataSet id="table2" connection="conn" dataSet="EXAMS.QUESTIONS" />
<sasads:DataSet id="table3" connection="conn" dataSet="EXAMS.ANSWERS" />

<%
   // get parameters from URL
   String tnum = request.getParameter("test");
   String qparm = request.getParameter("question");
   int qindex = qparm != null ? Integer.parseInt(qparm) : 0;

   // get test name
   table1.setWhere("tnum eq " + tnum);
   String tname = table1.getFormattedCell(1,2);

   // select questions for this test
   table2.setWhere("tnum eq " + tnum);
   String[] qnum = table2.getFormattedColumn(2);
   String[] qtext = table2.getFormattedColumn(3);

   // select answers for this question
   table3.setWhere("tnum eq " + tnum + " and qnum eq " + qnum[qindex]);
   pageContext.setAttribute("answer_codes",
      new StringCollection(table3.getFormattedColumn(3)));
   pageContext.setAttribute("answers",
      new StringCollection(table3.getFormattedColumn(4)));
%>

<%-- begin HTML --%>
<html>
<head>
<title>On-line Exam Demo: page1.jsp</title>
</head>
<body>
<h1 style="color: blue; text-align: center">On-line Exam: <%= tname %></h1>
<sasads:Form action="page2.jsp" method="get">
<sasads:Hidden id="test" text="<%= tnum %>" />
<sasads:Hidden id="question" text="<%= qnum[qindex] %>" />
<%-- display question --%>
(<%= qnum[qindex] %>) <%= qtext[qindex] %>
<%-- display answer choices --%>
<blockquote>
<sasads:Radio id="answer" model="answer_codes" descriptionModel="answers" />
</blockquote>
<center><sasads:PushButton text="Submit" /></center>
</sasads:Form>
</body>
</html>
```
"page1.jsp" : "finish.jsp"; // select answers for this question table3.setWhere("tnum eq " + tnum + " and qnum eq " + qnum); String[] avalues = table3.getFormattedColumn(3); pageContext.setAttribute("answer_codes", new StringCollection(avalues)); String[] answers = table3.getFormattedColumn(4); pageContext.setAttribute("answers", new StringCollection(answers)); // look up the right answer String correct_answer = new String(); for (int i=0; i<answers.length; i++) if (qanswer[qindex].equals(avalues[i])) correct_answer = answers[i]; ``` break; // specify correct answer, compute score String reply = new String(); if (answer.equals(qanswer[qindex])) { reply = "That's right!"; // compute test score, store as session value String s = (String) session.getValue("score"); int score = (s == null) ? 0 : Integer.parseInt(s); session.setAttribute("score", String.valueOf(++score)); } else { reply = "Sorry. The correct answer is: " + correct_answer; } <%-- begin HTML code --%> <html> <head> <title>On-line Exam Demo: page2.jsp</title> </head> <body> <h1 style="color: blue; text-align: center"> On-line Exam: <%= tname %></h1> <blockquote> <sasads:Form action="<%= nextpage %>"> <sasads:Hidden id="test" text="<%= tnum %>"> <sasads:Hidden id="question" text="<%= qnum %>"> <%-- display question --%> (<%= qnum %>) <%= qtext[qindex] %>: <blockquote> <sasads:Radio model="answer_codes" descriptionModel="answers" selectedItem="<%= answer %>"> <%-- display correct answer --%> <p><%= reply %></p> </blockquote> <center><sasads:PushButton text="Next Question"/></center> </sasads:Form> </blockquote> <%-- finish.jsp --%> <html> <head> <title>On-line Exam Demo: finish.jsp</title> </head> <body> <h1 style="color: blue; text-align: center"> On-line Exam Demo</h1> <center><strong>Your score: <%= score %></strong></center> </body> </html> <%= session.getAttribute("score") %> correct out of <%= request.getParameter("question") %> questions. </strong></center> <% // re-initialize test score session.invalidate(); %> </body> </html>
The SIGNAL Approach to the Design of System Architectures*

Abdoulaye GAMATIÉ, Thierry GAUTIER
IRISA / INRIA
F-35042 RENNES, France
{agamatie, gautier}@irisa.fr

(HAL Id: hal-00541913, https://hal.archives-ouvertes.fr/hal-00541913, submitted on 1 Dec 2010.)

*This work has been supported by the European project IST SAFEAIR (Advanced Design Tools for Aircraft Systems and Airborne Software, http://www.safeair.org/).

Abstract

Modeling plays a central role in system engineering. It significantly reduces costs and efforts in the design by providing developers with means for cheaper and more relevant experimentation, so design choices can be assessed earlier. The use of a formalism such as the synchronous language SIGNAL, which relies on solid mathematical foundations for the modeling, allows validation. This is the aim of the methodology defined for the design of embedded systems, where emphasis is put on formal techniques for verification, analysis, and code generation. This paper mainly focuses on the modeling of architecture components using SIGNAL. For illustration, we consider the modeling of a bounded FIFO queue, which is intended to be used for communication protocols. We bring out the capabilities of SIGNAL to allow specifications in an elegant way, and we check a few elementary properties on the resulting model for correctness.

1. Introduction

Nowadays, systems in general are ever larger and more complex. Obviously, their engineering becomes very delicate, since the complexity of data structures and computation algorithms is challenging. On the other hand, the design cycle usually involves multiple formalisms and various tools. A major drawback in such a context is that the design and checking tasks are inherently long and complex. In the case of distributed embedded systems, there are additional difficulties: on the one hand, such systems have to be separated efficiently into components, and suitable communication mechanisms between these components must be provided; on the other hand, the validation of the whole is required.

Among the solutions that have been proposed to overcome the above obstacles are Architecture Description Languages (ADLs) [7], the Unified Modeling Language (UML) [16], and the synchronous technology [11]. They all provide formalisms and tool-sets that help describe systems. Solutions which adopt formal methods are widely accepted as a confident way of guaranteeing the quality of designs: as a matter of fact, verification and validation are facilitated. It therefore appears desirable for a design formalism to have a well-defined formal semantics. Unfortunately, this is not the case for all the solutions (for instance, UML only has a semi-formal semantics). The synchronous technology emerges as one of the most promising ways of guaranteeing a safe design of embedded systems. It offers practical design assistance tools with a formal basis.
These include possibilities of high-level specification, modular verification of properties on these specifications, automatic code generation through formal transformations, and validation of the generated code against the specifications. As a result, early architectural choices and behavioral simulation are enabled, and design ambiguities and errors can be significantly reduced. POLYCHRONY, the programming environment of the synchronous language SIGNAL [5], implemented by INRIA (http://www.irisa.fr/espresso), incorporates all these functionalities. (There is also an industrial version, SILDEX, implemented and commercialized by TNI-Valiosys, http://www.tni-valiosys.com.)

A major objective of our work is the definition and implementation of an enhanced methodology for the design of embedded systems within POLYCHRONY. This methodology must significantly reduce the risk of design errors and shorten overall design times. Earlier results have been established during the SACRES project [9]. The main add-on of the methodology is to allow automatic generation of efficient and validated distributed code from the specifications, entirely replacing the manual coding phase still employed in current industrial design flows. In this paper, we give an overview of the methodology, but we rather concentrate on the approach to modeling components used in architecture descriptions.

The remainder of the paper is organized as follows: section 2 presents the SIGNAL language. Then, in section 3, we introduce the methodology and highlight its main steps from specification to implementation. In section 4, we illustrate the design of architecture components through the modeling of a First In First Out queue, and we verify some properties (e.g. safety) on the resulting model for correctness. Finally, a conclusion is given in section 5.

2. The SIGNAL language

The underlying theory of the synchronous approach [2] is that of discrete event systems and automata theory. Time is logical: it is handled according to partial order and simultaneity of events. Durations of execution are viewed as constraints to be verified at the implementation level. Typical examples of synchronous languages [11] are ESTEREL, LUSTRE and SIGNAL. They differ mainly from each other in their programming style: the first one adopts an imperative style, whereas the two others are dataflow oriented (LUSTRE is functional and SIGNAL is relational). However, there have been joint efforts to provide a common format, DC+ [1], which allows the interoperability of tools.

The SIGNAL language [5] handles unbounded series of typed values \((x_t)_{t \in \mathbb{N}}\), called signals, denoted as x in the language, and implicitly indexed by discrete time (denoted by t in the semantic notation). At a given instant, a signal may be present, in which case it holds a value, or absent, in which case it is denoted by the special symbol \(\perp\) in the semantic notation. There is a particular type of signals called event: a signal of this type is always true when it is present (otherwise, it is \(\perp\)). The set of instants where a signal x is present is called its clock; it is noted ^x (which is of type event) in the language. Signals that have the same clock are said to be synchronous. A SIGNAL program, also called process, is a system of equations over signals.
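As a small illustration of these notions (a hypothetical sketch, not taken from the paper; the names COUNT, tick, n and zn are invented), the following process counts the occurrences of an input event:

process COUNT = ( ? event tick;
                  ! integer n; )
   (| n := zn + 1
    | zn := n $ 1 init 0
    | n ^= tick
    |)
   where
      integer zn;
   end;

At each occurrence of tick, n is incremented by one; the equation n ^= tick constrains n (and hence zn) to be synchronous with tick.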
The kernel language. SIGNAL relies on a handful of primitive constructs, which are combined using a composition operator:

- **Functions.** \(y := f(x_1, \ldots, x_n)\), where \(y_t \neq \perp \iff x_{1t} \neq \perp \iff \ldots \iff x_{nt} \neq \perp\), and \(\forall t:\ y_t = f(x_{1t}, \ldots, x_{nt})\).
- **Delay.** \(y := x\ \$\ 1\ \text{init}\ y0\), where \(x_t \neq \perp \iff y_t \neq \perp\), \(\forall t > 0:\ y_t = x_{t-1}\), and \(y_0 = y0\).
- **2-args down-sampling.** \(y := x\ \text{when}\ b\), where \(y_t = x_t\) if \(b_t = \text{true}\), else \(y_t = \perp\).
- **Deterministic merging.** \(z := x\ \text{default}\ y\), where \(z_t = x_t\) if \(x_t \neq \perp\), else \(z_t = y_t\).
- **Hiding.** \(P\ \text{where}\ x\) denotes that the signal \(x\) is local to the process \(P\).
- **Synchronous parallel composition** of \(P\) and \(Q\), written \((|\ P\ |\ Q\ |)\). It corresponds to the union of the systems of equations represented by \(P\) and \(Q\).

These core constructs are of sufficient expressive power to derive other constructs for comfort and structuring. The following operators are also used in the next sections:

- **1-arg down-sampling.** \(y := \text{when}\ b\), where \(y_t = \text{true}\) if \(b_t = \text{true}\), else \(y_t = \perp\).
- **Synchronization.** \(x_1\) ^= ... ^= \(x_n\), where \(x_{1t} \neq \perp \iff \ldots \iff x_{nt} \neq \perp\) (i.e. \(x_1, \ldots, x_n\) are synchronous). Clock union, written ^+, as well as intersection and difference of clocks, are defined similarly.
- **Sliding window.** \(y := x\ \text{window}\ n\ \text{init}\ y0\), where, for all \(t \geq 1\) and \(i \in \{0, \ldots, n-1\}\): \((t + i \geq n) \Rightarrow (y_t[i+1] = x_{t+i-n+1})\), and \((1 \leq t + i < n) \Rightarrow (y_t[i+1] = y0[t+i])\).
- **Memory.** \(y := x\ \text{cell}\ b\ \text{init}\ y0\), defined as:

(| y := x default (y $ 1 init y0)
 | y ^= x ^+ (when b)
 |)

The next example illustrates the meaning of the sliding window and the memory operators. Let us consider a process defined as follows:

(| y := x window 3 init [-1, 0]
 | z := x cell b init 0
 |)

Signals x and z are of integer type, b is a boolean, and y is a 3-array of integers. A possible run is:

x : ⊥   1         2        ⊥   ⊥   3        ...
b : t   ⊥         f        t   f   t        ...
y : ⊥   [-1,0,1]  [0,1,2]  ⊥   ⊥   [1,2,3]  ...
z : 0   1         2        2   ⊥   3        ...

3. A methodology for the design of embedded systems

This methodology relies on the theory of desynchronization [3], which defines the formal basis for an effective implementation of synchronous programs on asynchronous architectures, without changing their original semantics. Basically, the design of distributed embedded systems consists in the distribution of a SIGNAL program representing a functional graph of flows, operators and dependencies. The target architecture is composed of a set of possibly heterogeneous execution components (processors, microcontrollers...). A general comment is that the level of detail at which the architecture needs to be known depends quite a lot on the refinement of the mapping to the chosen architecture.
This means that in the simplest cases, the amount of data required is fairly small, and simple to assess:

- the set of processors or tasks, and the mapping from operations or sub-processes in the application specification to those processors or tasks. This information enables the partitioning of the graph into sub-graphs grouped according to the mapping.
- the topology of the network of processors, the set of connections between processors, and a mapping from inter-process communications to these communication links. This is useful in the case of signals exchanged between processes located on different processors or tasks, if several of them have to be routed through the same communication medium.
- a definition of the set of system-level primitives used e.g. for communications (readings and writings to the media). Roughly, this amounts to the profiles of the function library to which the code generated for the application will have to be linked.

Further degrees of refinement of the description may be required for a better architecture adaptation: for example, concerning communications, the type and nature of the links (which could be implemented using shared variables, or synchronous or asynchronous communications...). If the target architecture features an OS, the required model consists basically in the profiles of the corresponding functions. For instance, according to the degree of use of the OS, we need models of synchronization gates, communications (possibly including routing between processors) or tasking functions (in the case of un-interruptible tasks: start and stop; in the case of interruptible tasks: suspend and resume, assignment and management of priority levels), etc. A specification of such functions has been addressed in [8], where a component library (process, communication and synchronization mechanisms...) has been defined for the design of modular avionics architectures.

In Figure 1, we have illustrated the whole design chain. First, the respective SIGNAL descriptions of an application software and the hardware architecture (mainly processors) are given. Then, the application is manually partitioned onto the target architecture. The compilation of the whole (the main part of the compilation is called clock calculus in POLYCHRONY) determines which information has to be exchanged by processors, and communication wires are automatically added between processors. These communications have a synchronous semantics, in other words zero-delay communications: a sent message is instantaneously received. Of course, if the application has to be deployed on an asynchronous architecture, the instantaneousness of the added communications will be lost. However, (bounded) communication mechanisms can easily be modeled with SIGNAL. In that case, the models can be used in the description of the architecture so as to obtain a model of GALS (Globally Asynchronous Locally Synchronous) type. The SIGNAL compiler is used to establish and verify the conditions under which the asynchronous behavior of the application model is equivalent to the synchronous one (this is addressed by the so-called endo/isochrony properties [3]). So, one solution is to define a library of components which can be used to model various communication mechanisms in architectures. For instance, the components presented in [8] can be used to describe avionics applications. They have been modeled with SIGNAL in order to take advantage of its formal basis for architecture analysis.
So, for a particular implementation, we only have to pick the required component model from a pre-defined library and insert it into the current system description. Afterwards, we can use the techniques and tools available in POLYCHRONY to assess the final implementation, or generate separate embedded code for each processor along with the suitable communication protocol. The protocol preserves the semantics of synchronous communication even though an asynchronous communication medium is used.

We already mentioned that verification and validation are essential to our approach. The SIGNAL compiler and tools like SIGALI (a model checker, see the next sections) help with property checking (e.g. safety). Performance evaluation is also possible using implemented techniques such as the profiling of SIGNAL programs [12]. All these features provide a helpful and confident context for the design activity.

In what follows, we concentrate on the design of component models. We model a bounded FIFO queue, usable for message exchanges between several entities. Beyond this particular example, which can be used as a communication component, we illustrate a component-based approach more generally. We also show how to verify properties on a component, and how to abstract it for future use. This brings out the main features of SIGNAL programming, and their benefits for a component-based design within a homogeneous formal framework.

4. Design of a safe FIFO queue

A FIFO queue, called basic_FIFO, is first considered. This component will be enhanced later so as to derive a really "safe" FIFO queue on which properties will be checked.

Model of a basic FIFO queue. Informally, basic_FIFO works as follows:

- On a write request, the incoming message is inserted in the queue regardless of its size limit. When the queue was previously full, the oldest message is lost: the other messages are shifted forward, and the incoming one is put in the queue.
- On a read request, there is an outgoing message whatever the queue status is. When the queue was previously empty, two situations are distinguished: if no message has yet been written, an arbitrary message called the default message is returned; otherwise, the outgoing message is the message that was read last.

Furthermore, for simplicity we suppose that write and read requests never occur simultaneously.

4.1. SIGNAL specifications

Here, we concentrate on the SIGNAL description of the FIFO queue. We give a model for basic_FIFO, then we show how to specify another FIFO queue from the previous one. The corresponding SIGNAL description (also termed process model) is given in Figure 2. Variables message_type, fifo_size and default_mess are parameters, which respectively denote the type of messages, the size limit of the queue, and the default message value. The input signals mess_in and access_clock are respectively the incoming message (its presence denotes a write request) and the queue access clock (i.e. the instants of read/write requests). The output signals are mess_out, nbmess, OK_write and OK_read. They respectively represent the outgoing message, the current number of messages in the queue, and the conditions for writing and reading.

Now, we can take a look at the meaning of the statements in the process body (a sketch of the first two equations is given below). Let us begin with the equation (1.b); it defines the local signal prev_nbmess, which denotes the previous number of messages in the queue.
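Since Figure 2 itself is not reproduced in this text, the following is only a plausible sketch of what equations (1.a) and (1.b) could look like; it is reconstructed from the prose description, not quoted from the paper:

(| nbmess := (prev_nbmess + 1) when ^mess_in when OK_write      % (1.a) %
             default (prev_nbmess - 1) when ^mess_out when OK_read
             default prev_nbmess
 | prev_nbmess := nbmess $ 1 init 0                             % (1.b) %
 |)

The number of messages is incremented on an authorized write, decremented on an authorized read, and kept unchanged otherwise, exactly as the following paragraph explains.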
This signal is used in (1.c) and (1.d) to define respectively when the queue can be "safely" written (the size limit is not reached) and read (at least one message has been received); this is the meaning of the signals OK_write and OK_read. The statement (1.a) expresses how the current number of messages is calculated: its previous value is incremented by one when there is a write request and the queue was not full; it is decremented by one when there is a read request and the queue was not empty; otherwise it remains unchanged. The equation (1.h) states that the value of nbmess changes whenever there is a request on the queue.

The equation (1.e) defines the message queue. The signal queue is an array of dimension fifo_size that contains the fifo_size latest values of mess_in (expressed by the window operator). The cell operator makes the signal queue available whenever access_clock is present (i.e. whenever there is a request). Finally, (1.f) means that on a read request (i.e. at the clock ^mess_out), the outgoing message is either the previous one if the FIFO is empty (defined in (1.g)), or the oldest message in the queue. In the resulting trace (in Figure 2), the type of the messages is integer, the size limit is 2, and the default message value is -1. Henceforth, the basic_FIFO model can be used to describe other FIFO queues. This is the topic of the next paragraph.

Model of a safe FIFO queue. In the model depicted in Figure 3, the interface is slightly different from that of basic_FIFO. The parameters are the same. A new input signal get_mess has been added: it denotes a read request. The signal nbmess, which was previously an output of basic_FIFO, is now a local signal. The statement (2.a) defines the access clock as the union of the instants where read/write requests occur. Equations (2.b) and (2.c) are in charge of ensuring a safe access to the queue in basic_FIFO. The process call in (2.d) has the local signal new_mess_in as input. This signal is defined only when basic_FIFO was not full, as stated by (2.b). Similarly, (2.c) expresses that on a read request, a message is received only when basic_FIFO was not empty. In the trace in Figure 3, the same parameters as for basic_FIFO are considered. The safe_FIFO component will serve later in the description of communication protocols such as the LTTA protocol (Loosely Time-Triggered Architectures) [4].

We observe that modularity and reusability are key features of SIGNAL programming. They favor component-based designs: by constraining a given component, one can derive others. The most difficult task is the identification of suitable basic components. Moreover, the adaptability of components is very flexible. As a matter of fact, the SIGNAL process model enables "generic" components by parameterizing the interface (e.g. in the above models, the type of messages is a parameter). Finally, combined with the other characteristics of the language (e.g. the possibility of non-deterministic specifications), richer descriptions are enabled.

4.2. Property verification

As mentioned earlier in the paper, a benefit of using SIGNAL for designs is the availability of formal verification tools. Two kinds of properties can be distinguished for SIGNAL programs [13]: invariant properties (e.g. a program exhibits no contradiction between the clocks of the involved signals) on the one hand, and dynamical properties (e.g. reachability, liveness) on the other hand. The SIGNAL compiler itself addresses only the first kind.
For a given SIGNAL program, it checks the consistency of the constraints between the clocks of signals, and statically proves properties. Dynamical properties are addressed by the model checker SIGALI [14], available within POLYCHRONY. SIGALI relies on the theory of polynomial dynamical systems. Roughly speaking, a SIGNAL program is abstracted into a system of polynomial equations\(^9\) over $\mathbb{Z}/3\mathbb{Z}$. This makes it possible to encode all the possible statuses of a boolean signal: $1$ for $true$, $-1$ for $false$, and $0$ for $\perp$. For a non-boolean signal, only the fact that this signal is present (whatever its value is) or absent is encoded. So, the presence is denoted by $1$, and the absence by $0$. It must be noted that this "translation" fully takes into account the information about boolean variables (values and clocks), whereas for non-boolean signals, the information on values is lost. Therefore, it is important that a SIGNAL program that will be analyzed by SIGALI is specified as much as possible using boolean variables, since the reasoning capabilities capture only synchronization and logic properties. In fact, one most often has to consider a boolean abstraction of programs with numerical properties. This is the main limitation\(^{10}\) of SIGALI. In the sequel, the properties of interest concern first safety:

- $(S_1)$: A write to the full queue never happens.
- $(S_2)$: A read from the empty queue never happens.

Other desirable properties are for example the following invariants:

- $(I_1)$: A message can always be written in the queue when it is not full.
- $(I_2)$: A message can always be read from the queue when it is not empty.

To check these properties, we consider an abstraction of the process safe_FIFO. It can be obtained using the state variables (signals that are defined by delay or memory operators) that appear in the program. They capture the dynamics of the system defined by the process. Here, the state variables are nbmess, queue and prev_mess_out (defined in basic_FIFO). On the other hand, we notice that the properties of interest can be addressed by considering only the state variable nbmess. Indeed, the queue access conditions rely on this single signal. Therefore, in the abstraction, a state will be simply characterized by nbmess. Furthermore, since SIGALI does not address numerical properties, nbmess must be encoded with boolean variables. An n-FIFO queue can be represented by an automaton with $(n + 1)$ states, where a state denotes the current number of messages in the queue. For the sake of simplicity, we consider a 2-FIFO queue, since all the relevant configurations can already be addressed; so, the results remain valid for any bounded n-FIFO queue where $n > 2$. Moreover, it is assumed that messages in the queue are read in the same order they have been written (i.e. the FIFO queue satisfies the sampling theorem in the protocol for Loosely Time-Triggered Architectures [4]). The automaton in Figure 4 abstracts the behavior of a 2-FIFO queue. A state $sk$ (represented by a circle) denotes the fact that the queue currently contains $k$ messages. In other words, for any $k \in \{0, 1, 2\}$:

\[
(\text{nbmess} = k \Rightarrow sk = \text{true}) \;\land\; (\text{nbmess} \neq k \Rightarrow sk = \text{false})
\]

The state $s0$ represents the initial state. Labels $in$ and $out$ are respectively write and read requests.
Two special states (represented by rectangles) have also been added. They characterize "illegal" accesses to the queue: ERR_empty is reached on an attempt to read an empty queue, and ERR_full is reached when overwriting a full queue. They are also encoded by boolean variables. Automata are very easy to specify in SIGNAL. To show how states can be defined, let us consider the following specification of s0 (the clock of a signal x is noted ^x):

| s0 := true when prev_s1 when ^mess_out
        default false when prev_s0 when (^mess_in + ^mess_out)
        default prev_s0
| prev_s0 := s0$1 init true

In the above statements, prev_s1 represents the previous value of the state s1. All the other states are specified in a similar way. The definitions of the signals OK_write and OK_read follow:

| OK_write := false when (prev_err_full or prev_s2) default true
| OK_read := false when (prev_err_empty or prev_s0) default true

The first equation means that a write request is not authorized when there are already two messages in the queue (prev_s2 is true) or the queue has been overloaded previously (prev_err_full is true); otherwise it can be accepted. In the same way, the other statement specifies when a read request is legal. The signals s0, s1, s2, ERR_empty, ERR_full, OK_write and OK_read are synchronized with access_clock. To use SIGALI, a file\(^{11}\) must be produced with the required input format. Let us consider the script in Figure 5: SIGALI is first invoked (line (1)), then all the necessary files are loaded (Creat_SDP.lib and Verif_Determ.lib contain specific functions of SIGALI). The predicates on lines (5) and (6) show that the state Err_full always remains false, and that it cannot become true (i.e. property S1). Likewise, the last statement shows that the state Err_empty is not reachable (i.e. property S2). Now, for the invariant properties\(^{12}\) I1 and I2, we consider observers, represented by boolean state variables. We have to show that these variables always carry the value true. Let inv1 and inv2 denote respectively the observers for (I1) and (I2).

1. (I1) is described as follows:
- On a write request (denoted by the presence of mess_in), when the queue is either in s0 or s1, the signal inv1 carries the value true if the message is actually written into the queue (i.e. new_mess_in is present), else inv1 is false.
- Otherwise, inv1 keeps its previous value.

The corresponding SIGNAL code is:

| actual_write := true when ^new_mess_in
| inv1 := actual_write when (z_s0 or z_s1) when ^mess_in default z_inv1
| z_inv1 := inv1 $ 1 init true

The boolean actual_write denotes the fact that a message is actually put into the queue.

2. In a similar way, (I2) is encoded by the following SIGNAL code:

| actual_read := true when ^mess_out
| inv2 := actual_read when (z_s1 or z_s2) when ^pull_mess default z_inv2
| z_inv2 := inv2 $ 1 init true

Here also, all the new variables have the same clock as the signal access_clock.

\(^9\)In fact, a symbolic automaton. \(^{10}\)Some solutions [6] have been proposed to cope with the problem of numerical properties verification. \(^{11}\)This file has the extension .z3z, and is obtained by compiling the SIGNAL source file with the option -z3z. \(^{12}\)Expressing these invariant properties requires referring to the dynamics of the program; this is addressed by the so-called observers.
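Putting the pieces together, the abstracted automaton of Figure 4, extended with its two error states, can be mirrored by an explicit transition function. The following plain C fragment is only an illustrative sketch (the guards OK_write/OK_read are deliberately left out, so that an illegal run can be exhibited):

```c
#include <stdio.h>

/* States of the abstracted 2-FIFO automaton (Figure 4). */
typedef enum { S0, S1, S2, ERR_EMPTY, ERR_FULL } state;
typedef enum { IN, OUT } label;   /* write / read request */

/* One unguarded transition step; the error states are sinks. */
state step(state s, label l) {
    switch (s) {
    case S0: return (l == IN) ? S1 : ERR_EMPTY; /* read on empty queue */
    case S1: return (l == IN) ? S2 : S0;
    case S2: return (l == IN) ? ERR_FULL : S1;  /* write on full queue */
    default: return s;                          /* ERR_* are absorbing */
    }
}

int main(void) {
    /* An unguarded trace of three write requests reaches ERR_FULL. */
    label trace[] = { IN, IN, IN };
    state s = S0;
    for (int i = 0; i < 3; i++) s = step(s, trace[i]);
    printf("final state = %d (4 = ERR_FULL)\n", (int)s);
    return 0;
}
```

Properties (S1) and (S2) then amount to showing that, once accesses are guarded by OK_write and OK_read, neither ERR_full nor ERR_empty is reachable; this is exactly what the SIGALI predicates check.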
Then, the properties can be checked as shown in Figure 6. The component safe_FIFO can be embodied further in a communication protocol, where new properties are verified (e.g. absence of deadlock during accesses by message writers and readers). The protocol itself will be further used within an application whose behavior can be analyzed, and so on. In that way, properties are incrementally checked and specifications are guaranteed to be correct. Consistency checking and analysis of component models are essential to their dependability. Here, both models and properties are described using a unique formalism, the SIGNAL model, and adequate tools for verification and analysis are provided by the programming environment. This ensures a certain coherence in the design, contrary to approaches such as [15], where an implementation language (Java) and a formal specification language (labeled transition systems) are combined to implement systems.

Discussion. Essentially, two issues can be observed about the scalability of our approach to large systems. The first concerns the correct distribution of the system functionalities on a given architecture. This is achieved by providing a synchronous model of the functionalities, on which one can perform verifications and analyses to make sure that requirements are met. In particular, one can check whether or not the endo/isochrony properties [3] hold, for a safe deployment of the model on a distributed architecture. The second issue proceeds in an incremental way. Instead of modeling the whole system through its functionalities, its sub-systems are specified. They can be analyzed separately, and "composed" using communication media (e.g. the safe FIFO described here) or protocols (e.g. the LTTA protocol [4]), defined also in the SIGNAL model. This composition must obviously guarantee some critical properties in the resulting system. For instance, there must be no loss of messages during information exchanges between sub-systems. This is addressed by the so-called sampling theorem in the LTTA protocol [4]. The SIGNAL model enables such analysis. However, the main restriction of the approach lies in the fact that synchronous modeling does not allow the description of unbounded resources. Typically, an unbounded FIFO queue cannot be completely modeled in the SIGNAL model. One may only define an associated abstraction, which does not provide all the necessary implementation details for in-depth analysis purposes. In embedded systems, resources are always limited, so the approach remains valid.

5. Conclusions

We have argued in this paper that the SIGNAL language favors an efficient approach to the design of embedded systems. Basically, a system is first specified in the SIGNAL model.
Then, through formal transformations, another SIGNAL model is derived, which reflects the target architecture. These transformations proceed by a desynchronization of synchronous programs, based on the endo/isochrony properties [3]. The level of detail in which the architecture needs to be described may require specific mechanisms to achieve, for instance, communications, synchronizations, etc. Such mechanisms can also be specified and analyzed in the SIGNAL model, as we illustrated here for the modeling of a safe FIFO queue. We have also shown how properties are verified to guarantee the dependability of this FIFO queue for a further use in communication protocols (e.g. LTTA [4]). In the same way, the protocol itself can be analyzed, then may be used in a system which can be part of a larger system, so that the analysis can be performed at every level of complexity. We advocate a design methodology including high-level specifications using the modularity and reusability features of SIGNAL programming; formal verification and performance evaluation; and automatic code generation. In such a context, the formal basis of SIGNAL is a key aspect for validation, in contrast to other approaches based on a formalism like UML, whose formal foundations are not well established. This is essential to a reliable design of safety-critical systems. A design of a real-world avionics application using this approach is currently under study. The components used [8] have been defined from the specifications of the avionics standard ARINC 653. They include mechanisms for communication (e.g. buffer, blackboard), synchronization (e.g. semaphore), and execution (e.g. processes and associated management services), etc. This work is to be extended to applications from other safety-critical domains like the automotive or nuclear industries. In that case, an adaptation of the existing component models may be required to conform to the considered standards (e.g. OSEK for automotive). Relatedly, a modeling of the real-time Java API using SIGNAL is currently being studied. This should give access to the available formal techniques and tools of POLYCHRONY for the analysis of real-time Java applications.

6. Acknowledgments

We thank Hervé Marchand for his valuable advice on the use of SIGALI.

References
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-00541913/document", "len_cl100k_base": 7568, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 35547, "total-output-tokens": 9329, "length": "2e12", "weborganizer": {"__label__adult": 0.0005440711975097656, "__label__art_design": 0.0009131431579589844, "__label__crime_law": 0.0005197525024414062, "__label__education_jobs": 0.0008401870727539062, "__label__entertainment": 0.0001099705696105957, "__label__fashion_beauty": 0.00025725364685058594, "__label__finance_business": 0.0003871917724609375, "__label__food_dining": 0.0005335807800292969, "__label__games": 0.0009083747863769532, "__label__hardware": 0.006122589111328125, "__label__health": 0.0008392333984375, "__label__history": 0.0005373954772949219, "__label__home_hobbies": 0.0002104043960571289, "__label__industrial": 0.001522064208984375, "__label__literature": 0.0003592967987060547, "__label__politics": 0.0005092620849609375, "__label__religion": 0.00099945068359375, "__label__science_tech": 0.20166015625, "__label__social_life": 9.840726852416992e-05, "__label__software": 0.00641632080078125, "__label__software_dev": 0.77294921875, "__label__sports_fitness": 0.0004978179931640625, "__label__transportation": 0.0020809173583984375, "__label__travel": 0.0003571510314941406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37125, 0.02115]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37125, 0.44925]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37125, 0.8873]], "google_gemma-3-12b-it_contains_pii": [[0, 1080, false], [1080, 5278, null], [5278, 10381, null], [10381, 13630, null], [13630, 16467, null], [16467, 19453, null], [19453, 24982, null], [24982, 28447, null], [28447, 33803, null], [33803, 37125, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1080, true], [1080, 5278, null], [5278, 10381, null], [10381, 13630, null], [13630, 16467, null], [16467, 19453, null], [19453, 24982, null], [24982, 28447, null], [28447, 33803, null], [33803, 37125, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37125, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37125, null]], "pdf_page_numbers": [[0, 1080, 1], [1080, 5278, 2], [5278, 10381, 3], [10381, 13630, 4], [13630, 16467, 5], [16467, 19453, 6], [19453, 24982, 7], [24982, 28447, 8], [28447, 33803, 9], [33803, 37125, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37125, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
b224540c7ec231495041d2c263d09b46517c07a2
Chapter 6: CPU Scheduling

- Basic Concepts
- Scheduling Criteria
- Scheduling Algorithms
- Thread Scheduling
- Multiple-Processor Scheduling
- Real-Time CPU Scheduling
- Operating Systems Examples
- Algorithm Evaluation

Objectives
- To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
- To describe various CPU-scheduling algorithms
- To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system
- To examine the scheduling algorithms of several operating systems

Basic Concepts
- Maximum CPU utilization obtained with multiprogramming
- CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
- **CPU burst** followed by **I/O burst**
- CPU burst distribution is of main concern

Histogram of CPU-burst Times

CPU Scheduler
- **Short-term scheduler** selects from among the processes in the ready queue, and allocates the CPU to one of them
- Queue may be ordered in various ways
- CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
- Scheduling under 1 and 4 is **nonpreemptive**
- All other scheduling is **preemptive**
- Consider access to shared data
- Consider preemption while in kernel mode
- Consider interrupts occurring during crucial OS activities

Dispatcher
- Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program
- **Dispatch latency** – the time it takes for the dispatcher to stop one process and start another running

Scheduling Criteria
- **CPU utilization** – keep the CPU as busy as possible
- **Throughput** – # of processes that complete their execution per time unit
- **Turnaround time** – amount of time to execute a particular process
- **Waiting time** – amount of time a process has been waiting in the ready queue
- **Response time** – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

Scheduling Algorithm Optimization Criteria
- Max CPU utilization
- Max throughput
- Min turnaround time
- Min waiting time
- Min response time

First-Come, First-Served (FCFS) Scheduling

<table>
<thead>
<tr>
<th>Process</th>
<th>Burst Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>$P_1$</td>
<td>24</td>
</tr>
<tr>
<td>$P_2$</td>
<td>3</td>
</tr>
<tr>
<td>$P_3$</td>
<td>3</td>
</tr>
</tbody>
</table>

- Suppose that the processes arrive in the order: $P_1, P_2, P_3$. The Gantt chart for the schedule is:

```
| P1                     | P2 | P3 |
0                       24   27   30
```

- Waiting time for $P_1 = 0$; $P_2 = 24$; $P_3 = 27$
- Average waiting time: $(0 + 24 + 27)/3 = 17$

Suppose that the processes arrive in the order: $P_2, P_3, P_1$
- The Gantt chart for the schedule is:

```
| P2 | P3 | P1                     |
0    3    6                       30
```

- Waiting time for $P_1 = 6$; $P_2 = 0$; $P_3 = 3$
- Average waiting time: $(6 + 0 + 3)/3 = 3$
- Much better than previous case
- **Convoy effect** – short process behind long process
- Consider one CPU-bound and many I/O-bound processes
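The two averages above can be checked mechanically. A minimal C sketch (assuming, as in the example, that all processes arrive at time 0):

```c
#include <stdio.h>

/* FCFS average waiting time: process i waits for all earlier bursts. */
double fcfs_avg_wait(const int burst[], int n) {
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;          /* waiting time of process i */
        wait  += burst[i];      /* next process waits this much longer */
    }
    return (double)total / n;
}

int main(void) {
    int order1[] = { 24, 3, 3 };  /* arrival order P1, P2, P3 */
    int order2[] = { 3, 3, 24 };  /* arrival order P2, P3, P1 */
    printf("avg wait = %.0f\n", fcfs_avg_wait(order1, 3));  /* 17 */
    printf("avg wait = %.0f\n", fcfs_avg_wait(order2, 3));  /* 3  */
    return 0;
}
```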
Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst
- Use these lengths to schedule the process with the shortest time
- SJF is optimal – gives minimum average waiting time for a given set of processes
- The difficulty is knowing the length of the next CPU request
- Could ask the user

Example of SJF

<table>
<thead>
<tr>
<th>Process</th>
<th>Burst Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>$P_1$</td>
<td>6</td>
</tr>
<tr>
<td>$P_2$</td>
<td>8</td>
</tr>
<tr>
<td>$P_3$</td>
<td>7</td>
</tr>
<tr>
<td>$P_4$</td>
<td>3</td>
</tr>
</tbody>
</table>

- SJF scheduling chart

```
| P4 | P1    | P3     | P2      |
0    3       9       16        24
```

- Average waiting time = $(3 + 16 + 9 + 0) / 4 = 7$

Determining Length of Next CPU Burst
- Can only estimate the length – should be similar to the previous one
- Then pick the process with the shortest predicted next CPU burst
- Can be done by using the length of previous CPU bursts, using exponential averaging
1. $t_n$ = actual length of $n^{th}$ CPU burst
2. $\tau_{n+1}$ = predicted value for the next CPU burst
3. $\alpha$, $0 \leq \alpha \leq 1$
4. Define: $\tau_{n+1} = \alpha t_n + (1 - \alpha) \tau_n$
- Commonly, $\alpha$ set to $\frac{1}{2}$
- Preemptive version called **shortest-remaining-time-first**

Prediction of the Length of the Next CPU Burst

<table>
<tbody>
<tr>
<td>CPU burst ($t_i$)</td>
<td></td>
<td>6</td>
<td>4</td>
<td>6</td>
<td>4</td>
<td>13</td>
<td>13</td>
<td>13</td>
<td>…</td>
</tr>
<tr>
<td>"guess" ($\tau_i$)</td>
<td>10</td>
<td>8</td>
<td>6</td>
<td>6</td>
<td>5</td>
<td>9</td>
<td>11</td>
<td>12</td>
<td>…</td>
</tr>
</tbody>
</table>

Examples of Exponential Averaging
- $\alpha = 0$
- $\tau_{n+1} = \tau_n$
- Recent history does not count
- $\alpha = 1$
- $\tau_{n+1} = t_n$
- Only the actual last CPU burst counts
- If we expand the formula, we get:
\[ \tau_{n+1} = \alpha t_n + (1 - \alpha)\alpha t_{n-1} + \ldots + (1 - \alpha)^j \alpha t_{n-j} + \ldots + (1 - \alpha)^{n+1} \tau_0 \]
- Since both $\alpha$ and $(1 - \alpha)$ are less than or equal to 1, each successive term has less weight than its predecessor

Example of Shortest-remaining-time-first
- Now we add the concepts of varying arrival times and preemption to the analysis

<table>
<thead>
<tr>
<th>Process</th>
<th>Arrival Time</th>
<th>Burst Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>$P_1$</td>
<td>0</td>
<td>8</td>
</tr>
<tr>
<td>$P_2$</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>$P_3$</td>
<td>2</td>
<td>9</td>
</tr>
<tr>
<td>$P_4$</td>
<td>3</td>
<td>5</td>
</tr>
</tbody>
</table>

- **Preemptive** SJF Gantt chart

```
| P1 | P2   | P4    | P1     | P3       |
0    1      5      10      17         26
```

- Average waiting time $= [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5$ msec

Priority Scheduling
- A priority number (integer) is associated with each process.
- The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
- Preemptive
- Nonpreemptive
- SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
- Problem ≡ Starvation – low-priority processes may never execute.
- Solution ≡ Aging – as time progresses, increase the priority of the process.

### Example of Priority Scheduling

<table>
<thead>
<tr>
<th>Process</th>
<th>Burst Time</th>
<th>Priority</th>
</tr>
</thead>
<tbody>
<tr>
<td>$P_1$</td>
<td>10</td>
<td>3</td>
</tr>
<tr>
<td>$P_2$</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>$P_3$</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<td>$P_4$</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>$P_5$</td>
<td>5</td>
<td>2</td>
</tr>
</tbody>
</table>

- Priority scheduling Gantt chart

```
| P2 | P5   | P1        | P3 | P4 |
0    1      6          16   18   19
```

- Average waiting time = 8.2 msec
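The 8.2 msec figure can be reproduced with a short C sketch of non-preemptive priority scheduling (assuming all processes arrive at time 0 and a smaller integer means a higher priority, as above):

```c
#include <stdio.h>

int main(void) {
    int burst[]    = { 10, 1, 2, 1, 5 };  /* P1..P5 */
    int priority[] = { 3, 1, 4, 5, 2 };
    int n = 5, done[5] = { 0 }, time = 0, total_wait = 0;

    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)       /* pick highest-priority process */
            if (!done[i] && (best < 0 || priority[i] < priority[best]))
                best = i;
        total_wait += time;               /* waiting time of this process */
        time += burst[best];
        done[best] = 1;
    }
    /* prints 8.2 */
    printf("average waiting time = %.1f msec\n", (double)total_wait / n);
    return 0;
}
```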
Round Robin (RR)
- Each process gets a small unit of CPU time (time quantum $q$), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
- If there are $n$ processes in the ready queue and the time quantum is $q$, then each process gets $1/n$ of the CPU time in chunks of at most $q$ time units at once. No process waits more than $(n-1)q$ time units.
- Timer interrupts every quantum to schedule the next process
- Performance
- $q$ large $\Rightarrow$ behaves like FIFO
- $q$ small $\Rightarrow$ overhead is too high; $q$ must be large with respect to the context-switch time

## Example of RR with Time Quantum = 4

<table>
<thead>
<tr>
<th>Process</th>
<th>Burst Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>$P_1$</td>
<td>24</td>
</tr>
<tr>
<td>$P_2$</td>
<td>3</td>
</tr>
<tr>
<td>$P_3$</td>
<td>3</td>
</tr>
</tbody>
</table>

- The Gantt chart is:

```
P1  P2  P3  P1  P1  P1  P1  P1
0   4   7   10  14  18  22  26  30
```

- Typically, higher average turnaround than SJF, but better *response*
- $q$ should be large compared to context switch time
- $q$ usually 10ms to 100ms, context switch < 10 usec

Time Quantum and Context Switch Time

Process time = 10

<table>
<thead>
<tr>
<th>quantum</th>
<th>context switches</th>
</tr>
</thead>
<tbody>
<tr>
<td>12</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>9</td>
</tr>
</tbody>
</table>

Turnaround Time Varies With The Time Quantum
- Rule of thumb: 80% of CPU bursts should be shorter than $q$

Multilevel Queue
- Ready queue is partitioned into separate queues, e.g.:
- foreground (interactive)
- background (batch)
- Processes remain permanently in a given queue
- Each queue has its own scheduling algorithm:
- foreground – RR
- background – FCFS
- Scheduling must be done between the queues:
- Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
- Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS

Multilevel Queue Scheduling (highest to lowest priority): system processes, interactive processes, interactive editing processes, batch processes, student processes

Multilevel Feedback Queue
- A process can move between the various queues; aging can be implemented this way
- Multilevel-feedback-queue scheduler defined by the following parameters:
- number of queues
- scheduling algorithms for each queue
- method used to determine when to upgrade a process
- method used to determine when to demote a process
- method used to determine which queue a process will enter when that process needs service

Example of Multilevel Feedback Queue
- Three queues:
- $Q_0$ – RR with time quantum 8 milliseconds
- $Q_1$ – RR with time quantum 16 milliseconds
- $Q_2$ – FCFS
- Scheduling
- A new job enters queue $Q_0$, which is served FCFS
- When it gains the CPU, the job receives 8 milliseconds
- If it does not finish in 8 milliseconds, the job is moved to queue $Q_1$
- At $Q_1$ the job is again served FCFS and receives 16 additional milliseconds
- If it still does not complete, it is preempted and moved to queue $Q_2$
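The demotion rule of this three-queue example can be sketched in a few lines of C (a toy trace of a single job, not a full scheduler; the 30 ms burst is an assumed example value):

```c
#include <stdio.h>

/* A job gets 8 ms in Q0, 16 more in Q1, and finishes under FCFS in Q2. */
void run_job(int burst_ms) {
    int quantum[2] = { 8, 16 };
    for (int q = 0; q < 2; q++) {
        int slice = burst_ms < quantum[q] ? burst_ms : quantum[q];
        burst_ms -= slice;
        printf("Q%d: ran %d ms\n", q, slice);
        if (burst_ms == 0) return;        /* finished, no demotion */
        printf("demoted to Q%d\n", q + 1);
    }
    printf("Q2 (FCFS): ran remaining %d ms\n", burst_ms);
}

int main(void) {
    run_job(30);   /* 8 ms in Q0, 16 ms in Q1, 6 ms in Q2 */
    return 0;
}
```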
Thread Scheduling
- Distinction between user-level and kernel-level threads
- When threads are supported, threads are scheduled, not processes
- Many-to-one and many-to-many models: the thread library schedules user-level threads to run on an LWP
- Known as process-contention scope (PCS) since scheduling competition is within the process
- Typically done via priority set by programmer
- Kernel thread scheduled onto an available CPU is system-contention scope (SCS) – competition among all threads in the system

Pthread Scheduling
- API allows specifying either PCS or SCS during thread creation
- PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
- PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
- Can be limited by OS – Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM

Pthread Scheduling API

```c
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);  /* thread function, defined below */

int main(int argc, char *argv[])
{
    int i, scope;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);

    /* first inquire on the current scope */
    if (pthread_attr_getscope(&attr, &scope) != 0)
        fprintf(stderr, "Unable to get scheduling scope\n");
    else {
        if (scope == PTHREAD_SCOPE_PROCESS)
            printf("PTHREAD_SCOPE_PROCESS");
        else if (scope == PTHREAD_SCOPE_SYSTEM)
            printf("PTHREAD_SCOPE_SYSTEM");
        else
            fprintf(stderr, "Illegal scope value.\n");
    }
    /* (continued below) */
```

Pthread Scheduling API (Cont.)

```c
    /* set the scheduling algorithm to PCS or SCS */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);

    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
    /* do some work ... */
    pthread_exit(0);
}
```

Multiple-Processor Scheduling
- CPU scheduling is more complex when multiple CPUs are available
- **Homogeneous processors** within a multiprocessor
- **Asymmetric multiprocessing** – only one processor accesses the system data structures, alleviating the need for data sharing
- **Symmetric multiprocessing (SMP)** – each processor is self-scheduling; all processes in a common ready queue, or each has its own private queue of ready processes
- Currently, the most common
- **Processor affinity** – a process has affinity for the processor on which it is currently running
- soft affinity
- hard affinity
- Variations including **processor sets**

NUMA and CPU Scheduling

Note that memory-placement algorithms can also consider affinity.

Multiple-Processor Scheduling – Load Balancing
- If SMP, need to keep all CPUs loaded for efficiency
- **Load balancing** attempts to keep the workload evenly distributed
- **Push migration** – a periodic task checks the load on each processor, and if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs
- **Pull migration** – an idle processor pulls a waiting task from a busy processor

Multicore Processors
- Recent trend to place multiple processor cores on the same physical chip
- Faster and consumes less power
- Multiple threads per core also growing
- Takes advantage of memory stall to make progress on another thread while the memory retrieve happens

Multithreaded Multicore System
- **C** compute cycle
- **M** memory stall cycle

```
thread      C M C M C M C M
thread_1      C M C M C M C
thread_0    C M C M C M C
```

Real-Time CPU Scheduling
- Can present obvious challenges
- **Soft real-time systems** – no guarantee as to when a critical real-time process will be scheduled
- **Hard real-time systems** – a task must be serviced by its deadline
- Two types of latencies affect performance
1. Interrupt latency – time from arrival of interrupt to start of routine that services interrupt
2. Dispatch latency – time for the scheduler to take the current process off the CPU and switch to another

Conflict phase of dispatch latency:
1. Preemption of any process running in kernel mode
2. Release by low-priority processes of resources needed by high-priority processes

Priority-based Scheduling
- For real-time scheduling, the scheduler must support preemptive, priority-based scheduling
- But this only guarantees soft real-time
- For hard real-time, it must also provide the ability to meet deadlines
- Processes have new characteristics: *periodic* ones require the CPU at constant intervals
- Has processing time $t$, deadline $d$, period $p$
- $0 \leq t \leq d \leq p$
- *Rate* of a periodic task is $1/p$

Virtualization and Scheduling
- Virtualization software schedules multiple guests onto CPU(s)
- Each guest doing its own scheduling
- Not knowing it doesn't own the CPUs
- Can result in poor response time
- Can affect time-of-day clocks in guests
- Can undo good scheduling algorithm efforts of guests

Rate Monotonic Scheduling
- A priority is assigned based on the inverse of its period
- Shorter periods = higher priority
- Longer periods = lower priority
- $P_1$ is assigned a higher priority than $P_2$.

Missed Deadlines with Rate Monotonic Scheduling

Earliest Deadline First Scheduling (EDF)

Priorities are assigned according to deadlines: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority

Proportional Share Scheduling
- $T$ shares are allocated among all processes in the system
- An application receives $N$ shares where $N < T$
- This ensures each application will receive $\frac{N}{T}$ of the total processor time

POSIX Real-Time Scheduling
- The POSIX.1b standard
- API provides functions for managing real-time threads
- Defines two scheduling classes for real-time threads:
1. SCHED_FIFO – threads are scheduled using a FCFS strategy with a FIFO queue. There is no time-slicing for threads of equal priority
2. SCHED_RR – similar to SCHED_FIFO except time-slicing occurs for threads of equal priority
- Defines two functions for getting and setting the scheduling policy:
1. `pthread_attr_getschedpolicy(pthread_attr_t *attr, int *policy)`
2. `pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)`

POSIX Real-Time Scheduling API

```c
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);  /* thread function, defined below */

int main(int argc, char *argv[])
{
    int i, policy;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);

    /* get the current scheduling policy */
    if (pthread_attr_getschedpolicy(&attr, &policy) != 0)
        fprintf(stderr, "Unable to get policy.\n");
    else {
        if (policy == SCHED_OTHER)
            printf("SCHED_OTHER\n");
        else if (policy == SCHED_RR)
            printf("SCHED_RR\n");
        else if (policy == SCHED_FIFO)
            printf("SCHED_FIFO\n");
    }

    /* set the scheduling policy - FIFO, RR, or OTHER */
    if (pthread_attr_setschedpolicy(&attr, SCHED_FIFO) != 0)
        fprintf(stderr, "Unable to set policy.\n");

    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);

    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
    /* do some work ... */
    pthread_exit(0);
}
```
Operating System Examples
- Linux scheduling
- Windows scheduling
- Solaris scheduling

Linux Scheduling Through Version 2.5
- Prior to kernel version 2.5, ran a variation of the standard UNIX scheduling algorithm
- Version 2.5 moved to constant-order $O(1)$ scheduling time
- Preemptive, priority based
- Two priority ranges: time-sharing and real-time
- **Real-time** range from 0 to 99 and **nice** value range from 100 to 140
- Map into a global priority with numerically lower values indicating higher priority
- Higher priority gets larger $q$
- Task runnable as long as time left in time slice (**active**)
- If no time left (**expired**), not runnable until all other tasks use their slices
- All runnable tasks tracked in a per-CPU **runqueue** data structure
- Two priority arrays (active, expired)
- Tasks indexed by priority
- When no more active, arrays are exchanged
- Worked well, but poor response times for interactive processes

Linux Scheduling in Version 2.6.23+
- **Completely Fair Scheduler** (CFS)
- **Scheduling classes**
- Each has a specific priority
- Scheduler picks the highest-priority task in the highest scheduling class
- Rather than a quantum based on fixed time allotments, based on a proportion of CPU time
- 2 scheduling classes included, others can be added
1. default
2. real-time
- Quantum calculated based on **nice value** from -20 to +19
- Lower value is higher priority
- Calculates **target latency** – interval of time during which each task should run at least once
- Target latency can increase if, say, the number of active tasks increases
- CFS scheduler maintains a per-task **virtual run time** in the variable vruntime
- Associated with a decay factor based on the priority of the task – lower priority means a higher decay rate
- Normal default priority yields virtual run time = actual run time
- To decide the next task to run, the scheduler picks the task with the lowest virtual run time

The Linux CFS scheduler provides an efficient algorithm for selecting which task to run next. Each runnable task is placed in a red-black tree—a balanced binary search tree whose key is based on the value of vruntime.

[Red-black tree diagram]

When a task becomes runnable, it is added to the tree. If a task on the tree is not runnable (for example, if it is blocked while waiting for I/O), it is removed. Generally speaking, tasks that have been given less processing time (smaller values of vruntime) are toward the left side of the tree, and tasks that have been given more processing time are on the right side. According to the properties of a binary search tree, the leftmost node has the smallest key value, which for the sake of the CFS scheduler means that it is the task with the highest priority. Because the red-black tree is balanced, navigating it to discover the leftmost node requires $O(\log N)$ operations (where $N$ is the number of nodes in the tree). However, for efficiency reasons, the Linux scheduler caches this value in the variable `rb_leftmost`, and thus determining which task to run next requires only retrieving the cached value.
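The selection rule itself is simple. The toy C sketch below picks the task with the smallest vruntime using a linear scan; the real kernel keeps tasks in the red-black tree described above and caches the leftmost node instead:

```c
#include <stdio.h>

/* Toy illustration of the CFS selection rule: run the task with the
   smallest virtual run time. A linear scan is used here only to keep
   the sketch short; Linux uses a red-black tree keyed by vruntime. */
struct task { const char *name; long vruntime; };

struct task *pick_next(struct task *t, int n) {
    struct task *next = &t[0];
    for (int i = 1; i < n; i++)
        if (t[i].vruntime < next->vruntime)
            next = &t[i];
    return next;
}

int main(void) {
    struct task rq[] = { {"A", 420}, {"B", 150}, {"C", 300} };
    printf("next task: %s\n", pick_next(rq, 3)->name);  /* B */
    return 0;
}
```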
- Real-time scheduling according to POSIX.1b
- Real-time tasks have static priorities
- Real-time plus normal map into a global priority scheme
- Nice value of -20 maps to global priority 100
- Nice value of +19 maps to priority 139

Windows Scheduling
- Windows uses priority-based preemptive scheduling
- Highest-priority thread runs next
- **Dispatcher** is the scheduler
- A thread runs until it (1) blocks, (2) uses its time slice, or (3) is preempted by a higher-priority thread
- Real-time threads can preempt non-real-time
- 32-level priority scheme
- **Variable class** is 1-15, **real-time class** is 16-31
- Priority 0 is the memory-management thread
- Queue for each priority
- If no runnable thread, runs the **idle thread**

Windows Priority Classes
- Win32 API identifies several priority classes to which a process can belong:
- REALTIME_PRIORITY_CLASS, HIGH_PRIORITY_CLASS, ABOVE_NORMAL_PRIORITY_CLASS, NORMAL_PRIORITY_CLASS, BELOW_NORMAL_PRIORITY_CLASS, IDLE_PRIORITY_CLASS
- All are variable except REALTIME
- A thread within a given priority class has a relative priority:
- TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL, BELOW_NORMAL, LOWEST, IDLE
- Priority class and relative priority combine to give the numeric priority
- Base priority is NORMAL within the class
- If the quantum expires, priority is lowered, but never below base

Windows Priority Classes (Cont.)
- If a wait occurs, priority is boosted depending on what was waited for
- Foreground window given 3x priority boost
- Windows 7 added **user-mode scheduling (UMS)**
- Applications create and manage threads independent of the kernel
- For a large number of threads, much more efficient
- UMS schedulers come from programming language libraries like the C++ **Concurrent Runtime** (ConcRT) framework

## Windows Priorities

<table>
<thead>
<tr>
<th></th>
<th>real-time</th>
<th>high</th>
<th>above normal</th>
<th>normal</th>
<th>below normal</th>
<th>idle priority</th>
</tr>
</thead>
<tbody>
<tr>
<td>time-critical</td>
<td>31</td>
<td>15</td>
<td>15</td>
<td>15</td>
<td>15</td>
<td>15</td>
</tr>
<tr>
<td>highest</td>
<td>26</td>
<td>15</td>
<td>12</td>
<td>10</td>
<td>8</td>
<td>6</td>
</tr>
<tr>
<td>above normal</td>
<td>25</td>
<td>14</td>
<td>11</td>
<td>9</td>
<td>7</td>
<td>5</td>
</tr>
<tr>
<td>normal</td>
<td>24</td>
<td>13</td>
<td>10</td>
<td>8</td>
<td>6</td>
<td>4</td>
</tr>
<tr>
<td>below normal</td>
<td>23</td>
<td>12</td>
<td>9</td>
<td>7</td>
<td>5</td>
<td>3</td>
</tr>
<tr>
<td>lowest</td>
<td>22</td>
<td>11</td>
<td>8</td>
<td>6</td>
<td>4</td>
<td>2</td>
</tr>
<tr>
<td>idle</td>
<td>16</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>

Solaris
- Priority-based scheduling
- Six classes available
- Time sharing (default) (TS)
- Interactive (IA)
- Real time (RT)
- System (SYS)
- Fair Share (FSS)
- Fixed priority (FX)
- A given thread can be in one class at a time
- Each class has its own scheduling algorithm
- Time sharing is a multi-level feedback queue
- Loadable table configurable by sysadmin

### Solaris Dispatch Table

<table>
<thead>
<tr>
<th>priority</th>
<th>time quantum</th>
<th>time quantum expired</th>
<th>return from sleep</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>200</td>
<td>0</td>
<td>50</td>
</tr>
<tr>
<td>5</td>
<td>200</td>
<td>0</td>
<td>50</td>
</tr>
<tr>
<td>10</td>
<td>160</td>
<td>0</td>
<td>51</td>
</tr>
<tr>
<td>15</td>
<td>160</td>
<td>5</td>
<td>51</td>
</tr>
<tr>
<td>20</td>
<td>120</td>
<td>10</td>
<td>52</td>
</tr>
<tr>
<td>25</td>
<td>120</td>
<td>15</td>
<td>52</td>
</tr>
<tr>
<td>30</td>
<td>80</td>
<td>20</td>
<td>53</td>
</tr>
<tr>
<td>35</td>
<td>80</td>
<td>25</td>
<td>54</td>
</tr>
<tr>
<td>40</td>
<td>40</td>
<td>30</td>
<td>55</td>
</tr>
<tr>
<td>45</td>
<td>40</td>
<td>35</td>
<td>56</td>
</tr>
<tr>
<td>50</td>
<td>40</td>
<td>40</td>
<td>58</td>
</tr>
<tr>
<td>55</td>
<td>40</td>
<td>45</td>
<td>58</td>
</tr>
<tr>
<td>59</td>
<td>20</td>
<td>49</td>
<td>59</td>
</tr>
</tbody>
</table>

Solaris Scheduling
- Interrupt threads
- Realtime (RT) threads
- System (SYS) threads
- Fair share (FSS) threads
- Fixed priority (FX) threads
- Timeshare (TS) threads
- Interactive (IA) threads

Solaris Scheduling (Cont.)
- Scheduler converts class-specific priorities into a per-thread global priority
- Thread with the highest priority runs next
- Runs until it (1) blocks, (2) uses its time slice, or (3) is preempted by a higher-priority thread
- Multiple threads at the same priority are selected via RR

Algorithm Evaluation
- How to select a CPU-scheduling algorithm for an OS?
- Determine criteria, then evaluate algorithms
- **Deterministic modeling**
- A type of **analytic evaluation**
- Takes a particular predetermined workload and defines the performance of each algorithm for that workload
- Consider 5 processes arriving at time 0:

<table>
<thead>
<tr>
<th>Process</th>
<th>Burst Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>$P_1$</td>
<td>10</td>
</tr>
<tr>
<td>$P_2$</td>
<td>29</td>
</tr>
<tr>
<td>$P_3$</td>
<td>3</td>
</tr>
<tr>
<td>$P_4$</td>
<td>7</td>
</tr>
<tr>
<td>$P_5$</td>
<td>12</td>
</tr>
</tbody>
</table>

Deterministic Evaluation
- For each algorithm, calculate the minimum average waiting time
- Simple and fast, but requires exact numbers for input; applies only to those inputs
- FCFS is 28ms
- Non-preemptive SJF is 13ms
- RR is 23ms

Queueing Models
- Describes the arrival of processes, and CPU and I/O bursts, probabilistically
- Commonly exponential, and described by the mean
- Computes average throughput, utilization, waiting time, etc.
- Computer system described as a network of servers, each with a queue of waiting processes
- Knowing arrival rates and service rates
- Computes utilization, average queue length, average wait time, etc.

Little's Formula
- $n$ = average queue length
- $W$ = average waiting time in queue
- $\lambda$ = average arrival rate into queue
- Little's law – in steady state, processes leaving the queue must equal processes arriving, thus: $n = \lambda \times W$
- Valid for any scheduling algorithm and arrival distribution
- For example, if on average 7 processes arrive per second, and there are normally 14 processes in the queue, then the average wait time per process = 2 seconds

Simulations
- Queueing models limited
- **Simulations** more accurate
- Programmed model of the computer system
- Clock is a variable
- Gather statistics indicating algorithm performance
- Data to drive the simulation gathered via
- Random number generator according to probabilities
- Distributions defined mathematically or empirically
- Trace tapes record sequences of real events in real systems

Evaluation of CPU Schedulers by Simulation

[Diagram: actual process execution drives simulations of FCFS, SJF, and RR (q = 14), producing performance statistics for each.]

Implementation
- Even simulations have limited accuracy
- Just implement the new scheduler and test in real systems
- High cost, high risk
- Environments vary
- Most flexible schedulers can be modified per-site or per-system
- Or APIs to modify priorities
- But again environments vary
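The three figures above (28, 13, and 23 ms) can be reproduced by a short deterministic simulation. A C sketch follows; a time quantum of 10 ms is assumed for RR, which yields the quoted value:

```c
#include <stdio.h>

#define N 5
static const int burst[N] = { 10, 29, 3, 7, 12 };  /* P1..P5, arrival at 0 */

static double fcfs(void) {
    int t = 0, wait = 0;
    for (int i = 0; i < N; i++) { wait += t; t += burst[i]; }
    return (double)wait / N;
}

static double sjf(void) {                 /* non-preemptive SJF */
    int done[N] = { 0 }, t = 0, wait = 0;
    for (int k = 0; k < N; k++) {
        int best = -1;
        for (int i = 0; i < N; i++)       /* shortest remaining job */
            if (!done[i] && (best < 0 || burst[i] < burst[best])) best = i;
        wait += t; t += burst[best]; done[best] = 1;
    }
    return (double)wait / N;
}

static double rr(int q) {                 /* round robin, quantum q */
    int left[N], t = 0, wait = 0, remaining = N;
    for (int i = 0; i < N; i++) left[i] = burst[i];
    while (remaining > 0) {
        for (int i = 0; i < N; i++) {     /* cyclic scan = RR at arrival 0 */
            if (left[i] == 0) continue;
            int slice = left[i] < q ? left[i] : q;
            t += slice; left[i] -= slice;
            if (left[i] == 0) { wait += t - burst[i]; remaining--; }
        }
    }
    return (double)wait / N;
}

int main(void) {
    printf("FCFS: %.0f ms  SJF: %.0f ms  RR(q=10): %.0f ms\n",
           fcfs(), sjf(), rr(10));        /* 28, 13, 23 */
    return 0;
}
```

End of Chapter 6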
{"Source-Url": "http://deltauniv.edu.eg/new/engineering/wp-content/uploads/ch6.pdf", "len_cl100k_base": 7606, "olmocr-version": "0.1.48", "pdf-total-pages": 68, "total-fallback-pages": 0, "total-input-tokens": 102141, "total-output-tokens": 9577, "length": "2e12", "weborganizer": {"__label__adult": 0.00030303001403808594, "__label__art_design": 0.0003333091735839844, "__label__crime_law": 0.0003464221954345703, "__label__education_jobs": 0.0012063980102539062, "__label__entertainment": 8.541345596313477e-05, "__label__fashion_beauty": 0.0001423358917236328, "__label__finance_business": 0.00051116943359375, "__label__food_dining": 0.0003216266632080078, "__label__games": 0.000965595245361328, "__label__hardware": 0.0079193115234375, "__label__health": 0.00046324729919433594, "__label__history": 0.00032901763916015625, "__label__home_hobbies": 0.00017142295837402344, "__label__industrial": 0.0012063980102539062, "__label__literature": 0.00017595291137695312, "__label__politics": 0.0003066062927246094, "__label__religion": 0.0004677772521972656, "__label__science_tech": 0.205810546875, "__label__social_life": 7.671117782592773e-05, "__label__software": 0.0307769775390625, "__label__software_dev": 0.7470703125, "__label__sports_fitness": 0.00030493736267089844, "__label__transportation": 0.0006089210510253906, "__label__travel": 0.00022351741790771484}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27186, 0.0201]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27186, 0.38635]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27186, 0.82601]], "google_gemma-3-12b-it_contains_pii": [[0, 26, false], [26, 247, null], [247, 561, null], [561, 812, null], [812, 885, null], [885, 1482, null], [1482, 1840, null], [1840, 2309, null], [2309, 2453, null], [2453, 2843, null], [2843, 3270, null], [3270, 3610, null], [3610, 3869, null], [3869, 4475, null], [4475, 4617, null], [4617, 5125, null], [5125, 5598, null], [5598, 6060, null], [6060, 6418, null], [6418, 7057, null], [7057, 7500, null], [7500, 7708, null], [7708, 7797, null], [7797, 8377, null], [8377, 8550, null], [8550, 9000, null], [9000, 9515, null], [9515, 10013, null], [10013, 10303, null], [10303, 10999, null], [10999, 11468, null], [11468, 12111, null], [12111, 12202, null], [12202, 12575, null], [12575, 12843, null], [12843, 13032, null], [13032, 13500, null], [13500, 13672, null], [13672, 14100, null], [14100, 14409, null], [14409, 14624, null], [14624, 14748, null], [14748, 14937, null], [14937, 15167, null], [15167, 15770, null], [15770, 16397, null], [16397, 16894, null], [16894, 16982, null], [16982, 17853, null], [17853, 18818, null], [18818, 20052, null], [20052, 20285, null], [20285, 20761, null], [20761, 21373, null], [21373, 21797, null], [21797, 22639, null], [22639, 23014, null], [23014, 24107, null], [24107, 24303, null], [24303, 24597, null], [24597, 25112, null], [25112, 25350, null], [25350, 25762, null], [25762, 26231, null], [26231, 26644, null], [26644, 26883, null], [26883, 27170, null], [27170, 27186, null]], "google_gemma-3-12b-it_is_public_document": [[0, 26, true], [26, 247, null], [247, 561, null], [561, 812, null], [812, 885, null], [885, 1482, null], [1482, 1840, null], [1840, 2309, null], [2309, 2453, null], [2453, 2843, null], [2843, 3270, null], [3270, 3610, null], [3610, 3869, null], [3869, 4475, null], [4475, 4617, null], [4617, 5125, null], [5125, 5598, null], [5598, 6060, 
null], [6060, 6418, null], [6418, 7057, null], [7057, 7500, null], [7500, 7708, null], [7708, 7797, null], [7797, 8377, null], [8377, 8550, null], [8550, 9000, null], [9000, 9515, null], [9515, 10013, null], [10013, 10303, null], [10303, 10999, null], [10999, 11468, null], [11468, 12111, null], [12111, 12202, null], [12202, 12575, null], [12575, 12843, null], [12843, 13032, null], [13032, 13500, null], [13500, 13672, null], [13672, 14100, null], [14100, 14409, null], [14409, 14624, null], [14624, 14748, null], [14748, 14937, null], [14937, 15167, null], [15167, 15770, null], [15770, 16397, null], [16397, 16894, null], [16894, 16982, null], [16982, 17853, null], [17853, 18818, null], [18818, 20052, null], [20052, 20285, null], [20285, 20761, null], [20761, 21373, null], [21373, 21797, null], [21797, 22639, null], [22639, 23014, null], [23014, 24107, null], [24107, 24303, null], [24303, 24597, null], [24597, 25112, null], [25112, 25350, null], [25350, 25762, null], [25762, 26231, null], [26231, 26644, null], [26644, 26883, null], [26883, 27170, null], [27170, 27186, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27186, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 27186, null]], "pdf_page_numbers": [[0, 26, 1], [26, 247, 2], [247, 561, 3], [561, 812, 4], [812, 885, 5], [885, 1482, 6], [1482, 1840, 7], [1840, 2309, 8], [2309, 2453, 9], [2453, 2843, 10], [2843, 3270, 11], [3270, 3610, 12], [3610, 3869, 13], [3869, 4475, 14], [4475, 4617, 15], [4617, 5125, 16], [5125, 5598, 17], [5598, 6060, 18], [6060, 6418, 19], [6418, 7057, 20], [7057, 7500, 21], [7500, 7708, 22], [7708, 7797, 23], [7797, 8377, 24], [8377, 8550, 25], [8550, 9000, 26], [9000, 9515, 27], [9515, 10013, 28], [10013, 10303, 29], [10303, 10999, 30], [10999, 11468, 31], [11468, 12111, 32], [12111, 12202, 33], [12202, 12575, 34], [12575, 12843, 35], [12843, 13032, 36], [13032, 13500, 37], [13500, 13672, 38], [13672, 14100, 39], [14100, 14409, 40], [14409, 14624, 41], [14624, 14748, 42], [14748, 14937, 43], [14937, 15167, 44], [15167, 15770, 45], [15770, 16397, 46], [16397, 16894, 47], [16894, 16982, 48], [16982, 17853, 49], [17853, 18818, 50], [18818, 20052, 51], [20052, 20285, 52], [20285, 20761, 53], [20761, 21373, 54], [21373, 21797, 55], [21797, 22639, 56], [22639, 23014, 57], [23014, 24107, 58], [24107, 24303, 59], [24303, 24597, 60], [24597, 25112, 61], [25112, 25350, 62], [25350, 25762, 63], [25762, 26231, 64], [26231, 26644, 65], [26644, 26883, 66], [26883, 27170, 67], [27170, 27186, 68]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27186, 0.12014]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
f3bba3a46709ae08d8e55ee75e5122dfd413ee9f
Improvement Quality of Software Requirements Using Requirement Negotiation System for Supporting Decision

Egia Rosi Subhiyakto¹, Yani Parti Astuti² ¹²Informatics Engineering Department, Faculty of Computer Science, Dian Nuswantoro University Semarang, Indonesia ¹egia@dsn.dinus.ac.id, ²yanipartiastuti@dsn.dinus.ac.id

Abstract - The Requirements Engineering phase is where all requests and software requirements of the user and the client are delivered, understood, and agreed upon. However, developers are often too focused on implementing the software, even though the Requirements Engineering phase can have a large impact, not only on the final product but also on the development process itself. In this study, the authors developed a software requirements negotiation system as a medium for stakeholders to negotiate the requirements of software products. Within the negotiation system, the authors provide a means of decision support, a group decision support system, as a method of resolving conflicts. The main objectives of this work are twofold: 1) to assist the negotiation process between stakeholders and 2) to improve the quality of software requirements after negotiation. The E-Voting method works by offering choices for each sub-specification proposed by stakeholders; the choice with the highest number of votes is selected as the specification. We used prototyping as the system development life cycle method because prototyping is open to improvements after each prototype version is released. The results of the evaluations show that the system has a high success rate based on three dimensions of testing: Performance (80.5%), Usability (78.5%), and User Satisfaction (78%).

Keywords: Requirements, negotiation, improvement, prototyping

I. INTRODUCTION

In a software system development process, the Requirements Engineering phase is the earliest phase, where the software requirements of the user and client are collected, understood together, and agreed upon between the developer and the client. This stage is crucial because the developer faces various stakeholders with varied requirements, shaped by the background of the company or client. This stage is among the most complex in the series of software engineering processes [1]. It is often referred to as the most difficult phase because the validation rate of the Requirements Engineering process itself must be high. If a system failure occurs when the development process is complete, it will cause various consequences. On a small scale, much time will be spent correcting errors, system performance, and functionality. On a large scale, it will cause the loss of stakeholder trust, lost orders and reduced profits, and risk the developer's reputation [2]. Negotiation can be used as a method and tool in various disciplines, and can be regarded as a phase in the decision-making process. Decision making rests on firm foundations related to the field itself and the situation at hand. However, negotiating a group decision is challenging and produces its own complexities for negotiators [3]. In [4], the authors introduced a tool called C-FaRM, which is useful for managing knowledge of requirements and various types of requirement artifacts.
This framework uses an ontology-based recommendation system that helps prioritize, visualize, and negotiate requirements. On the other hand, using the recent meta-heuristic Owl Search Algorithm (OSA) and chaos theory, the authors in [5] offer a solution based on an automated bilateral negotiation model. The Chaotic Owl Search Algorithm (COSA) was used to adapt the negotiation strategy for calculating bids during the negotiation process. Based on the comparison results, the COSA algorithm proved accurate in terms of utility, average number of negotiation rounds, and processing time. Cooperative negotiation based on fuzzy inference is used as the approach in [6], where the authors propose a fuzzy-based two-layer control architecture. There are pairwise negotiations between agents according to the couplings and the communication network. The resulting pairwise control sequences are sent to a coordinator in the upper control layer. In recent decades, various methods have been proposed to manage requirements. There are several related studies, including [7], which investigated requirements management based on the git version control system. The authors developed a tool that supports managing requirements for large-scale development using agile methods. In [8], the authors investigate the application of three multi-criteria decision-making methods. Trust negotiation is also important: [9] introduced a type of trust management model for establishing trust between entities through a mutual exchange of credentials. The authors discuss and present a model that uses UML notation to design trust negotiations. The specifications created become part of the SDLC, which provides a solid and reliable foundation for software development. In another discussion related to cloud computing, it is mentioned that, from the consumer's point of view, consumers want their business needs to be met and to be provided with the best quality of service, while, from the cloud provider's side, providers want to sell services that suit their preferences [10]. Finally, [11] uses a multi-classifier voting method.

II. METHOD

A. Research Method

The data collected are primary data, obtained directly from the research subjects, and are qualitative. In this study, the data comprised two types: data from the developer and data from the pharmacy (client). Developer data were gathered by distributing a questionnaire form on respondents' opinions of a software requirements negotiation system. In this questionnaire, the authors targeted respondents who are software developers and Informatics Engineering students with a background in software development. This questionnaire uses several questions (parameters) as the major focus of respondents' opinions on negotiations, including:
- Respondents' agreement regarding a negotiation system conducted via the web; respondents can answer "Yes" to agree and "No" to disagree.
- Respondents' preferences in pre-negotiation: whether respondents prefer to recognize the problem first or to identify the stakeholders first.
- Respondents' agreement regarding whether it is necessary to describe the interface design / UML / illustration of the system being built. Respondents can answer "Yes" to agree and "No" to disagree.
- Respondents' agreement regarding the role of voting as a problem-solving method in negotiations; respondents can answer "Yes" to agree and "No" to disagree.
- Respondents' preference regarding the applicable voting system: the Plurality Method or Majority Rule.

At the end of the questionnaire, a name and email column is included to identify the respondent. Client data were collected by distributing a questionnaire form about respondents' opinions on the needs of a pharmacy system. In this questionnaire, the authors targeted pharmacy employees/owners as respondents, covering both pharmacies that already have pharmacy management software or a pharmacy website and pharmacies that have neither. The questionnaire uses several questions (parameters) as the primary focus on respondents' needs, including:

- Respondents' preference regarding the software to be built, based on the pharmacy's actual priority. Respondents can choose a website as a pharmacy information system or a management system.
- A question that is valid only if the respondent answers that they already have a software system in the pharmacy. It concerns features not yet available in the existing system. The answer choices include Data Security (database backup, database restore, and user privacy), Data Menu (supplier data, products, concoctions, similar drugs, doctor data, and stock taking), Reports Menu (transaction data recap, etc.), and Transactions with Credit, Accounts Payable, Loss/Profit Report, and others which the respondent can write in. The answers to this question are checkboxes, so the respondent can select more than one option.
- As a continuation of the previous question, respondents choose at most 3 answers for the most important features of the pharmacy system, based on the actual needs of the pharmacy.

At the end of the questionnaire, the authors include a column for the respondent's name and pharmacy to identify the respondent's answers. From the data collected, the authors performed the data analysis as follows:

- The primary data are grouped into developer data and client data.
- The two data sets are analyzed based on the parameters above.
- Conclusions are drawn from each data source.
- The conclusions from the two data sources become the main parameters in developing the negotiation system: the conclusions from the developer data guide the construction of a comfortable negotiation space within the system, while the conclusions from the client data guide the design of the group decision support that assists the voting method in reaching negotiation conclusions.

B. Requirement Negotiation Method

The requirements negotiation method in this system runs through 3 stages: pre-negotiation, negotiation, and post-negotiation. The parameter points of the pre-negotiation process lead into the negotiations carried out by the two stakeholders.

1) The pre-negotiation process carried out in this research is:
• Problem definition: identifying the problems that exist in the problem object. Here the example is a pharmacy, where the problem takes the form of a need for an administration system or an information system.
• Stakeholder identification: identifying who is involved in negotiating the pharmacy software requirements. Here, the developer needs to know who they are dealing with, whether the owner of the pharmacy or just a pharmacy manager.
2) The negotiation process then runs, in which the two stakeholders exchange offers based on the needs and desires of each stakeholder. Matters related to negotiating requirements in this research are:
• Stakeholders communicate and negotiate in a chat forum that can be accessed via the web.
• Stakeholders can write and read comments and send and access pictures and product design illustrations.

3) In the post-negotiation process, voting is carried out as group decision support for the negotiation forum, and both parties vote to confirm the negotiation results. Key stakeholders are the actors whose role is to input the points of need and the choices to be voted on.

C. E-Voting Method

Voting is one of the most important acts through which a community can make a collective decision [12]. The research in [13] studied the effect of e-voting on voter turnout: the authors analyzed citizen participation by estimating a multilevel Bayesian model on official data on Swiss citizen participation, and found that e-voting had no effect on the number of voters. Fig. 1 below shows the e-voting class diagram, consisting of the class name, attributes, and methods; the process uses the algorithms shown next. In Fig. 2, the notation of the E-Voting algorithm begins with the function for the input (vote) given by the user. Fig. 3 shows how the sum of the votes is accumulated into the negotiation results.

III. RESULT AND DISCUSSION

A. Data Analysis

One part of a software project is carrying out the software management process. Important aspects of it are the appraisal of the developed project and avoiding failure due to wrong management practices. The authors of [14] discuss software estimation tools built to find efficient and accurate methods for estimating effort. We therefore also have to follow an appropriate development stage process.

![Fig. 1 Class diagram e-voting](image1)

```
for r in requirements:
    input choice                    # stakeholder s votes on requirement r
    requirement[r][choice] += 1     # tally the vote for this option
    stakeholder[s][r] = choice      # record stakeholder s's choice
```

![Fig. 2 Source code function vote](image2)

```
voting_results = []
for r in requirements:
    max_choice = first option of r          # current leading option
    for i in options(r):
        if requirement[r][i] > requirement[r][max_choice]:
            max_choice = i
    voting_results[r] = max_choice          # plurality winner for requirement r
```

![Fig. 3 Source code e-voting results](image3)

Based on the data collection process through the methods mentioned in the previous chapter, the authors obtained results for both types of data; the results for developer data and client data are presented below. In collecting developer data, we used several question parameters that support the development and construction of this negotiation system. All question parameters were given to 17 respondents, comprising Informatics Engineering students with an interest in Software Engineering and software developers who understand the negotiation process. Table I below summarizes the overall developer data collection. In collecting client data, we used several question parameters that support the development and construction of this negotiation system; all question parameters were answered by 9 working respondents, all of whom are employees of several pharmacies. Table II and the accompanying graphs present the client data collection results.

### B. Design

In this section, we present the design of the system, including the high-level architecture and the user interface design.
Fig. 4 shows the features of our negotiation system. The main components of this tool are: 1) registration for guests and login after registering; 2) creating a negotiation for stakeholders and discussing it in the forum; stakeholders can also search negotiation forums that have already been held. To describe system functionality, UML can be used, as shown in [15]. Next, we model the system using use case diagrams; the use case diagram is one of the UML diagrams and is widely taught in universities, especially in computer science [16].

#### TABLE I
**QUESTIONNAIRE OVER THE DEVELOPERS**

<table>
<thead>
<tr>
<th>Parameters</th>
<th>Results</th>
</tr>
</thead>
<tbody>
<tr>
<td>Respondents' agreement regarding the negotiation system conducted via the web</td>
<td>Agree: 82.4%</td>
</tr>
<tr>
<td></td>
<td>Disagree: 17.6%</td>
</tr>
<tr>
<td>Respondents' preference in pre-negotiation</td>
<td>Identify stakeholders first: 58.8%</td>
</tr>
<tr>
<td></td>
<td>Identify problem first: 41.2%</td>
</tr>
<tr>
<td>Respondents' agreement on whether it is necessary to describe the interface design / UML / illustration of the system being built</td>
<td>Agree: 94.1%</td>
</tr>
<tr>
<td></td>
<td>Disagree: 5.9%</td>
</tr>
<tr>
<td>Voting as a negotiation method</td>
<td>Agree: 76.5%</td>
</tr>
<tr>
<td></td>
<td>Disagree: 23.5%</td>
</tr>
<tr>
<td>Respondents' preference regarding the applicable voting system</td>
<td>Plurality Method: 52.9%</td>
</tr>
<tr>
<td></td>
<td>Majority Rule: 47.1%</td>
</tr>
</tbody>
</table>

#### TABLE II
**QUESTIONNAIRE OVER THE CLIENTS**

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Results</th>
</tr>
</thead>
<tbody>
<tr>
<td>Respondents' preferences regarding the software to be built, based on the actual priority of the pharmacy</td>
<td>Pharmacy Management System: 55%</td>
</tr>
<tr>
<td></td>
<td>Website Profile: 45%</td>
</tr>
<tr>
<td>Respondents' preferences regarding features that are not available in the existing system (for pharmacies that are computerized)</td>
<td>Credit Transaction: 46%</td>
</tr>
<tr>
<td></td>
<td>Income Statement: 23%</td>
</tr>
<tr>
<td></td>
<td>System Security: 15%</td>
</tr>
<tr>
<td></td>
<td>Reports: 15%</td>
</tr>
<tr>
<td>The most important feature for the pharmacy</td>
<td>Master Data: 33%</td>
</tr>
<tr>
<td></td>
<td>Reports: 26%</td>
</tr>
<tr>
<td></td>
<td>Security System: 20%</td>
</tr>
<tr>
<td></td>
<td>Transaction: 13%</td>
</tr>
<tr>
<td></td>
<td>Income Statement: 6%</td>
</tr>
</tbody>
</table>

C. Implementation

The result of implementing this research is a web-based application; everything from input and processing to output is delivered to users on the web. We chose the name "RENego" as the branding of this software negotiation system web application, which stands for "Requirements Software Negotiation". The application has 3 principal parts that cover the stages of negotiation: 1) Pre-negotiation, shown in Fig. 6, where key stakeholders create a negotiation forum and invite stakeholders; this includes the Forum Dashboard page. 2) Negotiation, shown in Fig. 7, where each stakeholder negotiates, for example by giving comments and sending illustrations of the software requirements design; this covers the Negotiation Forum page. 3) Post-negotiation, shown in Fig. 8, where each stakeholder votes as the step for resolving problems in negotiating software requirements; this includes the voting pages and negotiation results. Fig. 9 shows the source code of the voting method.
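To complement the pseudocode in Figs. 2 and 3 and the voting source in Fig. 9, the following is a minimal, self-contained sketch of the plurality tallying logic in Python. The data structures, function names, and the example stakeholders and requirement options are our own illustrative assumptions, not the RENego implementation itself.

```python
from collections import Counter, defaultdict

def cast_vote(ballots, stakeholder, requirement, choice):
    """Record one stakeholder's choice for one requirement (cf. Fig. 2)."""
    ballots[requirement][stakeholder] = choice

def tally(ballots):
    """Plurality rule (cf. Fig. 3): for each requirement, the option with
    the most votes becomes the agreed specification. Ties are broken
    arbitrarily here, by first occurrence."""
    return {
        requirement: Counter(votes.values()).most_common(1)[0][0]
        for requirement, votes in ballots.items()
    }

# Illustrative usage with hypothetical stakeholders and options.
ballots = defaultdict(dict)
cast_vote(ballots, "developer",  "reports_menu", "income_statement")
cast_vote(ballots, "pharmacist", "reports_menu", "transaction_recap")
cast_vote(ballots, "owner",      "reports_menu", "transaction_recap")
print(tally(ballots))  # {'reports_menu': 'transaction_recap'}
```

A production system would additionally need to handle ties explicitly, for example by reopening the vote for the tied options, since the Plurality Method alone does not resolve them.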
D. User Acceptance Testing

A User Acceptance Test is needed to assess the usefulness and acceptance of the system; there are many testing strategies and methods that can be used [17]. In this test, the authors gave a paper questionnaire to 10 respondents who will later be involved in this system: 5 from the developer side (software engineering students, business analysts, and programmers) and 5 clients (pharmacist assistants and the general public). The questions cover several dimensions of the system: cognitive (ease of use), performance, and user satisfaction in accepting this software requirements negotiation system. The following are the test results of the software requirements negotiation system.

Fig. 10 shows the results of the performance survey. The graph shows positive results for each part of the survey: agreement on easy access to the system (96% strongly agree or agree), system accuracy (over 90%), voting accuracy (over 90%), and data input definition (over 90%). However, in terms of error handling, the system should be improved (70%).

Fig. 11 shows the test results on the cognitive/ease-of-use dimension. The graphs and tables show positive results from all respondents: 76% think that the features in the system are easy to use, 86% find data management easy, and 90% consider the voting process easy. However, in terms of learning and understanding the system, there is room for improvement (62%).

Fig. 12 shows the user satisfaction dimension; the small percentage of disagreement (less than 5%) over performance and satisfaction is related to the participants' ability to understand how to use the system. The results of the User Acceptance Test give fairly high values on the Likert scale. Respondents agree that the "RENego" software requirements negotiation system has a positive impact in terms of performance, cognition, and user satisfaction.

IV. CONCLUSION

Based on the analysis, design, implementation, and testing of a software requirements negotiation system using the E-Voting method as decision support, all described in this paper, the conclusions are as follows. Based on the data analysis process, 82.4% of stakeholders agreed to negotiate through the web, and based on the system testing process, the web application forum for the software requirements negotiation system can serve as a forum for communication and negotiation between stakeholders, namely developers and clients. The system is used to discuss and understand each other's needs within a forum on a particular software theme. The E-Voting method can be applied properly as a decision-making method for negotiating software requirements: the data analysis shows that 76.5% of respondents chose voting as group decision support, and the result of applying voting to a negotiation is an absolute decision of choice. The agreement between the two stakeholders and the results, obtained as reports, reduce the possibility of developer failure, both on a small scale (correcting errors, performance, functionality) and on a large scale (loss of client trust, lost orders, reduced profits).
This is also because development will later proceed from the materials and requirements that result from the negotiated agreement between the two parties. In addition, based on the User Acceptance Test that the authors performed, the software requirements negotiation system using the E-Voting method has a high success rate based on 3 test dimensions, namely Performance (80.5%), Ease of Use (78.5%), and User Satisfaction (78%).

ACKNOWLEDGEMENT

This research was funded by the Lembaga Penelitian dan Pengabdian Masyarakat (LPPM) under the Penelitian Dasar Unggulan Perguruan Tinggi (PDUPT) Program managed by Dian Nuswantoro University under Grant No. 056/A.38-04/UDN-09/VI/2021.

REFERENCES
Are Your Lights Off? Using Problem Frames to Diagnose System Failures

Thein Than Tun1,2 Michael Jackson2 Robin Laney2 Bashar Nuseibeh2 Yijun Yu2
1PReCISE Research Centre, Faculty of Computer Science, University of Namur, Belgium
2Department of Computing, The Open University, UK
ttu@info.fundp.ac.be {m.jackson, r.c.laney, b.nuseibeh, y.yu}@open.ac.uk

Abstract

This paper reports on our experience of investigating the role of software systems in the power blackout that affected parts of the United States and Canada on 14 August 2003. Based on a detailed study of the official report on the blackout, our investigation has aimed to bring out requirements engineering lessons that can inform development practices for dependable software systems. Since the causes of failures are typically rooted in the complex structures of software systems and their world contexts, we have deployed and evaluated a framework that looks beyond the scope of software and into its physical context, directing attention to places in the system structures where failures are likely to occur. We report that (i) Problem Frames were effective in diagnosing the causes of failures and documenting the causes in a schematic and accessible way, and (ii) errors in addressing the concerns of biddable domains, model building problems, and monitoring problems had contributed to the blackout.

1 Introduction

In mature branches of engineering, failures and "the role played by reaction to and anticipation of failure" are regarded as essential for achieving design success [11]. Identification of the causes of past system failures, organisation and documentation of them in a way accessible by engineers within an engineering community, and application of knowledge of failures when designing future systems, all play a central role in establishing "normal design" practices [15]. Although there have been several excellent reports on high-profile system failures involving software systems [5, 7, 9], development practices for dependable systems have not exploited input from incident or accident investigations in a systematic way [2]. This work is a small step towards addressing that gap.

Requirements Engineering (RE) is concerned with defining the behaviour of required systems, and any error introduced or prevented early in the development significantly affects the system's dependability. In this respect, RE has a valuable role to play in systematising and documenting causes of past failures, and utilising this systematised knowledge in the development of future systems. In the same way that system failures can be attributed to programming, design, and human/operational errors, it is possible to attribute certain failures to RE errors. RE errors may be due to missing requirements, incorrect assumptions about the problem context, weak formulation of requirements and unexpected interactions between requirements.
Although the broader context, such as the organisational settings, regulatory regimes and market forces, often plays an important role in failures, we deliberately focus on the role of the software system in its physical context in order to bring out clear lessons for requirements engineers. Therefore, a framework is needed for investigating failures which looks beyond the scope of software and into its physical context, and directs attention to places in the system structures where failures are likely to occur.

In this paper, we report on our experience of using Problem Frames [4] to identify, organise and document knowledge about the causes of past system failures. In the Problem Frames framework, potential causes of failures, known as "concerns", are named and associated with a specific pattern of problem structure, a style of problem composition, a type of problem world domain, the requirement and the specification. An instantiation of a pattern, for instance, will immediately raise the need to address certain concerns in the system.

2 Preliminaries

This section provides an overview of our case study, the research methodology used to investigate the failures, the conceptual framework of Problem Frames, and the expected outcome of our study.

2.1 2003 US-Canada Electricity Blackout

The electricity blackout that occurred on 14 August, 2003 in large parts of the Midwest and Northeast United States and Ontario, Canada, affected around 50 million people, according to the official report by the U.S.–Canada Power System Outage Task Force [14]. The outage began around 16:00 EDT (Eastern Daylight Time), and power was not fully restored for several days in some parts of the United States. The effect of the outage could be seen in satellite images of North America, whilst financial losses reportedly ran into billions of US dollars. The official report concluded that "this blackout could have been prevented", and software failures leading to the operator's reliance on outdated information were identified as one of the two "most important causes" of the blackout [14, p. 46].

2.2 Methodology

Investigating real-life system failures is difficult, not least because of the size and complexity of these systems and the limited availability of verifiable information about the failures and the systems involved [5]. Even when it is possible to master these difficulties, it is still a challenge to locate exactly when in the development an error was introduced [10]. The official report makes clear that factors such as the sagging of power lines, overgrown trees, poor communication, and lack of personnel training all contributed to the blackout. Since our interest was to learn RE lessons, our methodology for investigating failures examined the chain of events leading up to the failure, and isolated the role of software systems in the failure. We ascertained what the components of the system did, what they should have done, and how it would have been possible to identify the causes at the RE stage. Therefore, a framework was needed that allowed us to structure the potential causes of failures in a schematic way.

2.3 Problem Frames

The Problem Frames framework [4] is based on certain principles, four of which are relevant to the discussion. First, the framework encourages a systematic separation of descriptions into requirements, problem world context and specifications. For example, Figure 1 shows a high-level description of a type of software problem known as the Commanded Behaviour Frame.
In this problem, a software system, the Control Machine, is required to apply control to a domain in the physical world, the Controlled Domain, according to the commands of a human agent, the Operator. Exactly how the Controlled Domain should behave, or what property it must have, when the Operator issues commands is described by the Commanded Behaviour Requirement. Therefore the requirement states the relationship between the operator command OCommand at the interface a_O, and the behaviour and property of the controlled domain, CDBehaviour and CDProperty, at the interface a_CD. The description of the operator's behaviour is concerned with the relationship between OInput at the interface b_O and OCommand at the interface a_O, namely what input the operator produces when a command is issued. Similarly, the description of the Controlled Domain is concerned with the relationship between CMAction at the interface a_CM and CDBehaviour and CDProperty at the interface a_CD, namely what behaviour or property the controlled domain produces when machine actions are performed. The Operator and the Controlled Domain constitute the problem world context of the Control Machine. The specification, the description of the Control Machine, is concerned with the relationship between OInput at the interface b_O and CMAction at the interface a_CM, namely what actions the machine must perform when operator input is observed. The operator may be a lift user and the controlled domain a lift. The requirement will state how the lift should behave when the lift user issues commands. The specification will state what operations the lift controller will perform when the operator input is received.

Second, this framework emphasises the need to understand the physical structure of the problem world context, and the behaviour of the domains involved. Third, the framework is based on recurring patterns of software problems, called frames. Each frame captures the "concerns" of a certain type of software problem. For instance, the main concern of the "Commanded Behaviour" frame is to ensure that the system obeys the operator commands in imposing control on the behaviour of the system. An instantiation of a frame implies the generation of certain conditions that need to be discharged. Fourth, the framework provides a rich scheme for categorising and recording causes of failures. For instance, there are concerns specific to problem world domains, such as reliability, identity and breakage; there are frame concerns such as that of the required behaviour frame; and there are composition concerns such as conflict, consistency and synchronisation. Therefore, we hypothesised that the Problem Frames framework provides an appropriate foundation for diagnosing failures involving software systems.

2.4 Expected Outcomes

There were two expected outcomes of this study. First, to establish whether Problem Frames are appropriate for investigating system failures in terms of (i) locating causes of failure in the system structures, and (ii) recording them in a schematic way accessible by engineers within a community. Second, to identify causes of the blackout and either confirm them as known concerns or expand the repertoire of existing concerns by recording them schematically.
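To make the separation of descriptions in Section 2.3 easier to follow, the frame concern of the Commanded Behaviour problem can be summarised by the usual Problem Frames adequacy argument. The LaTeX sketch below uses the interface names of Figure 1; the condensed notation is our own rendering, not taken from the paper's figures or the official report.

```latex
% W: problem world descriptions, S: machine specification, R: requirement
\begin{align*}
W_{Op} &: \; OInput \text{ at } b_O \;\Rightarrow\; OCommand \text{ at } a_O \\
W_{CD} &: \; CMAction \text{ at } a_{CM} \;\Rightarrow\; CDBehaviour,\, CDProperty \text{ at } a_{CD} \\
S      &: \; OInput \text{ at } b_O \;\Rightarrow\; CMAction \text{ at } a_{CM} \\
R      &: \; OCommand \text{ at } a_O \;\Rightarrow\; CDBehaviour,\, CDProperty \text{ at } a_{CD} \\
&\text{frame concern:}\quad W_{Op},\, W_{CD},\, S \;\vdash\; R
\end{align*}
```

Reading the entailment this way makes the diagnosis strategy of the case study concrete: a failure can be traced to a false world description (W), a wrong or unimplemented specification (S), or a missing or weakly formulated requirement (R).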
3 The Case Study

We now discuss two software-related failures that contributed significantly to the blackout. We briefly recount the chain of events leading to the blackout before discussing how Problem Frames were applied to diagnose and record the causes of the failures.

3.1 Problem #1: State Estimator and Real Time Contingency Analysis

The infrastructure of the electric system is large and complex, comprising many power generation stations, transformers, transmission lines, and individual and industrial customers. Providing reliable electricity through "real-time assessment, control and coordination of electricity production at thousands of generators, moving electricity across an interconnected network of transmission lines, and ultimately delivering the electricity to millions of customers" is a major technical challenge [14]. Reliability coordinators and control operators use complex monitoring systems to collect data about the status of the power network. In addition, they use a system called the State Estimator (SE) to improve the accuracy of the collected data against the mathematical model of power production and usage. When the divergence between the actual and predicted models of power production and usage is large, the State Estimator will "produce a solution with a high mismatch". Information from the improved model is then used by various software tools, including Real Time Contingency Analysis (RTCA), to evaluate the reliability of the power system and alert operators when necessary, for instance when power production is critically low. This evaluation can be done periodically or on demand by the operator.

"On August 14 at about 12:15 EDT, MISO's [Midwest Independent System Operator] state estimator produced a solution with a high mismatch [...] To troubleshoot this problem the analyst had turned off the automatic trigger that runs the state estimator every five minutes. After fixing the problem he forgot to re-enable it [...] Thinking the system had been successfully restored, the analyst went to lunch. The fact that the state estimator was not running automatically on its regular 5-minute schedule was discovered about 14:40 EDT."

When the automatic trigger was subsequently re-enabled, the state estimator produced a solution with a high mismatch due to further developments on the network. The official report assesses the situation as follows. "In summary, the MISO state estimator and real time contingency analysis tools were effectively out of service between 12:15 EDT and 16:04 EDT. This prevented MISO from promptly performing precontingency "early warning" assessments of power system reliability over the afternoon of August 14."

3.1.1 Problem Analysis

Based on this information, we constructed several problem diagrams to analyse the relationships between the problem world domains mentioned in the description. Figure 2 shows a composite of two problem diagrams. The problem of the State Estimator is to produce RevisedData for the Improved Electrical System Model of the grid, based on StatusData and Estimates produced by the Mathematical Model. In Problem Frames, this type of problem is known as a "model building problem". The problem of the RTCA System is to examine RevisedData and raise appropriate alerts on the Display Screen used by the Operator. This type of problem is known as an "information display problem".

3.1.2 A Requirements Engineering Error?

On August 14, when the SE could not produce a consistent model, the operator turned off the automatic trigger of the SE in order to carry out maintenance work.
Figure 3 shows the problem diagram, where the Maintenance Engineer uses the machine SE Trigger to turn the State Estimator on or off. This problem fits the Commanded Behaviour frame shown in Figure 1. Part of the requirement here is to ensure that when the engineer issues the command OffNow, the SE should cease running. When the maintenance work was done, the engineer forgot to re-enable the SE, leaving the electrical system model that the operators rely on outdated. The resulting reliance by the operator on the outdated information was a significant contributing factor. Clearly, had the maintenance engineer not forgotten to re-engage the monitoring systems, the problem would not have arisen. However, there is more to the problem than this being a "human error". Perhaps the fallibility of human operators should have been better recognised in the system's model of the world context.

3.1.3 Naming and Categorising Concerns

A key part of the problem is the requirement that says that the operator commands always have precedence over the system actions. This requirement relies on the world assumption that the biddable domain (i.e., a human agent such as the maintenance engineer) always gives the correct commands. However, the Commanded Behaviour frame recognises that the operator is a biddable domain, whose behaviour is non-causal and may not be reliable. Therefore, the operator always giving the correct command may be too strong a condition to discharge. This gives rise to two concerns: one related to the biddable domain and the other related to the Commanded Behaviour frame.

We will call the concern related to the biddable domain the reminder concern, which raises the following conditions to discharge: (i) Whenever the biddable domain overrides the system operations, which system domain(s) should be reminded about the override? (ii) How long should the override last? (iii) What happens when the length of time expires? In the case of the blackout, this may be translated into a requirement that says (i) whenever the SE has stopped, the system should remind the operator of the SE status and how long it has had that status, and (ii) at the end of a maintenance procedure, the system should remind the engineer of the SE status. Such a reminder could make the engineer's behaviour more reliable and perhaps could have helped prevent the failure.

A concern related to the Commanded Behaviour frame is whether the system should ignore the operator commands and take control of the system under certain circumstances. We will call this the system precedence concern. This may mean that the system should monitor the actions of the biddable domain, and intervene when the domain does not seem to be reliable. In that case, the requirement should be formulated as follows: whenever maintenance work is thought to have been completed, the automatic trigger should be enabled.

Another key part of the problem is related to the issue of fault tolerance in information display: what happens when the input the system receives from the analogic model is unexpected? This may be due to an incorrect data type or an untimely input from the analogic model. We will call this the outdated information concern. Pertinent questions in this case are: 1) Can RTCA know that the Improved Electrical System Model is outdated? 2) What should it do about it?
Had requirements engineers asked such questions, they could have led to requirements such as "The Improved Electrical System Model must have a timestamp of when it was last updated successfully" and "If the Improved Electrical System Model is older than 30 minutes, the RTCA system should alert the operator that the electrical system model is now outdated". This would at least warn the operator not to rely on the information provided by the improved electrical system model.

3.2 Problem #2: Alarm and Event Processing Routine (AEPR) System

Another significant cause of the blackout was due, in part, to the Alarm and Event Processing Routine (AEPR) system, "a key software program that gives grid operators visual and audible indications of events occurring on their portion of the grid" [14].

"Alarms are a critical function of an EMS [Energy Management System], and EMS-generated alarms are the fundamental means by which system operators identify events on the power system that need their attention. If an EMS's alarms are absent, but operators are aware of the situation and the remainder of the EMS's functions are intact, the operators can potentially continue to use the EMS to monitor and exercise control of their power system. In the same way that an alarm system can inform operators about the failure of key grid facilities, it can also be set up to warn them if the alarm system itself fails to perform properly. FE's EMS did not have such a notification system."

The problem of alerting the Grid Operator to the grid status, ascertained from the Grid & Sensors, is shown in Figure 4. This problem fits a type of problem known as the Information Display Frame. The requirement is to raise an alarm to the operator (GOAlertedGrid) if and only if there are events on the grid that threaten the system reliability (GridOK): \(\neg GridOK \leftrightarrow GOAlertedGrid\). The specification of AEPR could be to raise an alert (RaiseAlert) if and only if danger is detected on the grid (DangerDetected): \(DangerDetected \leftrightarrow RaiseAlert\). In the case study, the AEPR system failed silently, leading the operators to continue to rely on outdated information, and this was one of "the most important causes" of the blackout.

3.2.1 A Requirements Engineering Error?

The official report is very clear about the fact that there was a missing requirement "to monitor the status of EMS and report it to the system operators." The British Standard 5839 on fire detection and fire alarm systems [12] is also concerned with monitoring systems, and anticipates such a requirement. Since fire alarms may fail when electricity is disconnected, the standard requires that alarms be fitted with a secondary independent source of power. In addition, when the source of power is switched from the primary to the secondary source, the system should raise an alarm.

3.2.2 Naming and Categorising Concerns

The cause of this failure can be called a silent failure of alarm systems. Addressing this concern could raise questions such as: What happens if AEPR fails silently? Is it possible to detect such failures? What should be done when such failures are detected? This could have led the designers to the requirement that the system should monitor the behaviour of AEPR and raise an additional alarm when AEPR is thought to have failed. Figure 5 shows a problem diagram in which a wrapper intercepts the input to and output from the AEPR, and when AEPR fails to respond as expected, a separate alarm (GOAlertedAEPR) is raised.
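The formal characterisation of this wrapper follows below. As a complementary illustration, here is a minimal runnable sketch of the silent-failure monitor idea in Python; the class, method, and parameter names (including the timeout) are our own illustrative assumptions, not taken from the report or from FE's EMS.

```python
import time

class AEPRMonitor:
    """Wrapper that forwards grid danger signals to AEPR and raises a
    secondary alarm if AEPR stays silent after danger was detected."""

    def __init__(self, aepr, operator_console, timeout_s=5.0):
        self.aepr = aepr                  # the wrapped alarm processor
        self.console = operator_console   # where alarms are displayed
        self.timeout_s = timeout_s        # how long AEPR may stay silent
        self._pending_since = None        # time of unacknowledged danger

    def danger_detected(self):
        """Called on a grid event; pass it through and arm the watchdog."""
        self._pending_since = time.monotonic()
        self.aepr.danger_detected()

    def alert_raised(self):
        """Called when AEPR raises its alarm; pass through, clear watchdog."""
        self._pending_since = None
        self.console.show("ALARM: grid event")

    def tick(self):
        """Poll periodically; if AEPR failed silently, raise a secondary alarm."""
        if (self._pending_since is not None
                and time.monotonic() - self._pending_since > self.timeout_s):
            self.console.show("SECONDARY ALARM: AEPR silent after grid event")
            self._pending_since = None
```

The design point is that the monitor sits outside AEPR and relies only on the observable interfaces (danger in, alert out), which is exactly what makes it a wrapper in the problem-diagram sense.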
The wrapper AEPR Monitor can pass on danger detection from the grid to AEPR, \(DangerDetected@b_{GS} \rightarrow DangerDetected@b'_{AM}\), and pass on the alert trigger from AEPR to the grid operator, \(RaiseAlert@a'_{AM} \rightarrow RaiseAlert@a_{AM}\). Then the requirement to alert on silent failure of AEPR is \(\neg GridOK \wedge \neg GOAlertedGrid \leftrightarrow GOAlertedAEPR\). The specification for the AEPR Monitor is \(DangerDetected@b_{GS} \wedge \neg RaiseAlert@a'_{AM} \leftrightarrow RaiseSecondaryAlert@a_{AM}\). An implementation of such a specification could have prevented the failure.

4 Related Work

There are many studies of software-related failures. Leveson, for instance, carried out several studies of software-related accidents, including those involving the Therac-25 [7]. Johnson has also contributed an extensive literature on system accidents and incidents [5, 6, 2]. However, those studies of system failure of which we are aware have not been based on a clear conceptual structure for identifying, classifying, and recording the lessons learned at the level of detail appropriate for use by software engineers. For instance, the software engineering lessons Leveson and Turner [7] draw from the Therac-25 accidents include: "Documentation should not be an afterthought", and "Designs should be kept simple". Johnson investigated this power blackout in order to weigh the arguments for and against deregulation as a cause of the blackout [6]. In this paper, we have applied a systematic approach to learning software engineering lessons, structured and described in ways that software engineers can relate to specifically.

Several variants of the Failure Modes and Effects Analysis (FMEA) method have been developed and applied in the development of dependable systems. Lutz and Woodhouse [8], for instance, applied an FMEA-based method to identify critical errors in the requirements documents of two spacecraft systems. Our work is complementary to such methods, in the sense that we are concerned with identifying, structuring and documenting past software failures, which can then be used to narrow the search space in failure analysis.

5 Summary

Our experience of using Problem Frames to investigate system failures involving software systems showed that the framework of Problem Frames was appropriate for identifying causes of system failures and documenting the causes in a schematic and accessible way. The suggestion by the framework that requirements engineers should "look out" into the physical world, rather than "look into" the software, was useful in directing and focusing attention, because many of the causes of failures originated in the physical world context. The separation of descriptions into requirements, problem world context and the specification enabled us to locate sources of failures in specific descriptions. Some failures were related to the requirements (such as missing requirements) and others to the problem world context (such as a mismatch between the assumed and actual behaviour of the problem world domains). Furthermore, associating concerns with the requirement, problem world context, frame, domain type, style of composition, and the specifications provides a good basis for recording concerns in a schematic way.
In summary, the specific lessons learnt from the blackout case study are: (i) a further specialisation of the reliability concern of the biddable domain, called the reminder concern, (ii) a further specialisation of the concern of the Commanded Behaviour frame, where the system may have to take precedence over the operator action, called the system precedence concern, (iii) a further specialisation of the Information Display frame called the outdated information concern, and (iv) the silent failure concern related to monitoring systems.

References
MESSAGE CLASSES: AN APPROACH TO PROCESS SYNCHRONIZATION*

Gregory R. Andrews

TR 76-275
April 1976

Department of Computer Science
Cornell University
Ithaca, New York 14853

* This research was supported in part by the National Science Foundation under Grant GJ-42512.

Abstract

In multiprogramming systems, parallel processes compete for access to shared resources and cooperate by exchanging information. Semaphores are a useful means for controlling competition and synchronizing execution, and inter-process messages are useful for communication. Neither semaphores nor inter-process messages, however, are natural for solving both problems. This paper introduces a new approach, message classes, which combines and extends features of both semaphores and message passing. Using message classes, numerous mutual exclusion, producer/consumer, process communication, and resource allocation problems can be readily solved.

1. INTRODUCTION

The two fundamental process coordination problems in multiprogramming systems are mutual exclusion and communication. When two or more processes compete for a shared resource which can only be used by one at a time, each must exclude the others when accessing it. This requires that each process reserve the resource, use it, and then release it. Communication, on the other hand, is the means by which two or more processes cooperate in performing a task; one process produces information which another consumes.

For many years, semaphores [2], or variants thereof, have been the most commonly used means for coordinating processes. A semaphore is a special type of integer operated on by only two operations: P (Wait) and V (Signal). Although semaphores are a simple and natural means for mutually excluding processes, they cannot be used directly for communication. Using semaphores, one can implement inter-process message buffers, but the message facility is then distributed among processes and each must call buffer manipulation procedures. The communication problem can be directly solved using inter-process messages such as those described by Brinch Hansen [1]. Inter-process messages are sent to a named process and added to a queue, or mailbox, associated with the process. While one process can receive from many senders, he can only send to one receiver at a time. Using message passing alone, the mutual exclusion problem cannot be directly solved; it requires having one process which mediates requests for access or performs the access itself.

In this paper, we describe a single coordination mechanism, message classes, which combines properties of semaphores and message passing and can directly solve both exclusion and communication problems. A message class is a group of "resources" of the same type; for example, a set of empty buffers, or IO messages, or channels. It is operated on by two operations, Send and Receive, which respectively release (communicate) and acquire members of the class¹. Like a semaphore, a message class is a shared variable; it is not tied to a process. It is different from a semaphore, however, because an item in a message class can contain information such as a buffer address, a channel address, or an IO command.
In addition to combining aspects of semaphores and inter-process messages, a message class can also be used to solve problems where a combination of exclusion and communication is present. In a spooling system, for example, there might well be a fixed number of buffers used by reader and writer processes. Each buffer is accessed by only one process at a time, but after being filled (or emptied) it is passed on to another process; it becomes a message stored in a pool.

In the remainder of this paper, we define message classes and give examples to illustrate their application. Their implementation is also considered. They have been successfully used in numerous student operating systems at Cornell, and have been the basis for the design of a real-time executive for military applications.

¹ Message classes are a major simplification, with some changes, of a more general process synchronization facility described by Shaw [4,5].

2. DEFINITION OF MESSAGE CLASSES

A message class is a set of "messages", each containing the same type of information, accessed by Send and Receive operations. For the moment, assume that we can declare a message class as follows:

name: message class (type, length);

The name refers to a descriptor describing the contents and status of the class; type and length will be described shortly. The operation Send(class name, message) adds a message to the class and awakens a waiting process, if possible; the operation Receive(class name, message) returns a message, delaying the receiver, if necessary, until one becomes available. Both are considered indivisible (un-interruptible) operations with respect to the class. A message class descriptor has the following fields, as shown in Figure 1:

(1) The class type, either directed or undirected.
(2) An available list of messages sent but not yet received.
(3) A length field indicating how long each message is.
(4) A waiting list of processes who have executed Receives which are not yet satisfied.

### Figure 1 Message Class Descriptor

<table>
<thead>
<tr>
<th>Field</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>Type</td>
<td>&quot;directed&quot; or &quot;undirected&quot;</td>
</tr>
<tr>
<td>Avail_list</td>
<td>Header of a linked list of available messages.</td>
</tr>
<tr>
<td>Length</td>
<td>Length of each message on the available list.</td>
</tr>
<tr>
<td>Wait_list</td>
<td>Header of a list of processes waiting for messages.</td>
</tr>
</tbody>
</table>

A directed type of class is used for process-to-process communication; each message is sent to a specific process identified by the first word in the message. A directed class acts like a large switch, connecting senders and receivers; it may be used by a single sender and receiver or by many. An undirected class, on the other hand, contains a pool of messages which can be produced (sent) or consumed (received) by any user of the class. It is used for exclusion, for resource allocation, or for communication when a process does not care who receives his message. The purpose of the length parameter is to indicate how much information is to flow from senders to receivers. As will be shown, this may vary from zero (to simulate semaphores) to many words, depending on the function of the class.

The operation Send(class, message) adds a message to the named class; the message parameter is the address of (pointer to) the message contents. Directed classes require that the first component of the message name another process.
The operation Send(class, message) adds a message to the named class; the message parameter is the address of (pointer to) the message contents. Directed classes require that the first component of the message name another process. If that process is waiting, then a copy of the message is given to it and it is awakened (by some unspecified means). If the intended receiver is not waiting, then a copy of the message is stored on the available list. In either case, the process executing Send can continue execution. The only difference for undirected classes is that a Send awakens any waiting process and gives it a copy of the message. These actions are outlined in a pseudo-Algol style in Figure 2.

Figure 2: Send Operation

```plaintext
Send: procedure (class, message);
    /* Add a message to the named class. Give it to a waiting receiver, if any. */
    if Type(class) = "directed" then
        begin
            /* first word of message identifies the intended receiver, IR */
            IR := first word of message;
            if IR is on Wait_list(class) then
                begin
                    Remove(IR) from Wait_list(class);
                    Copy message contents into IR's storage;
                    Change first word of message to name of sender;
                    Wake_up(IR)
                end
            else
                Store copy of message on Avail_list(class),
                    storing Length(class) words and name of sender
        end
    else
        begin
            /* type is undirected - any process can receive message */
            if Wait_list(class) not empty then
                begin
                    Receiver := first process on Wait_list(class);
                    Copy message contents into Receiver's storage;
                    Wake_up(Receiver)
                end
            else
                Store copy of message on Avail_list(class),
                    storing Length(class) words
        end
end Send;
```

The Receive(class, message) operation takes a message from the Avail_list of the named class; the message parameter is the address of (pointer to) the locations where a copy of the message is to be stored. It is outlined in Figure 3. A Receive of a directed class selects the first available message sent to the receiving process, if one is available, and returns a copy of the message, changing its first component (which named the receiver) to the name of the sender. If no message is available, the calling process is blocked until one is sent; some scheduling mechanism then selects another process for execution unless a busy wait is employed. A Receive of an undirected class selects the first available message without concern for who sent it. In this case, the message is copied in its entirety into the receiver's address space; the caller is blocked only if no messages are available. A zero length message, of course, results in no copying. It merely serves as a signal.

As stated, the difference between directed and undirected classes is that the former pass messages from senders to specified receivers whereas the latter just queue them up without regard for the receiver. Therefore, in an undirected class, at least one of the Avail_list or Wait_list is empty. In a directed class there might be entries on both lists, but no process is kept waiting if a message for him is available. In either case, the Send or Receive operations have very few statements to execute. We now look at some examples to clarify these concepts.
Figure 3: Receive Operation

```plaintext
Receive: procedure (class, message);
    /* Take a message from the named class, if there is one, and store it
       in the locations referred to by message. Otherwise block the caller. */
    if Type(class) = "directed" then
        begin
            /* Find a message intended for calling process.
               Let CP denote current (calling) process */
            if a message for CP is on Avail_list(class) then
                begin
                    Remove first such from Avail_list(class);
                    Copy contents into locations pointed to by message parameter
                        (store Length(class) words);
                    Set first word of message to name of sender
                end
            else
                begin
                    /* no message available */
                    Store CP name and message parameter on Wait_list(class);
                    Block(CP)
                end
        end
    else
        begin
            /* Type is undirected - any message can be received.
               Let CP denote current process */
            if Avail_list(class) not empty then
                begin
                    Remove first message from Avail_list(class);
                    Copy contents into locations pointed to by message parameter
                        (store Length(class) words)
                end
            else
                begin
                    /* no message available */
                    Store CP name and message parameter on Wait_list(class);
                    Block(CP)
                end
        end
end Receive;
```

3. APPLICATIONS

In this section we illustrate the use of message classes to simulate semaphores, handle IO and interrupt communication, and manage a pool of buffers in a spooling system.

3.1 Mutual Exclusion and Semaphores

The critical section problem is to ensure that at most one process at a time can access a shared variable. The usual means for solving the problem is to use semaphores. Let V be a shared variable (or set of variables). Then each process accessing V does so via statements contained in a critical section, implemented as follows. Let S_V be a semaphore, with initial value 1, which is used to synchronize access to V. Then the critical section becomes:

    Wait(S_V)
    <statements accessing V>
    Signal(S_V)

To use a message class instead of a semaphore, we need to declare a class, initialize it and then use it with Send and Receive. Since semaphores are not attached to processes, and Wait and Signal do not name processes, an undirected class is appropriate. Also, since semaphores do not contain information directly accessible to processes, all the Avail_list need indicate is present or absent. Therefore our class to simulate S_V is declared as:

    C_V: message class ("undirected", 0);

Since S_V was initialized to 1, C_V must initially contain one "message", generated by Send(C_V, 0); a zero second argument is used since there is no data in the message. Finally, a critical section becomes:

    Receive(C_V, 0);
    <statements accessing V>
    Send(C_V, 0);

Receive serves the same purpose as Wait, namely it sees if anything is available and takes it if possible. Similarly, Send and Signal serve the same purpose: they both increment a value and awaken a waiting process if there is one. This relation between semaphores and message classes is more generally true. Whenever a semaphore is initialized to some value C, a message class is initialized by C Sends; this is a little awkward but need only be done once per class. Whenever a semaphore is signalled, a message (of length 0) is sent. Whenever a semaphore is awaited, a message (of length 0) is received. Message classes can therefore do anything that a semaphore can at the slight cost of extra initialization code and one extra parameter per Send and Receive. It should be noted that any implementation of semaphores which uses a waiting list (as opposed to a busy wait) executes the same basic actions that are taken by Send and Receive (see Figures 2 and 3).
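To make the correspondence concrete, the following Python sketch implements an undirected message class with blocking Send/Receive and uses it to simulate the semaphore S_V. The class and method names are ours, and a Condition variable stands in for the paper's indivisibility and Block/Wake_up machinery; this is an illustration, not the paper's implementation.

```python
import threading
from collections import deque

class MessageClass:
    """Minimal sketch of an *undirected* message class (our names, not the paper's)."""
    def __init__(self, length=0):
        self.length = length
        self.avail_list = deque()           # messages sent but not yet received
        self.cond = threading.Condition()   # stands in for indivisibility + Wait_list

    def send(self, message=None):
        with self.cond:
            self.avail_list.append(message) # store the message on the available list
            self.cond.notify()              # wake one waiting receiver, if any

    def receive(self):
        with self.cond:
            while not self.avail_list:      # no message available: block the caller
                self.cond.wait()
            return self.avail_list.popleft()

# Simulating the semaphore S_V of this section:
C_V = MessageClass(length=0)
C_V.send()       # initialization: S_V = 1, so one initial Send
C_V.receive()    # Wait(S_V)
# ... statements accessing V ...
C_V.send()       # Signal(S_V)
```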
3.2 IO and Interrupt Processing

An efficient means for performing IO on peripheral devices is to have a distinct driver process for each device. IO is then initiated by sending a message to a device driver and the result is returned by a completion message. Coordination between a device driver and the hardware IO interrupt mechanism can also be handled by message passing. After starting IO, a driver waits for an interrupt message. When the interrupt occurs, the interrupt handler sends a message to the appropriate driver process. This IO processing technique is depicted in Figure 4.

Figure 4: IO and Interrupt Processing (diagram of users, device drivers, and the interrupt handler exchanging DOIO, COMP, and INT messages)

To effect the coordination, three directed message classes are needed: DOIO, COMP, and INT. We assume that a DOIO message has the format:

1. driver process name
2. operation
3. address on device (e.g. sector and track)
4. buffer address
5. byte count

A completion message has the format:

1. user process name
2. completion status

Finally, an interrupt message has the format:

1. driver process name
2. device status

DOIO, COMP, and INT messages therefore have lengths 5, 2, and 2, respectively. With these assumptions, we can now declare the message classes:

    DOIO: message class ("directed", 5);
    COMP: message class ("directed", 2);
    INT:  message class ("directed", 2);

To activate a driver process, a user does a Send of a DOIO message describing the operation to be performed. Completion of the operation is awaited by executing a Receive of a COMP message. With this approach, a user can do other work before awaiting device completion. For example, a process logging a user onto a terminal could send an acknowledgement to the terminal and a message to the operator's console, or system log, before waiting for either completion. In fact, the order of completion does not matter, since messages are stored by type, not by who sends or receives them. An IO driver process in this scheme has the basic form:

```plaintext
do forever:
    Receive(DOIO, IO_message)
    build channel program
    Start IO
    Receive(INT, status)
    Send(COMP, status)
end;
```

When a DOIO message is received, recall that the sender's name is returned as the first parameter. Therefore, the IO driver knows whom to send the completion message to. Notice also that because messages are grouped by type, the driver can first get a DOIO message and then an INT message. In a system such as the RC4000 [1], a process has only one queue of message buffers and consequently needs to look through the queue for the correct message if it can be sent more than one type of message. It is quite conceivable that DOIO messages and INT messages are not perfectly interleaved, one after the other. This causes no problems with message classes; it does when a process only has one message queue. Grouping messages by type, not by process, gives message classes a major advantage relative to other communication techniques.

To complete our IO handling scheme, the interrupt handler takes the following actions when entered via an interrupt:

- Save state of executing process
- Determine identity and status of device
- Format interrupt message to device driver
- Send(INT, status)
- Restore state

This scheme for IO and interrupt processing has been used successfully in experimental systems at the University of Washington [4,5] and in student projects at Cornell. In both cases, the overhead has been quite tolerable and any slight inefficiencies have been more than offset, we feel, by a clearly structured approach.
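The driver loop above can be sketched the same way. The directed-class variant below is our own construction: receive(me) returns only messages whose first field names the caller and rewrites that field to the sender's name, mirroring Figures 2 and 3. The start_io parameter is a hypothetical placeholder for the device-specific work.

```python
import threading
from collections import deque

class DirectedMessageClass:
    """Sketch of a *directed* class: messages carry (receiver, sender, body)."""
    def __init__(self):
        self.avail_list = deque()
        self.cond = threading.Condition()

    def send(self, receiver, body, sender):
        with self.cond:
            self.avail_list.append((receiver, sender, body))
            self.cond.notify_all()              # let waiting receivers re-check

    def receive(self, me):
        with self.cond:
            while True:
                for msg in self.avail_list:     # first available message sent to me
                    if msg[0] == me:
                        self.avail_list.remove(msg)
                        return msg[1], msg[2]   # sender's name replaces receiver's
                self.cond.wait()

def io_driver(my_name, DOIO, INT, COMP, start_io):
    """Hypothetical driver following the paper's 'do forever' outline."""
    while True:
        user, op = DOIO.receive(my_name)    # wait for a DOIO request
        start_io(op)                        # build channel program and start IO
        _, status = INT.receive(my_name)    # wait for the interrupt message
        COMP.send(user, status, my_name)    # return completion to the requester
```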
Because DOIO and COMP act as large switches with "users" on one side and device drivers on the other, it is very easy to add or delete device drivers or users without any change to the coordination scheme. And the use of distinct drivers for each device makes it easy to concentrate on the special channel program structure or other constraints imposed by the device. Although we have defined two large classes, DOIO and COMP, for user-driver communication, there is no reason why separate message classes for each type of device couldn't be used. This might be useful if IO message formats differ from device to device². It would also shorten the lengths of the Avail_list and Wait_list within the class descriptor, thus decreasing the execution time of Send and Receive.

Before leaving this example, we would like to point out one further use of message classes. For serial devices, such as card readers or line printers, it is necessary for one user at a time to reserve the device, do his IO, and then release his reservation. In the previous example, it was shown how to use message classes to implement mutual exclusion. For device reservation, however, a user needs to know what device to use. For this, we can use a message class to implement a pool of driver process names. A user then receives a name from the pool, does IO by communicating with that driver, and then sends the name back to the pool. The implementation goes as follows. Let a message class DEVICE be declared by:

    DEVICE: message class ("undirected", 1);

and initialized by as many Send(DEVICE, driver name message) operations as there are devices in the pool. A user then does IO within a critical section preceded by a Receive(DEVICE, driver name) and followed by a Send(DEVICE, driver name). This same technique can also be used to control the allocation of other classes of serially reusable resources; for example, memory blocks or channels. But it must be used with care, because a message class uses lists of resources, which can be inefficient for keeping track of such things as free pages or drum records where a bit map may be more appropriate. Message classes can still be used, however, to synchronize the allocation and thereby preserve the integrity of the data structures used.

² Separate classes per device would also be useful if other than FIFO allocation of messages were possible. Then more efficient scheduling of devices such as drums could be achieved. This point is considered further in Section 5.
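Using the undirected MessageClass sketch from Section 3.1, the device pool can be rendered in a few lines; the driver names here are made up for the example.

```python
# Pool of driver-process names for serial devices (names are hypothetical).
DEVICE = MessageClass(length=1)
for name in ("printer_driver_0", "printer_driver_1"):
    DEVICE.send(name)            # one Send per device in the pool

driver = DEVICE.receive()        # reserve: take any free driver's name
# ... do IO by exchanging DOIO/COMP messages with `driver` ...
DEVICE.send(driver)              # release: return the name to the pool
```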
3.3 Buffer Passing in a Spooling System

The previous examples illustrated the use of message classes for mutual exclusion, process communication and reusable resource allocation. In a spooling system, buffers are typically used to hold card or line images. These buffers are effectively reusable resources requiring exclusive access, but they are usually passed between processes as spooling proceeds. Suppose we have an input spooling system with two main processes, one to read cards and the other to write card images to the disk. The card reader process cyclically acquires an empty buffer, fills it, and passes it on to the disk writer. The disk writer in turn acquires full buffers, transfers them to the disk and then releases an empty buffer. This coordination is depicted in Figure 5. An output spooling system would perform analogous operations.

To synchronize the execution of the reader and writer processes, we can use message classes to form pools of empty and input-full buffers. Each buffer is represented by a "message" with one item of data: a buffer address. The classes are declared as follows:

    EMPTY:      message class ("undirected", 1);
    INPUT_FULL: message class ("undirected", 1);

If there are initially N empty buffers, and no input-full ones, the EMPTY class is initialized by N calls of Send(EMPTY, buffer address message), each message containing a different address. The two processes then use the classes by executing code such as that shown in Figure 5. At all times there are exactly N buffers in use or in classes. Each changes type as spooling proceeds but the total number is conserved. The same synchronization scheme using a single buffer pool can also be used if there are more readers and/or writers. All control needed to allocate buffers and synchronize their use is taken care of by message classes. A multiprogramming system having semaphores and/or inter-process messages for synchronization would need to use both and have extra buffer management procedures in order to synchronize multiple readers and writers.

Figure 5: Buffer Passing in an Input Spooler

```plaintext
Reader Process:
    do forever;
        Receive(EMPTY, answer);
        read card into buffer; process it;
        Send(INPUT_FULL, answer);
    end;

Writer Process:
    do forever;
        Receive(INPUT_FULL, answer);
        write to disk;
        Send(EMPTY, answer);
    end;
```
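A sketch of Figure 5 using the undirected MessageClass from Section 3.1 follows; the buffer pool, N, and the read_card/write_disk callbacks are illustrative stand-ins, not part of the paper.

```python
N = 4                                           # illustrative number of buffers
buffers = [bytearray(80) for _ in range(N)]     # one 80-column card image each
EMPTY = MessageClass(length=1)
INPUT_FULL = MessageClass(length=1)
for i in range(N):
    EMPTY.send(i)                   # initially all N buffers are empty

def reader(read_card):
    while True:
        i = EMPTY.receive()         # acquire an empty buffer
        buffers[i][:] = read_card() # fill it (read_card returns 80 bytes)
        INPUT_FULL.send(i)          # pass it to the writer

def writer(write_disk):
    while True:
        i = INPUT_FULL.receive()        # acquire a full buffer
        write_disk(bytes(buffers[i]))   # transfer it to the disk
        EMPTY.send(i)                   # release it back to the empty pool
```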
4. IMPLEMENTATION

In order to implement message classes, it is necessary to code the Send and Receive procedures and manage storage space for descriptors and list entries. In addition, there needs to be some means for defining each class. In our implementation, we have proceeded as outlined below.

Within an operating system nucleus which also implements processes, Send and Receive exist as user-accessible primitive operations executed with interrupts inhibited. In addition, a primitive Create_message_class exists to build class descriptors. It is called by:

    Create_message_class(name, type, length)

and dynamically implements the static message class declaration which we have been using. Create has an area of descriptor storage from which it selects a block of space whenever called. In practice, Create_message_class is called during system initialization; if users can define classes it could also be called during user execution. Its function is to remember the class name and initialize the descriptor (see Figure 1). Initially both the available and waiting lists are empty.

Implementing Send and Receive requires a few support procedures to look up the class name and find its descriptor, manage the Avail_list and Wait_list, and block or wakeup (i.e. schedule) processes. When Send, Receive and Create are each indivisible primitives, the most efficient storage management for available list entries is to have a pool of free storage, implemented by linked lists. When a message needs to be saved by Send, one node is removed from the free list, filled with the message, and inserted on the appropriate available list. Conversely, when a message is allocated by Receive, the message node can be returned to the free list.

One of the attributes of a message class is the length of its messages. Since this can vary from class to class, it has to be taken into account in managing free storage. One approach is to have one size of message node for all classes and just fill as much as needed for each type of message. Another approach would be to have pools of different size message nodes, managed by using the buddy system perhaps [3], and then have Send select a node of the appropriate size. The first approach is simple but might waste some storage; the second requires a little more overhead but achieves much better storage utilization if message lengths vary. Conceivably, one could instead have a separate pool of free space for each class, but there appears to be no advantage to this approach. With a common pool an over-abundance of messages can be queued in one class as long as other classes have short Avail_lists. Fluctuations in Avail_list size cannot be handled by a pool per class without wasting a good deal of storage.

Regardless of the technique used to implement free storage for messages, it is possible that free space will be exhausted. As long as enough space is allocated for all expected messages, this should only happen if one (or more) processes have erroneously or maliciously generated many messages. A suitable recovery action in such an event would be to destroy the offending process. More generally, the exhaustion of free space can be prevented if each message class or each process has a maximal claim associated with it and the sum of the claims is never allowed to exceed the available storage.

In order to implement the Wait_list, it is natural to link wait list entries through process descriptors. A process then is always on a Wait_list or a ready list [4,5]. The same linkage space in the process descriptor can be used for both purposes. The components and relationships of a representative implementation of message classes are depicted in Figure 6.

Figure 6: Implementation of Message Classes - Components and Connections (diagram relating Create_message_class, the Send and Receive primitives, class name management and lookup, list manipulation routines, message class descriptors, available message and free lists, and the scheduler's Block/Wakeup links)

It is our experience that a Send or Receive operation results in the execution of about 250 machine instructions on an IBM S/360 or S/370. This includes all list manipulations and process scheduling (using priorities). Therefore, the time during which interrupts are inhibited never exceeds half a millisecond. Interrupt handling, when done as outlined in Section 3, takes about 300 instructions, 50 or so for state saving and acknowledgement of the interrupt followed by 250 to Send the interrupt message. The approach of using message classes therefore appears feasible for all but possibly the most time-sensitive applications.

5. EXTENSIONS AND CONCLUSION

In this paper, we have described an approach to process synchronization, called message classes, which can be used to solve mutual exclusion, message passing and resource allocation problems. As such, it is more powerful than either semaphores or inter-process messages by themselves. In addition, it can be implemented as cheaply as inter-process messages. The only extra cost with respect to semaphores is the use of list manipulation to decrease or increase the value of the "semaphore". Our students have also found message classes easy to understand, implement, and use.

Three possible extensions to message classes as they have been defined suggest themselves. First, for many problems it is desirable to order available messages. Some tasks might be more urgent than others, for example. And in scheduling activity on a drum or disk, efficiency improves greatly if accesses are ordered to minimize rotational delay or movement of heads. In order to attach a priority order to available messages, it is necessary to do something more than a queue (FIFO) insert on the Avail_list. If one field of a message is designated as the key, an insertion routine could then insert the message into the appropriate place. This, of course, takes longer in general than insertion at the end of the list but may well be worth the overhead. A "library" of list insertion routines could be implemented and the appropriate one selected when a class is created (via an extra parameter).
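For instance, a key-ordered variant of Send for the MessageClass sketch above might look as follows; receive() is unchanged, and popleft() then always yields the smallest key first. This is our illustration of the extension, not a design from the paper.

```python
def priority_send(cls, key, body):
    """Ordered insert on the Avail_list: messages are kept sorted by key."""
    with cls.cond:
        i = 0
        while i < len(cls.avail_list) and cls.avail_list[i][0] <= key:
            i += 1                             # find first entry with a larger key
        cls.avail_list.insert(i, (key, body))  # deque.insert preserves the order
        cls.cond.notify()
```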
The second extension is to allow a choice of list structures for storing the available list. We previously mentioned the use of message classes to control memory and to limit sector allocation. If bit maps or some other data structure can be selected when a class is created, more problems could be efficiently solved with message classes. The price is implementing extra insert and remove routines.

The final extension is to protect access to the classes. It may be desirable to limit the use of a specific message class to just a few processes, thus preventing others from executing Send or Receive on the class. In addition, it might be useful to distinguish between producers who only send a certain type of message and consumers who only receive it. Both situations can be controlled either by storing capabilities with each process, identifying the classes and operations it can access, or by storing access lists of process names with each message class. If message classes cannot be created dynamically by users, however, this type of protection is probably not worth the expense unless it is necessary to dynamically change the set of resources a process can access.

ACKNOWLEDGEMENTS

I am most grateful to Alan Shaw who inspired this approach to process synchronization, and to Susan Owicki and James McGraw who provided helpful comments on a draft of this paper.

BIBLIOGRAPHY
{"Source-Url": "https://ecommons.cornell.edu/bitstream/handle/1813/6831/76-275.pdf?isAllowed=y&sequence=1", "len_cl100k_base": 6320, "olmocr-version": "0.1.50", "pdf-total-pages": 30, "total-fallback-pages": 0, "total-input-tokens": 30146, "total-output-tokens": 7684, "length": "2e12", "weborganizer": {"__label__adult": 0.0003330707550048828, "__label__art_design": 0.0003151893615722656, "__label__crime_law": 0.00034618377685546875, "__label__education_jobs": 0.001117706298828125, "__label__entertainment": 8.970499038696289e-05, "__label__fashion_beauty": 0.0001513957977294922, "__label__finance_business": 0.00028133392333984375, "__label__food_dining": 0.00038361549377441406, "__label__games": 0.000518798828125, "__label__hardware": 0.004329681396484375, "__label__health": 0.0005521774291992188, "__label__history": 0.0003197193145751953, "__label__home_hobbies": 0.00014865398406982422, "__label__industrial": 0.0007634162902832031, "__label__literature": 0.00025844573974609375, "__label__politics": 0.00025010108947753906, "__label__religion": 0.0004525184631347656, "__label__science_tech": 0.145751953125, "__label__social_life": 9.864568710327148e-05, "__label__software": 0.01349639892578125, "__label__software_dev": 0.82861328125, "__label__sports_fitness": 0.0003037452697753906, "__label__transportation": 0.000926971435546875, "__label__travel": 0.000209808349609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30539, 0.01316]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30539, 0.43865]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30539, 0.90782]], "google_gemma-3-12b-it_contains_pii": [[0, 271, false], [271, 271, null], [271, 1099, null], [1099, 2535, null], [2535, 4080, null], [4080, 5293, null], [5293, 5706, null], [5706, 7357, null], [7357, 8614, null], [8614, 10155, null], [10155, 11317, null], [11317, 12568, null], [12568, 13854, null], [13854, 14945, null], [14945, 14982, null], [14982, 16093, null], [16093, 17496, null], [17496, 19114, null], [19114, 20589, null], [20589, 21961, null], [21961, 22411, null], [22411, 23803, null], [23803, 25292, null], [25292, 26763, null], [26763, 28186, null], [28186, 28513, null], [28513, 29773, null], [29773, 29971, null], [29971, 30539, null], [30539, 30539, null]], "google_gemma-3-12b-it_is_public_document": [[0, 271, true], [271, 271, null], [271, 1099, null], [1099, 2535, null], [2535, 4080, null], [4080, 5293, null], [5293, 5706, null], [5706, 7357, null], [7357, 8614, null], [8614, 10155, null], [10155, 11317, null], [11317, 12568, null], [12568, 13854, null], [13854, 14945, null], [14945, 14982, null], [14982, 16093, null], [16093, 17496, null], [17496, 19114, null], [19114, 20589, null], [20589, 21961, null], [21961, 22411, null], [22411, 23803, null], [23803, 25292, null], [25292, 26763, null], [26763, 28186, null], [28186, 28513, null], [28513, 29773, null], [29773, 29971, null], [29971, 30539, null], [30539, 30539, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30539, null]], 
"google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30539, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30539, null]], "pdf_page_numbers": [[0, 271, 1], [271, 271, 2], [271, 1099, 3], [1099, 2535, 4], [2535, 4080, 5], [4080, 5293, 6], [5293, 5706, 7], [5706, 7357, 8], [7357, 8614, 9], [8614, 10155, 10], [10155, 11317, 11], [11317, 12568, 12], [12568, 13854, 13], [13854, 14945, 14], [14945, 14982, 15], [14982, 16093, 16], [16093, 17496, 17], [17496, 19114, 18], [19114, 20589, 19], [20589, 21961, 20], [21961, 22411, 21], [22411, 23803, 22], [23803, 25292, 23], [25292, 26763, 24], [26763, 28186, 25], [28186, 28513, 26], [28513, 29773, 27], [29773, 29971, 28], [29971, 30539, 29], [30539, 30539, 30]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30539, 0.02034]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
40e39f233edc0ff83a02fc4ff9a9967f51e78681
[REMOVED]
{"Source-Url": "http://nob.cs.ucdavis.edu/classes/ecs235b-2014-01/slides/2014-01-16.pdf", "len_cl100k_base": 6949, "olmocr-version": "0.1.53", "pdf-total-pages": 60, "total-fallback-pages": 0, "total-input-tokens": 97109, "total-output-tokens": 9263, "length": "2e12", "weborganizer": {"__label__adult": 0.0004394054412841797, "__label__art_design": 0.0005159378051757812, "__label__crime_law": 0.0011091232299804688, "__label__education_jobs": 0.0013608932495117188, "__label__entertainment": 9.512901306152344e-05, "__label__fashion_beauty": 0.00021195411682128904, "__label__finance_business": 0.0005745887756347656, "__label__food_dining": 0.0005412101745605469, "__label__games": 0.0020160675048828125, "__label__hardware": 0.00431060791015625, "__label__health": 0.001049041748046875, "__label__history": 0.0005240440368652344, "__label__home_hobbies": 0.0003826618194580078, "__label__industrial": 0.0014209747314453125, "__label__literature": 0.0004544258117675781, "__label__politics": 0.0005550384521484375, "__label__religion": 0.0008091926574707031, "__label__science_tech": 0.278076171875, "__label__social_life": 0.00012242794036865234, "__label__software": 0.009613037109375, "__label__software_dev": 0.69384765625, "__label__sports_fitness": 0.0004146099090576172, "__label__transportation": 0.0011959075927734375, "__label__travel": 0.00025343894958496094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19890, 0.00876]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19890, 0.26126]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19890, 0.74042]], "google_gemma-3-12b-it_contains_pii": [[0, 31, false], [31, 152, null], [152, 419, null], [419, 616, null], [616, 975, null], [975, 1443, null], [1443, 2016, null], [2016, 2191, null], [2191, 2514, null], [2514, 3088, null], [3088, 3691, null], [3691, 4127, null], [4127, 4647, null], [4647, 5367, null], [5367, 5832, null], [5832, 6356, null], [6356, 6628, null], [6628, 7032, null], [7032, 7533, null], [7533, 7982, null], [7982, 8278, null], [8278, 8431, null], [8431, 8526, null], [8526, 8716, null], [8716, 8889, null], [8889, 9163, null], [9163, 9516, null], [9516, 9792, null], [9792, 9956, null], [9956, 10172, null], [10172, 10706, null], [10706, 11069, null], [11069, 11346, null], [11346, 11469, null], [11469, 11651, null], [11651, 11890, null], [11890, 12134, null], [12134, 12583, null], [12583, 12822, null], [12822, 13096, null], [13096, 13401, null], [13401, 13710, null], [13710, 14011, null], [14011, 14299, null], [14299, 14853, null], [14853, 15064, null], [15064, 15223, null], [15223, 15455, null], [15455, 15827, null], [15827, 16054, null], [16054, 16301, null], [16301, 16494, null], [16494, 16882, null], [16882, 17285, null], [17285, 17586, null], [17586, 17872, null], [17872, 18462, null], [18462, 18605, null], [18605, 19331, null], [19331, 19890, null]], "google_gemma-3-12b-it_is_public_document": [[0, 31, true], [31, 152, null], [152, 419, null], [419, 616, null], [616, 975, null], [975, 1443, null], [1443, 2016, null], [2016, 2191, null], [2191, 2514, null], [2514, 3088, null], [3088, 3691, null], [3691, 4127, null], [4127, 4647, null], [4647, 5367, null], [5367, 5832, null], [5832, 6356, null], [6356, 6628, null], [6628, 7032, null], [7032, 7533, null], [7533, 7982, null], [7982, 8278, null], [8278, 8431, null], [8431, 8526, null], [8526, 8716, null], [8716, 8889, null], [8889, 9163, null], [9163, 9516, 
null], [9516, 9792, null], [9792, 9956, null], [9956, 10172, null], [10172, 10706, null], [10706, 11069, null], [11069, 11346, null], [11346, 11469, null], [11469, 11651, null], [11651, 11890, null], [11890, 12134, null], [12134, 12583, null], [12583, 12822, null], [12822, 13096, null], [13096, 13401, null], [13401, 13710, null], [13710, 14011, null], [14011, 14299, null], [14299, 14853, null], [14853, 15064, null], [15064, 15223, null], [15223, 15455, null], [15455, 15827, null], [15827, 16054, null], [16054, 16301, null], [16301, 16494, null], [16494, 16882, null], [16882, 17285, null], [17285, 17586, null], [17586, 17872, null], [17872, 18462, null], [18462, 18605, null], [18605, 19331, null], [19331, 19890, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19890, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19890, null]], "pdf_page_numbers": [[0, 31, 1], [31, 152, 2], [152, 419, 3], [419, 616, 4], [616, 975, 5], [975, 1443, 6], [1443, 2016, 7], [2016, 2191, 8], [2191, 2514, 9], [2514, 3088, 10], [3088, 3691, 11], [3691, 4127, 12], [4127, 4647, 13], [4647, 5367, 14], [5367, 5832, 15], [5832, 6356, 16], [6356, 6628, 17], [6628, 7032, 18], [7032, 7533, 19], [7533, 7982, 20], [7982, 8278, 21], [8278, 8431, 22], [8431, 8526, 23], [8526, 8716, 24], [8716, 8889, 25], [8889, 9163, 26], [9163, 9516, 27], [9516, 9792, 28], [9792, 9956, 29], [9956, 10172, 30], [10172, 10706, 31], [10706, 11069, 32], [11069, 11346, 33], [11346, 11469, 34], [11469, 11651, 35], [11651, 11890, 36], [11890, 12134, 37], [12134, 12583, 38], [12583, 12822, 39], [12822, 13096, 40], [13096, 13401, 41], [13401, 13710, 42], [13710, 14011, 43], [14011, 14299, 44], [14299, 14853, 45], [14853, 15064, 46], [15064, 15223, 47], [15223, 15455, 48], [15455, 15827, 49], [15827, 16054, 50], [16054, 16301, 51], [16301, 16494, 52], [16494, 16882, 53], [16882, 17285, 54], [17285, 17586, 55], [17586, 17872, 56], [17872, 18462, 57], [18462, 18605, 58], [18605, 19331, 59], [19331, 19890, 60]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19890, 0.03672]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
12cd9ecdec19af26f9a4ca194e9d4564fc6c37d8
SQuAD 2.0 (QANet, Character Embeddings and Token Features, Hyperparameter Tuning)

Bernardo Casares, Eric Nielsen, Eric Redondo
{bcasares, nielsene, eredondo}@stanford.edu

Abstract

The Stanford Question Answering Dataset (SQuAD) is one of the most popular benchmarks for reading comprehension and question answering tasks. This project explores several techniques to improve upon a baseline model that is designed to answer questions using phrases in given context paragraphs. We focus specifically on the SQuAD 2.0 dataset, in which the correct answers may or may not appear in the context paragraphs. The baseline model is based on Bidirectional Attention Flow (BiDAF), and the enhancement techniques explored include character-level embeddings, the addition of more expressive token features (e.g., part-of-speech tagging), hyperparameter tuning, and an encoder-block architecture called QANet. The character-level embeddings and part-of-speech tagging were the most impactful improvements in terms of model performance.

1 Introduction

Training computers to understand text has gained significant popularity over the past several years due to the many applications it enables. The SQuAD 2.0 dataset is one of the most popular reading comprehension benchmarks. The dataset comprises many questions, each associated with a context paragraph that may or may not include the answer. The goal of the challenge is to train a computer to answer the questions as correctly as possible, providing a measure of how well the computer can 'understand' text.

Significant research has been done over the past several years to design models to compete in this difficult challenge. In general, submissions fall into one of two categories: those that use Pre-trained Contextual Embeddings (PCE) such as ELMo and BERT, and those that do not. PCE models tend to have much higher performance than non-PCE models, but come with an added level of complexity. We chose to focus our project on enhancing existing non-PCE models.

For a baseline, we were given a Bidirectional Attention Flow (BiDAF) model that used only word-level embeddings. We improved this baseline by adding a more expressive embedding layer that supplemented word-level embeddings with character-level representations and three additional token features: part-of-speech (POS), named entity recognition (NER) tags, and term frequency (TF). These new embeddings allow the model to better capture rich semantic and structural word information and to better understand word relationships. Ultimately, these updates and hyperparameter tuning resulted in a superior BiDAF model.

We also worked to improve upon a different model architecture called QANet. Prior to BERT, QANet had state-of-the-art performance on SQuAD 1.1. This led us to believe that a successful implementation of QANet could obtain high scores on the SQuAD 2.0 dataset as well. The QANet design allows training to be parallelized, to run faster, and, in theory, to obtain better results than BiDAF models. Although we did not obtain faster speeds or better results using QANet, we believe that further updates could improve its performance.

Below, we outline a collection of related work, our approach and implementation, the experiments we conducted and their results, and a brief analysis of overall model performance.

2 Related Work

2.1 BiDAF

The baseline model is heavily inspired by the BiDAF model developed by Seo et al. (1).
This 2016 paper expands on previous work utilizing attention mechanisms in machine comprehension by introducing a multi-stage hierarchical process to improve the representation of context. Additionally, a BiDAF mechanism is presented to obtain a query-aware context representation without the need for prior summarization. Using these techniques, the model achieved an F1 score of 77.3 and an EM score of 68.0 on the test set of the SQuAD 1.0 dataset. The primary difference between the original model and the baseline employed in this work is that the original implementation included a character-level embedding layer.

2.2 QANet

In a 2018 paper (8), Yu et al. present the QANet architecture, which addresses one of the major issues with BiDAF models: their recurrent neural networks (RNNs). The authors describe the lengthy run-times of both training and inference due to the sequential (non-parallelizable) nature of RNNs. To solve this, the QANet architecture does not use RNNs. Its encoder instead consists only of convolution and self-attention, where convolution models the local interactions and self-attention models the global interactions. With the large computational power they had available, their model was 3x to 13x faster in training and 4x to 9x faster in inference while achieving accuracy equivalent to recurrent models. An additional benefit of QANet is that the speed-up allows the model to be trained with much more data. The paper describes a data augmentation technique to take advantage of this benefit; however, we did not include this component in our work.

2.3 DrQA

In a 2017 paper by Chen et al. (7), the authors outline a model called DrQA that is designed to compete in the SQuAD 1.0 challenge. The model is notable for its implementation of token embeddings, which include several token features in addition to GloVe (11) word embeddings. Specifically, each token embedding includes information about the token's part-of-speech, named entity recognition tag, and normalized term frequency. At the time the paper was published, the DrQA model achieved competitive scores on the SQuAD 1.0 leaderboard: 69.5 EM and 78.8 F1 on the dev set, and 70.0 EM and 79.0 F1 on the test set. Although the authors offer little detail about exactly how the additional token features were generated and then aggregated in each embedding, our team was inspired to attempt a similar approach in our work. More expressive token embeddings should provide the model with more information to learn from.

3 Approach

Our team worked to build two types of models to compete on the non-PCE SQuAD 2.0 leaderboard. Our first model design kept the overall BiDAF architecture from the provided baseline and improved its performance with tuned hyperparameters and enhanced token embeddings. These enhancements included the addition of character-level embeddings, as well as part-of-speech, named entity recognition tag, and term frequency embeddings. Our second model was designed using the QANet architecture, and also made use of the enhanced token embeddings. Initial attempts were made to instead integrate BERT embeddings; however, this approach was abandoned due to a number of implementation issues. It is briefly described below for completeness.

3.1 Baseline

The baseline model used for benchmark purposes was the model provided by the CS224N course staff. As mentioned previously, it is based on a BiDAF architecture.
3.2 Character-Level Embeddings (BiDAF)

The first improvement we made to the provided baseline was the addition of character-level embeddings. Originally, each word in the vocabulary was represented as a length-300 GloVe embedding vector. Word-level embeddings are commonly used in NLP tasks and typically result in higher accuracy and lower computational costs as compared to character-level embeddings. However, models using word-level embeddings are inherently limited by the fact that they cannot reason about words not included in the vocabulary. To address this limitation, we concatenate the word-level embeddings with length-64 character-embedding vectors.

The character-level embeddings are generated using a convolutional encoder that was built as part of a previous CS224N problem set. The first step of the encoder is a lookup of the individual length-64 character embeddings, using the character indices and embeddings provided by the baseline model. Then, the embeddings of all characters in each word are combined using a 1-dimensional convolutional network. To calculate the \(i^{th}\) output feature for the \(t^{th}\) window of the input, a convolution is computed between the input window \(x_{\text{reshaped}}[:, t:t+k-1]\) and the weights \(W[i,:,:]\), plus a bias term \(b_i\):

\[
(x_{\text{conv}})_{i,t} = \operatorname{sum}\big(W[i,:,:] \odot x_{\text{reshaped}}[:, t:t+k-1]\big) + b_i
\]

The ReLU function and max-pooling are applied to the overall convolution output to produce the final embedding \(x_{\text{conv,out}}\):

\[
x_{\text{conv,out}} = \operatorname{MaxPool}(\operatorname{ReLU}(\operatorname{Conv1D}(x_{\text{reshaped}})))
\]

Concatenating each of these length-64 vectors with each length-300 word-level embedding produces final token embeddings of length 364.
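A sketch of this encoder in PyTorch is shown below. The report does not state the convolution's kernel size, so k = 5 here is an assumption, and the module name is ours.

```python
import torch
import torch.nn as nn

class CharCNNEmbedding(nn.Module):
    """Sketch of the character encoder described above (k = 5 is assumed)."""
    def __init__(self, num_chars, char_dim=64, kernel_size=5):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, char_dim, kernel_size)

    def forward(self, char_idxs):
        # char_idxs: (batch, seq_len, max_word_len) indices; max_word_len >= k
        b, s, w = char_idxs.shape
        x = self.char_emb(char_idxs.view(b * s, w))  # (b*s, w, 64)
        x = x.transpose(1, 2)                        # Conv1d expects (N, C, L)
        x = torch.relu(self.conv(x))                 # (b*s, 64, w-k+1)
        x = x.max(dim=2).values                      # max-pool over window positions
        return x.view(b, s, -1)                      # (batch, seq_len, 64)

# word_emb: (batch, seq_len, 300) GloVe vectors; concatenation gives 364 dims:
# token_emb = torch.cat([word_emb, char_cnn(char_idxs)], dim=-1)
```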
3.3 Additional Word Features (BiDAF)

In addition to word-level and character-level embeddings for each token, we also added three embedding features derived from different attributes of each token. Part-of-speech (POS), named entity recognition (NER) tag, and normalized term frequency (TF) vectors of length 6, 5, and 1, respectively, were concatenated with each word-level (length 300) and character-level (length 64) embedding vector to produce overall token embeddings of length 376.

POS embeddings were generated using Python's nltk (Natural Language Toolkit) library, which tags tokens with one of 46 possible parts-of-speech. Initially, the POS embeddings were represented as one-hot encoding vectors of length 46. However, to reduce overall embedding size (which shortens model training time), the final model represents each token's POS as a binary vector of length 6 whose numerical value equals the index of the assigned POS tag in a list of length 46. Due to time and compute constraints, the use of one-hot encoding vectors was not tested, although it is possible that this representation could lead to superior results, since it would make the embeddings more expressive in terms of POS.

NER tag embeddings were generated using Python's spacy library, which tags tokens with one of 19 possible NER designations. These designations correspond to a series of descriptive categories (e.g., person names, organizations, locations, times, money). Similar to the POS embeddings, NER tags were initially represented as one-hot encoding vectors, but were shortened to binary vectors of length 5.

Of note is the limitation of spacy (and other comparable libraries) in tagging many tokens with NER tags. Of the 88,714 total words in the model's vocabulary, only 14,696 (16.6%) could be associated with NER tags. Future model enhancements could include the ensembling of multiple NER tagging tools to provide greater coverage.

Finally, the normalized term frequency of each token added a single additional embedding value. The frequencies were computed using all phrases in the training set (both context paragraphs and questions). The occurrences of each unique token were first counted; the counts were then normalized by the largest value to restrict them to the range 0 to 1. Though not attempted in this project, it is possible that scaling the term frequencies or representing them in another way could lead to better model performance, as it could allow the frequencies to play a more significant role in each token embedding.

3.4 Hyperparameter Tuning and Additional Updates (BiDAF)

Based on initial experimentation with the BiDAF models, several hyperparameters were tuned in an attempt to combat noted weaknesses. In an effort to make the model more expressive, the hidden size was increased from 100 to 128. To prevent overfitting, the drop probability was increased from 0.2 to 0.3 and L2 weight decay was added with parameter $\lambda = 0.001$. Finally, to improve the model's convergence, the Adam optimizer (9) was substituted for Adagrad (10), since it generally modulates the learning rate better and makes use of momentum in the gradient.

3.5 QANet

Before QANet, most question answering models were based on RNNs with attention. However, RNNs have a major downside: they are sequential and therefore slow. In an attempt to solve this while maintaining similar performance, our group built on the QANet architecture described in (8). The high-level structure of the model consists of an embedding layer, an embedding encoder layer, a context-query attention layer, a model encoder layer and an output layer. A visual description can be seen in Figure 1.

Figure 1: A visual description of the QANet architecture

We followed the implementation from the paper as closely as possible, with the exception of the embedding layer. We also used several functions of existing code from public sources, specifically two repositories containing implementations of transformers (from which QANet adapts many ideas): jadore801120/attention-is-all-you-need-pytorch (13) and SamLynnEvans/Transformer (14). The embedding layer from our most successful BiDAF model was used, meaning that embeddings included word-level and character-level representations in addition to the three extra token features. For the encoder blocks (both the stacked embedding and stacked model blocks in Figure 1) we used existing positional encoding code from (14) and self-attention and feed-forward code from (13). The repeated convolutional sub-block was coded by our team. The context-query attention block code was taken from the class-provided BiDAFAttention function, while the three layers in each output block were coded by our team.

3.6 BERT

Our first attempt for the project was to modify an existing BERT implementation and adapt it to the baseline model. We started from the huggingface/pytorch-pretrained-BERT GitHub repository (6) and worked to extract the BERT embeddings for our project.
However, after several hours of work, we decided it would be easier to simply run BERT from the huggingface repository. We ran into memory errors, so we decreased the batch size to 8 and changed the gradient accumulation steps to 4. This approach did not work well for us, however, and we got very poor results (EM: 0.099, F1: 3.869). Although our initial results could likely have been improved by fine-tuning the non-embedding layers on the SQuAD 2.0 dataset, we realized it was going to be very challenging to add additional improvements on top of BERT. Given our experience and the advice of course TAs, our team decided not to integrate BERT with the provided BiDAF baseline and to stop any further work with BERT.

4 Experiments

4.1 Data

The dataset for the project is SQuAD 2.0 (7). The train and dev splits used for the project are sourced from the official SQuAD 2.0 train set, while the test split is sourced from the official dev set.

4.2 Evaluation Method

The primary methods for model evaluation are EM and F1 scores, the official SQuAD 2.0 evaluation metrics. Results are compared to those of other students on the leaderboard. Since the final model design does not use BERT embeddings, we are competing in the non-PCE division.

4.3 Experimental Details

For all experiments in the subsections below, the default train and dev sets were used. The BiDAF experiments built on top of or altered the provided baseline model. Unless otherwise noted, a learning rate of 0.001 was used, and the model was trained for 30 epochs.

4.3.1 BiDAF Model Experiments

The initial experiment utilized the provided BiDAF model with no changes made to the default architecture or parameters, in order to reproduce the expected baseline performance. The learning rate used was 0.5.

Subsequent experiments revolved around the addition of new word and/or character embedding information. First, experiments were run with the addition of character embeddings. These experiments incorporated the baseline model with character-level embeddings and were trained with the Adam optimizer. Next, experiments were done to gauge the effectiveness of character-level embeddings supplemented with additional embedding features: part-of-speech (POS), named entity recognition (NER), and term frequency (TF). Each of these experiments was run with the BiDAF model and the Adam optimizer.

Final BiDAF experiments with varying hyperparameters were run on a model using full token embeddings (combining word and character embeddings with all three additional token features). The version using the hyperparameter configuration leading to the best EM and F1 scores is included in the results below. Specifically, this model used a learning rate of 0.001, L2 weight decay with $\lambda = 0.001$, a drop probability of 0.3, a hidden layer size of 128, and the Adam optimizer. It was trained for 35 epochs.

4.3.2 QANet Model Experiments

Due to time and computing constraints after the QANet architecture was successfully implemented, a single experiment was conducted using the model. For this experiment, the embedding layer of the highest-performing BiDAF model was used; token embeddings comprised word-level and character-level representations, as well as POS, NER tag, and normalized TF vectors. The hyperparameters used were largely taken from the original QANet paper (8). The learning rate increased logarithmically from 0 to 0.001 during the first 1000 training steps, then remained constant; one possible rendering of this warm-up is sketched below.
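The report gives only the endpoints of this warm-up, so the exact functional form below (the log-ratio used in common QANet implementations) is an assumption.

```python
import math
import torch

base_lr, warmup = 1e-3, 1000

def log_warmup(step):
    # LambdaLR factor: rises logarithmically from 0 to 1 over `warmup` steps,
    # then stays constant at 1 (one common rendering of the schedule).
    return 1.0 if step >= warmup else math.log(step + 1) / math.log(warmup)

model = torch.nn.Linear(4, 4)   # stand-in for the QANet model
optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=log_warmup)
# per training step: optimizer.step(); scheduler.step()
```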
Regularization was implemented using L2 weight decay with parameter $\lambda = 1 \times 10^{-7}$, and dropout layers with drop rates of 0.1. Differing from the paper, we chose a batch size of 8 and a hidden size of 96 for better performance on the hardware used. All other hyperparameters matched those described in the original QANet paper. The experiment was run on an NV12 virtual machine for a total of 25 epochs, each including 129,941 iterations. Each epoch took approximately 1.5 hours to complete. The number of epochs trained was fewer than the standard 30 due to the model's lengthy run time.

4.4 Results

4.4.1 BiDAF Models

Table 1 and Figure 2 provide a summary of our final BiDAF model results. The model using full embeddings (word, character, POS, NER tag, and TF) with tuned hyperparameters performed the best out of the 7 models tested. It achieved an EM score of 60.07 and an F1 score of 63.19 on the dev set. Compared to the baseline model, this is an EM score increase of 3.93 and an F1 score increase of 3.30. The training loss plot in Figure 2 shows that the tuned model maintained the highest training loss out of any of the models. This suggests that the other models overfit the training set, and that the tuned model was better able to generalize, therefore performing better on the dev set.

Also of note, adding part-of-speech embeddings had the most beneficial effect of the three additional token features. The dev EM and F1 scores of the POS model were nearly as high as the scores of the model using the full embeddings. Adding only named entity recognition tags actually decreased performance compared to the model with only word and character embeddings; adding only normalized term frequency resulted in limited improvement. These results indicate that the model could be simplified and sped up by removing the NER tag and TF embeddings while maintaining similar performance.

Table 1: BiDAF Model EM and F1 Scores

<table>
<thead>
<tr>
<th>Model</th>
<th>EM Score</th>
<th>F1 Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Baseline BiDAF</td>
<td>56.14</td>
<td>59.89</td>
</tr>
<tr>
<td>Word+Character Embeddings</td>
<td>58.91</td>
<td>62.06</td>
</tr>
<tr>
<td>Word+Character+POS Embeddings</td>
<td>59.74</td>
<td>62.80</td>
</tr>
<tr>
<td>Word+Character+NER Embeddings</td>
<td>58.34</td>
<td>61.78</td>
</tr>
<tr>
<td>Word+Character+TF Embeddings</td>
<td>59.13</td>
<td>62.38</td>
</tr>
<tr>
<td>Word+Character+POS+NER+TF (All) Embeddings</td>
<td>59.50</td>
<td>62.82</td>
</tr>
<tr>
<td>All Embeddings, Hyperparameters Tuned (Best)</td>
<td>60.07</td>
<td>63.19</td>
</tr>
</tbody>
</table>

Figure 2: Final BiDAF model results. (a) Training loss, (b) Dev loss, (c) Dev EM, (d) Dev F1

4.4.2 QANet Model

Table 2 and Figure 3 provide a summary of our single QANet model results. After 25 epochs of training, the QANet model achieved a dev EM score of 57.00 and F1 score of 60.59. These scores are marginally better than the baseline results, but worse than those of all other BiDAF models. The training and dev plots in Figure 3 suggest that further tuning the model's hyperparameters could yield far superior results. The QANet model decreased loss faster than the baseline and best BiDAF models on both the training and dev sets. However, the loss then began to increase, often erratically. Decaying the learning rate after approximately 500,000 iterations would likely improve this behavior and result in higher scores. Increasing the batch size could also lead to improvement in this area.
A small batch size of 8 was chosen due to computing constraints; a larger batch size such as 64 would lead to less variability between batches and more consistent training.

Table 2: QANet Model EM and F1 Scores

<table>
<thead>
<tr>
<th>Model</th>
<th>EM Score</th>
<th>F1 Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>All Embeddings with QANet</td>
<td>57.00</td>
<td>60.59</td>
</tr>
</tbody>
</table>

Figure 3: Final QANet model results. (a) Training loss, (b) Dev loss, (c) Dev EM, (d) Dev F1. Orange: Baseline BiDAF, Cyan: All Embed Tuned BiDAF (best), Green: All Embed QANet

5 Analysis

5.1 Ablation Studies

Based on an ablative analysis of the BiDAF model experiments, it is possible to gather some insight into which changes to the original baseline had a significant positive effect on the results. The final tuned model (drop prob = 0.3, hidden size = 128) using all embeddings achieved an F1 score of 63.19. The same model using untuned hyperparameters (drop prob = 0.2, hidden size = 100) achieved a score of 62.82. The difference of 0.37 (0.6%) indicates that tuning the hyperparameters provided only a minor advantage.

We can then compare the model with all (word+character+POS+NER+TF) embeddings to the three models with word, character, and one of the POS/NER/TF embeddings. The POS model achieved an F1 score of 62.80, while the NER model achieved 61.78 and the TF model 62.38. From these results, we see evidence that the POS embeddings provided a significant positive effect while the other two provided very little: performance drops off heavily when removing the POS embeddings, but remains nearly constant when removing both NER and TF.

Finally, we can analyze the effect of removing character embeddings from the model by comparing the results of the character embeddings experiment with the original baseline model experiment. The model with character embeddings achieved an F1 score of 62.06 while the baseline achieved 59.89. Of all the model components analyzed so far, this difference of 2.17 (3.5%) represents the greatest effect. It is likely that, with the addition of character embedding information, the model is better able to handle instances of unknown words appearing in either the question or context by using character-level information to gain insight into these unknown words.

5.2 Characteristic Examples

The example outputs viewable in TensorBoard provide some evidence for general strengths and weaknesses of the trained models. For this analysis, the best performing BiDAF model (enhanced-embeddings, fine-tuned) and the enhanced-embedding QANet model are considered. Although it is difficult to generalize about model performance, we note several interesting behaviors below.

Results from the enhanced-embedding, fine-tuned BiDAF model show that many chosen answers contain context phrases that are close to correct, but too lengthy. For example, the correct answer to the question 'What is the most important type of Norman art preserved in churches?' is 'mosaics', but the model chose 'sculptured fonts, capitals, and more importantly mosaics'. Similarly, the correct answer to the question 'The time required to output an answer on a deterministic Turing machine is expressed as what?' is 'state transitions', but the model chose 'M on input x is the total number of state transitions'. Continuing the trend, the correct answers to two other questions are 'lack of net force' and 'problem instance', but the model chose 'a lack of net force' and 'a problem instance', respectively.
The enhanced-embedding QANet model demonstrated an ability to answer with the correct type of item or idea, but often struggled to choose the right item from the candidates in the context. For example, the correct answer to the question 'Which directive mentioned was created in 1994?' from the context '...the 1994 Works Council Directive, which required workforce consultation in businesses, and the 1996 Parental Leave Directive...' is 'Works Council Directive', but the model instead answered with 'Parental Leave Directive'. This could be due to the sub-optimal training of the QANet architecture, leaving the attention mechanism with a weak ability to identify where in the context the answer is likely to come from.

6 Conclusion

As demonstrated by this project, there are multiple paths to improving the performance of a baseline BiDAF model competing in the SQuAD 2.0 challenge. Supplementing word-level embeddings with character-level embeddings improves EM and F1 scores by approximately 2 points each. EM and F1 scores can be further increased by approximately 1 point each by fine-tuning hyperparameters and adding three additional features to the embeddings: part-of-speech tags, named entity recognition tags, and term frequency. Adding only the part-of-speech feature seems to be nearly as effective as adding all three. Further tuning the model's hyperparameters and adding different, more descriptive token feature embeddings could result in even better performance.

Our work with the QANet model also achieved results better than the baseline, but there are many opportunities to further improve its performance. Many more experiments could be run using different hyperparameters. Specifically, decaying the learning rate at later time steps and increasing the batch size would likely improve the sometimes erratic training behavior. Also, increasing the model's hidden size could allow for a more expressive and therefore more accurate model. Overall, it is recognized that the increase from baseline performance achieved in this project was relatively small. However, given more time and computing power, we are confident that the simple improvements described above could achieve far better results.
{"Source-Url": "http://web.stanford.edu/class/cs224n/reports/default/15744366.pdf", "len_cl100k_base": 5986, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 21612, "total-output-tokens": 7303, "length": "2e12", "weborganizer": {"__label__adult": 0.0005159378051757812, "__label__art_design": 0.0009613037109375, "__label__crime_law": 0.0006375312805175781, "__label__education_jobs": 0.005901336669921875, "__label__entertainment": 0.00031185150146484375, "__label__fashion_beauty": 0.0003180503845214844, "__label__finance_business": 0.00035881996154785156, "__label__food_dining": 0.0004508495330810547, "__label__games": 0.0009679794311523438, "__label__hardware": 0.001354217529296875, "__label__health": 0.0009255409240722656, "__label__history": 0.0004758834838867187, "__label__home_hobbies": 0.0001226663589477539, "__label__industrial": 0.0006952285766601562, "__label__literature": 0.0016813278198242188, "__label__politics": 0.0005121231079101562, "__label__religion": 0.0007691383361816406, "__label__science_tech": 0.322265625, "__label__social_life": 0.00026679039001464844, "__label__software": 0.038330078125, "__label__software_dev": 0.62109375, "__label__sports_fitness": 0.0004045963287353515, "__label__transportation": 0.000591278076171875, "__label__travel": 0.0002722740173339844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29497, 0.0554]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29497, 0.26712]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29497, 0.9255]], "google_gemma-3-12b-it_contains_pii": [[0, 3384, false], [3384, 7232, null], [7232, 11798, null], [11798, 14339, null], [14339, 18080, null], [18080, 20328, null], [20328, 22354, null], [22354, 26948, null], [26948, 29497, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3384, true], [3384, 7232, null], [7232, 11798, null], [11798, 14339, null], [14339, 18080, null], [18080, 20328, null], [20328, 22354, null], [22354, 26948, null], [26948, 29497, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29497, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29497, null]], "pdf_page_numbers": [[0, 3384, 1], [3384, 7232, 2], [7232, 11798, 3], [11798, 14339, 4], [14339, 18080, 5], [18080, 20328, 6], [20328, 22354, 7], [22354, 26948, 8], [26948, 29497, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29497, 0.10435]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
0be58a5a912d40e80a9ea82f37ce8d1221301588
Code Generation

Wilhelm/Maurer: Compiler Design, Chapter 12

Mooly Sagiv, Tel Aviv University, and Reinhard Wilhelm, Universität des Saarlandes, wilhelm@cs.uni-sb.de

19 December 2007

"Standard" Structure

source (text) → lexical analysis (7) → tokenized program → syntax analysis (8) → syntax tree → semantic analysis (9) → decorated syntax tree → optimizations (10) → intermediate representation → code generation (11, 12) → machine program

(The phases are implemented with finite automata, pushdown automata, attribute grammar evaluators, abstract interpretation + transformations, and tree automata + dynamic programming + …, respectively.)

Code Generation

Real machines (instead of abstract machines):
- Register machines,
- Limited resources (registers, memory),
- Fixed word size,
- Memory hierarchy,
- Intraprocessor parallelism.

Architectural Classes: CISC vs. RISC

**CISC** (IBM 360, PDP11, VAX series, INTEL 80x86, Pentium, Motorola 680x0)
- A large number of addressing modes
- Computations on stores
- Few registers
- Different instruction lengths
- Different execution times for instructions
- Microprogrammed instruction sets

**RISC** (Alpha, MIPS, PowerPC, SPARC)
- One instruction per cycle (with pipeline for loads/stores)
- Load/store architecture: computations in registers (only)
- Many registers
- Few addressing modes
- Uniform instruction lengths
- Hard-coded instruction sets
- Intra-processor parallelism: pipeline, multiple units, Very Long Instruction Words (VLIW), superscalarity, speculation

Phases in code generation

**Code Selection:** selecting semantically equivalent sequences of machine instructions for programs,
**Register Allocation:** exploiting the registers for storing values of variables and temporaries,
**Code Scheduling:** reordering instruction sequences to exploit intraprocessor parallelism.

Optimal register allocation and instruction scheduling are NP-hard.

Phase Ordering Problem

Partly contradictory optimization goals:
- Register allocation: minimize the number of registers used \(\Rightarrow\) reuse registers,
- Code scheduling: exploit parallelism \(\Rightarrow\) keep computations independent, no shared registers.

Issues:
- Software complexity
- Result quality
- Order in serialization

Example: \( x = y + z \)

**CISC (VAX):** `addl3 4(fp), 6(fp), 8(fp)`

**RISC:**
- load \( r_1, 4(fp) \)
- load \( r_2, 6(fp) \)
- add \( r_1, r_2, r_3 \)
- store \( r_3, 8(fp) \)

The VLIW Architecture
- Several functional units,
- One instruction stream,
- Jump priority rule,
- FUs connected to register banks,
- Enough parallelism available?

Instruction Pipeline

Several instructions are in different states of execution. Potential structure:
1. instruction fetch and decode,
2. operand fetch,
3. instruction execution,
4. write-back of the result into the target register.
(Figure: pipeline diagram; instructions \(B_1\)–\(B_4\) pass through the four stages in overlapping fashion over cycles 1–7.)

Pipeline hazards
- Cache hazards: instruction or operand not in cache,
- Data hazards: needed operand not available,
- Structural hazards: resource conflicts,
- Control hazards: (conditional) jumps.

Program Representations
- Abstract syntax tree: algebraic transformations, code generation for expression trees,
- Control flow graph: program analysis (intraprocedural),
- Call graph: program analysis (interprocedural),
- Static single assignment: optimization, code generation,
- Program dependence graph: instruction scheduling, parallelization,
- Register interference graph: register allocation.

Code Generation: Integrated Methods
- Integration of register allocation with instruction selection,
- Simple machine model:
  - \( r \) general-purpose registers \( R_0, \ldots, R_{r-1} \),
  - Two-address instructions:
\[
\begin{align*}
R_i &:= M[V] && \text{Load} \\
M[V] &:= R_i && \text{Store} \\
R_i &:= R_i \text{ op } M[V] && \text{Compute} \\
R_i &:= R_i \text{ op } R_j
\end{align*}
\]
- Two phases:
  1. Computing register requirements,
  2. Generating code, allocating registers and temporaries.

Example Tree

Source: \( f := (a + b) - (c - (d + e)) \)

Tree:
```
        :=
       /  \
      f    -
          / \
         +   -
        / \ / \
       a  b c  +
              / \
             d   e
```

Generated Code

With 2 registers \(R_0\) and \(R_1\), there are two possible code sequences:

<table>
<thead>
<tr> <th>Sequence 1</th> <th>Sequence 2</th> </tr>
</thead>
<tbody>
<tr> <td>$R_0 := M[a]$</td> <td>$R_0 := M[c]$</td> </tr>
<tr> <td>$R_0 := R_0 + M[b]$</td> <td>$R_1 := M[d]$</td> </tr>
<tr> <td>$R_1 := M[d]$</td> <td>$R_1 := R_1 + M[e]$</td> </tr>
<tr> <td>$R_1 := R_1 + M[e]$</td> <td>$R_0 := R_0 - R_1$</td> </tr>
<tr> <td>$M[t_1] := R_1$</td> <td>$R_1 := M[a]$</td> </tr>
<tr> <td>$R_1 := M[c]$</td> <td>$R_1 := R_1 + M[b]$</td> </tr>
<tr> <td>$R_1 := R_1 - M[t_1]$</td> <td>$R_1 := R_1 - R_0$</td> </tr>
<tr> <td>$R_0 := R_0 - R_1$</td> <td>$M[f] := R_1$</td> </tr>
<tr> <td>$M[f] := R_0$</td> <td></td> </tr>
</tbody>
</table>

Sequence 1 evaluates the left subtree first; when \(c - (d + e)\), which needs 2 registers, must be evaluated, no register is available, so the result of \(d + e\) is stored in a temporary \(t_1\). Sequence 2 evaluates \(c - (d + e)\) first and thereby saves one instruction.

The Algorithm

Principle: given tree \(t\) for expression \(e_1 \text{ op } e_2\), where \(t_1\) needs \(r_1\) registers and \(t_2\) needs \(r_2\) registers:

- \(r \geq r_1 > r_2\): after evaluation of \(t_1\), \(r_1 - 1\) registers are freed and one holds the result; \(t_2\) then has enough registers to evaluate, hence \(t\) can be evaluated in \(r_1\) registers,
- \(r_1 = r_2\): \(t\) needs \(r_1 + 1\) registers to evaluate,
- \(r_1 > r\) or \(r_2 > r\): a spill to a temporary is required.
Labeling Phase
- Labels each node with its register need,
- Bottom-up pass,
- Left leaves are labeled '1': they have to be loaded into a register,
- Right leaves are labeled '0': they are used as memory operands,
- Inner nodes:
\[
\text{regneed}(\text{op}(t_1, t_2)) =
\begin{cases}
\max(r_1, r_2), & \text{if } r_1 \neq r_2 \\
r_1 + 1, & \text{if } r_1 = r_2
\end{cases}
\]
where \( r_1 = \text{regneed}(t_1) \), \( r_2 = \text{regneed}(t_2) \).

Example

The example tree, with register needs in parentheses:
```
          :=
         /  \
        f    - (2)
            /     \
       + (1)       - (2)
      /    \      /     \
   a (1)  b (0) c (1)    + (1)
                        /    \
                     d (1)  e (0)
```

Generation Phase

Principle:
- Generates instruction **Op** for operator *op* in *op*(t₁, t₂) after generating code for t₁ and t₂,
- The order of t₁ and t₂ depends on their register needs,
- The generated **Op** instruction finds the value of t₁ in a register,
- *RSTACK* holds the available registers, initially all registers:
  - before processing *t*: top(RSTACK) is determined as the result register for *t*,
  - after processing *t*: all registers are available again, but top(RSTACK) is the result register for *t*,
- *TSTACK* holds the available temporaries.

Algorithm Gen_Opt_Code

```
var RSTACK : stack of register;
var TSTACK : stack of address;

proc Gen_Code(t : tree);
  var R : register; T : address;
  case t of
    (leaf a, 1):                         (* left leaf *)
      emit(top(RSTACK) := a);

    op((t1, r1), (leaf a, 0)):           (* right leaf *)
      Gen_Code(t1);
      emit(top(RSTACK) := top(RSTACK) Op a);

    op((t1, r1), (t2, r2)):
      case
        r1 < min(r2, r):                 (* evaluate t2 first *)
          exchange(RSTACK);
          Gen_Code(t2);
          R := pop(RSTACK);
          Gen_Code(t1);
          emit(top(RSTACK) := top(RSTACK) Op R);
          push(RSTACK, R);
          exchange(RSTACK);

        r1 >= r2 and r2 < r:             (* evaluate t1 first *)
          Gen_Code(t1);
          R := pop(RSTACK);
          Gen_Code(t2);
          emit(R := R Op top(RSTACK));
          push(RSTACK, R);

        r1 >= r and r2 >= r:             (* spill t2 into a temporary *)
          Gen_Code(t2);
          T := pop(TSTACK);
          emit(M[T] := top(RSTACK));
          Gen_Code(t1);
          emit(top(RSTACK) := top(RSTACK) Op M[T]);
          push(TSTACK, T);
      endcases
  endcase
endproc
```
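For concreteness, here is a hypothetical Python sketch of both phases for this machine model; the class names, register naming, and emitted instruction syntax are mine, not the chapter's.

```python
# Hypothetical sketch of the labeling and generation phases above
# for the simple two-address machine model.

class Leaf:
    def __init__(self, name):
        self.name = name

class Op:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def label(t, is_left=True):
    """Bottom-up labeling: annotate each node with its register need."""
    if isinstance(t, Leaf):
        t.need = 1 if is_left else 0        # left leaves must be loaded
        return t.need
    r1, r2 = label(t.left, True), label(t.right, False)
    t.need = max(r1, r2) if r1 != r2 else r1 + 1
    return t.need

def gen(t, rstack, tstack, code):
    """Generation phase; the top of each stack is the last list element."""
    if isinstance(t, Leaf):                                  # left leaf
        code.append(f"{rstack[-1]} := M[{t.name}]")
    elif isinstance(t.right, Leaf):                          # right leaf
        gen(t.left, rstack, tstack, code)
        code.append(f"{rstack[-1]} := {rstack[-1]} {t.op} M[{t.right.name}]")
    elif t.left.need < min(t.right.need, len(rstack)):       # evaluate t2 first
        rstack[-1], rstack[-2] = rstack[-2], rstack[-1]      # exchange
        gen(t.right, rstack, tstack, code)
        r = rstack.pop()
        gen(t.left, rstack, tstack, code)
        code.append(f"{rstack[-1]} := {rstack[-1]} {t.op} {r}")
        rstack.append(r)
        rstack[-1], rstack[-2] = rstack[-2], rstack[-1]      # exchange back
    elif t.right.need < len(rstack):                         # evaluate t1 first
        gen(t.left, rstack, tstack, code)
        r = rstack.pop()
        gen(t.right, rstack, tstack, code)
        code.append(f"{r} := {r} {t.op} {rstack[-1]}")
        rstack.append(r)
    else:                                                    # spill t2
        gen(t.right, rstack, tstack, code)
        tmp = tstack.pop()
        code.append(f"M[{tmp}] := {rstack[-1]}")
        gen(t.left, rstack, tstack, code)
        code.append(f"{rstack[-1]} := {rstack[-1]} {t.op} M[{tmp}]")
        tstack.append(tmp)

# f := (a + b) - (c - (d + e)) with two registers:
tree = Op('-', Op('+', Leaf('a'), Leaf('b')),
               Op('-', Leaf('c'), Op('+', Leaf('d'), Leaf('e'))))
label(tree)
code = []
gen(tree, ['R1', 'R0'], ['t1'], code)
code.append("M[f] := R0")
print("\n".join(code))   # an 8-instruction sequence like Sequence 2 above
```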
Register Allocation and Instruction Selection by Dynamic Programming
- A more complex architecture:
  - \( r \) general-purpose registers \( R_0, \ldots, R_{r-1} \),
  - Instruction formats:
\[
\begin{align*}
R_i &:= e && \text{Compute} \\
R_i &:= M[V] && \text{Load} \\
M[V] &:= R_i && \text{Store}
\end{align*}
\]
  - \( e \) a term over registers and memory cells; costs are associated with each instruction.
- Goal: generate the cheapest instruction sequence using no more than \( r \) registers.
- Assume contiguous computation of subtrees \(\implies\) only one register is required to hold the result.
- Use instruction selection techniques with tree parsing; compute the cheapest derivation.

Canonical recursive solution
- Assume \( e \) of instruction \( R_i := e \) matches tree \( t \),
- Some subtrees of \( t \) correspond to memory operands of \( e \): these are computed into memory, and no registers remain occupied after that,
- Let \( e \) have \( k \) register operands: compute the corresponding subtrees \( t_1, t_2, \ldots, t_k \) into these registers,
- Assume an order \( i_1, i_2, \ldots, i_k \) of computation and \( j \) available registers,
- \( t_{i_1} \) has \( j \) registers available, \( t_{i_2} \) has \( j - 1 \), \(\ldots\), \( t_{i_k} \) has \( j - k + 1 \) available,
- If this fits for all subtrees (\( j - k + 1 - \text{regneed}(t_{i_k}) \geq 0 \)), add the minimal costs for computing all subtrees in this way to the costs of \( e \) to yield the minimal costs for this combination,
- If not enough registers are available, compute enough subtrees into memory and sum up the costs as above.

Canonical recursive solution (cont'd)

Doing this for all potential combinations recomputes the costs for subtrees \(\Rightarrow\) exponential complexity.

Dynamic Programming
- Convert the top-down algorithm into a bottom-up algorithm tabulating partial solutions,
- Associate a cost vector \( C[0..r] \) with each node \( n \): \( C[0] \) gives the cheapest cost for computing \( t/n \) into a temporary, and \( C[i] \) the cheapest cost for computing \( t/n \) into a register using \( i \) registers,
- Compute the cost vector at \( n \) by minimizing over all "legal" combinations of one applicable instruction and the cost vectors of the nodes "under" the non-terminal nodes in the applied rule,
- What is a legal combination for \( C[j], j > 0 \)? A combination of generated code for subtrees needing \( \leq j \) registers,
- Extract the cheapest instruction sequence in a second pass.

Global Register Allocation

So far, register allocation for assignments. Now, register allocation across whole procedures/programs.

Tasks of the register allocator:
1. determine the candidates, i.e., variables and intermediate results, called symbolic registers, to keep in real registers, and determine their "life spans",
2. assign symbolic registers without "collisions" to real registers using some optimality criterion,
3. modify the code to implement the decisions.

Constraint for the assignment:
- Two symbolic registers collide if their contents are "live" at the same time,
- Colliding symbolic registers cannot be allocated to the same real register.

Definitions
- A **definition** of a symbolic register is the computation of an intermediate result or the modification of a variable,
- A **use** of a symbolic register is a reading access to the corresponding variable or a use of the intermediate value.

Note: uses of symbolic registers in an individual computation step, e.g. the execution of an instruction or of an assignment, **precede** definitions of symbolic registers.

- A **definition path** for \( s \) to program point \( p \) is a path from the entry point of the program to \( p \) containing a definition of \( s \),
- A **use path** from \( p \) is a definition-free path starting at \( p \) and containing a use of \( s \),
- A symbolic register \( s \) is **live** at program point \( p \) if there exists a definition path to \( p \) and a use path from \( p \).

The **life span** of \( s \) is the set of all program points at which \( s \) is live. The value of a live symbolic register may still be used.
Two life spans of symbolic registers **collide** if one of the registers is set within the life span of the other.

(Figure: a life span for variable \( X \).)

Computation of life ranges

Needs du (definition-use) chains. A du (definition-use) chain connects a definition of a variable to all the associated uses, i.e., uses that a value set at the definition may flow to. Two du chains are use-connected iff they share a use. One could say, shared uses were vel-defined\(^1\). A life range of a variable is the connected component of all use-connected du chains of that variable.

\(^1\)Thanks to Raimund Seidel

Register Interference Graph
- nodes: life spans,
- edges: between colliding life spans.

This allows one to view the register-allocation problem as a graph coloring problem:
- \( k \) physical registers are available,
- solve the \( k \)-coloring problem,
- NP-complete for \( k > 2 \),
- use heuristics.

**Build** constructs the register interference graph \( G \).
**Reduce** initializes an empty stack, then repeatedly removes locally colorable nodes and pushes them onto the stack. Continue at **Assign Colours** if the empty graph is reached (\( G \) is \( k \)-colorable); continue at **Spill** if locally uncolorable nodes remain in the graph.

Algorithm cont'd

**Assign Colours** pops nodes from the stack, reinserts them into the graph, and assigns each a color not assigned to any neighbour.
**Spill** uses heuristics to select one node (variable) to spill to memory, inserts a **load** before each use of the variable and a **store** after each definition, then continues with **Build**.

The classical method by Chaitin uses \( \text{degree}(n) < k \) as the local-colorability criterion. It means that \( n \) and its neighbours can be colored with different colors.

Properties
- **Assign Colours** pops nodes off the stack in the reverse of the order in which **Reduce** pushed them onto the stack.
- The criterion \( \text{degree}(n) < k \) holding when \( n \) was pushed guarantees colorability.
- **Termination:** **Reduce** repeatedly removes nodes from the finite set of nodes; each cycle through **Spill** reduces the graph by one node.

Heuristics for node removal:
1. degree of the node: a high degree causes many deletions of edges,
2. costs of spilling.
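A hypothetical Python sketch of the Reduce/Assign Colours loop with Chaitin's criterion follows; the interference graph used here is one I derived from the life spans of the example program below, and Spill is left as a stub.

```python
# Hypothetical sketch of Reduce / Assign Colours with Chaitin's
# degree(n) < k criterion; Spill is stubbed out.

def color(graph, k):
    """graph: dict mapping each node to its set of neighbours."""
    g = {n: set(ns) for n, ns in graph.items()}   # working copy (Build's result)
    stack = []
    while g:
        candidates = [n for n in g if len(g[n]) < k]   # locally colorable
        if not candidates:
            raise RuntimeError("locally uncolorable: Spill would pick a node")
        n = candidates[0]
        stack.append((n, g.pop(n)))               # Reduce: remove and remember
        for neighbours in g.values():
            neighbours.discard(n)
    colors = {}
    while stack:                                  # Assign Colours, reverse order
        n, neighbours = stack.pop()
        used = {colors[m] for m in neighbours if m in colors}
        colors[n] = min(c for c in range(k) if c not in used)
    return colors

# Interference graph for the example program below, with edges derived
# from the life spans (e.g. s3 is set while s1 and s2 are live):
g = {"s1": {"s2", "s3", "s4"}, "s2": {"s1", "s3", "s4"}, "s3": {"s1", "s2"},
     "s4": {"s1", "s2", "s5", "s6"}, "s5": {"s4", "s6"}, "s6": {"s4", "s5"}}
print(color(g, 3))   # a valid 3-coloring, i.e. an allocation to r1..r3
```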
Example

<table>
<thead>
<tr> <th>Input program</th> <th>Symbolic Reg. Assign.</th> <th>After Register Allocation</th> </tr>
</thead>
<tbody>
<tr> <td>x := 1</td> <td>s1 := 1</td> <td>r1 := 1</td> </tr>
<tr> <td>y := 2</td> <td>s2 := 2</td> <td>r2 := 2</td> </tr>
<tr> <td>w := x + y</td> <td>s3 := s1 + s2</td> <td>r3 := r1 + r2</td> </tr>
<tr> <td>u := y + 2</td> <td>s4 := s2 + 2</td> <td>r3 := r2 + 2</td> </tr>
<tr> <td>z := x * y</td> <td>s5 := s1 * s2</td> <td>r1 := r1 * r2</td> </tr>
<tr> <td>x := u + z</td> <td>s6 := s4 + s5</td> <td>r2 := r3 + r1</td> </tr>
<tr> <td>print x, z, u</td> <td>print s6, s5, s4</td> <td>print r2, r1, r3</td> </tr>
</tbody>
</table>

(Figure: register interference graph over the symbolic registers s1–s6.)

Problems

Architectural irregularities:
- not every physical register can be allocated to every symbolic register,
- some symbolic registers need combinations of physical registers, e.g. pairs of aligned registers.

Dedication: some registers are dedicated to special purposes, e.g. the transfer of arguments.

Extensions

Remember: an edge in the interference graph means that the connected objects cannot be allocated to the same physical register. Assume that physical register \( r \) cannot be allocated to symbolic register \( s \).

Solution: add nodes for physical registers to the interference graph; connect \( r \) with \( s \).

Disadvantage: the graph now describes both program-specific constraints (\( s_1 \) and \( s_2 \) live at the same time) and architecture-specific constraints (fixed-point operands should not be allocated to floating-point registers).

Separating Architectural and Program Constraints

**Machine description:**
- **Regs**: register names,
- **Conflict**: a relation on Regs, \((r_1, r_2) \in \text{Conflict} \iff r_1\) and \(r_2\) cannot be allocated simultaneously. Example: registers and the register pairs containing them.
- **Class**: subsets of registers
  - required as operands of instructions, or
  - dedicated to special purposes of the run-time system.

**Constraints on allocation** (the connection between symbolic and physical registers):
- association of register classes with symbolic registers,
- conjunction of constraints \(\implies\) the intersection of register classes is a new register class.

Generalized Interference Graph

The interference graph is extended by associating register classes with symbolic registers. An assignment for \( S \subseteq \text{SymbRegs} \) is a mapping \( A : S \to \text{Regs} \) such that \( A(s) \in \text{class}(s) \) for all \( s \in S \).

New local colorability criterion: \( s \in S \subseteq \text{SymbRegs} \) is locally colorable iff for all assignments \( A \) of the neighbours of \( s \) there exists a register \( r \in \text{class}(s) \) that does not conflict with the assignment of any neighbour.

Coloring the Generalized Interference Graph

(Figure: register classes with conflicts and the generalized interference graph; \( s_1 \) and \( s_2 \) are locally colorable, \( s_3 \) is not, even though the old local-colorability criterion is satisfied: \(\text{degree} = 2\) for all three symbolic registers.)

Efficient Approximative Test for Local Colorability

Let \( A \), \( B \) be two register classes.
\[
\text{maxConflict}_A(B) = \max_{a \in A} \left| \{ b \in B \mid (a, b) \in \text{Conflict} \} \right|
\]
is the maximal number of registers in \( B \) that a single register in \( A \) can conflict with.

**Approximative colorability test** for \( s \) with \( \text{class}(s) = B \):
\[
\sum_{(s, s') \in E,\ \text{class}(s') = A} \text{maxConflict}_A(B) < |B|
\]

Precompute \( \text{maxConflict}_A(B) \) for all \( A \) and \( B \); it depends only on the architecture!
Example

(Figure: example register classes \( A \) and \( B \) with their conflicts.)

Tabulating \( \text{maxConflict}_A(B) \); rows give the class of the single register, columns the class it conflicts into:

<table>
<thead>
<tr> <th></th> <th>\( A \)</th> <th>\( B \)</th> </tr>
</thead>
<tbody>
<tr> <td>\( A \)</td> <td>1</td> <td>1</td> </tr>
<tr> <td>\( B \)</td> <td>2</td> <td>1</td> </tr>
</tbody>
</table>
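As a hypothetical sketch, the precomputation and the approximative test could look as follows in Python; the toy architecture (two single registers and one register pair) is an assumption chosen to reproduce the tabulated values above.

```python
# Hypothetical sketch: precompute maxConflict_A(B) from the architecture's
# Conflict relation, then apply the approximative colorability test.

def max_conflict(conflict, A, B):
    """Max number of registers in class B that one register of class A
    conflicts with; this depends only on the architecture."""
    return max(sum(1 for b in B if (a, b) in conflict) for a in A)

# Toy architecture: singles r1, r2 and the pair p12 containing both.
singles, pairs = {"r1", "r2"}, {"p12"}
conflict = {("r1", "r1"), ("r2", "r2"), ("p12", "p12"),
            ("r1", "p12"), ("p12", "r1"), ("r2", "p12"), ("p12", "r2")}

classes = {"A": singles, "B": pairs}
table = {(x, y): max_conflict(conflict, classes[x], classes[y])
         for x in classes for y in classes}
print(table)  # {('A','A'): 1, ('A','B'): 1, ('B','A'): 2, ('B','B'): 1}

def locally_colorable(s_class, neighbour_classes, table):
    """Approximative test: sum of maxConflict over neighbours < |class(s)|."""
    bound = sum(table[(a, s_class)] for a in neighbour_classes)
    return bound < len(classes[s_class])
```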
{"Source-Url": "http://www.rw.cdl.uni-saarland.de/teaching/cc07/slides/l12_code_generation.pdf", "len_cl100k_base": 6500, "olmocr-version": "0.1.49", "pdf-total-pages": 49, "total-fallback-pages": 0, "total-input-tokens": 99044, "total-output-tokens": 8246, "length": "2e12", "weborganizer": {"__label__adult": 0.0003421306610107422, "__label__art_design": 0.0005283355712890625, "__label__crime_law": 0.0003161430358886719, "__label__education_jobs": 0.0005221366882324219, "__label__entertainment": 5.0961971282958984e-05, "__label__fashion_beauty": 0.0001798868179321289, "__label__finance_business": 0.00022125244140625, "__label__food_dining": 0.00038242340087890625, "__label__games": 0.0006799697875976562, "__label__hardware": 0.003925323486328125, "__label__health": 0.0004169940948486328, "__label__history": 0.00028967857360839844, "__label__home_hobbies": 0.0001881122589111328, "__label__industrial": 0.0008172988891601562, "__label__literature": 0.0001704692840576172, "__label__politics": 0.00024247169494628904, "__label__religion": 0.000545501708984375, "__label__science_tech": 0.02978515625, "__label__social_life": 6.014108657836914e-05, "__label__software": 0.0048065185546875, "__label__software_dev": 0.9541015625, "__label__sports_fitness": 0.00045418739318847656, "__label__transportation": 0.0006709098815917969, "__label__travel": 0.00024211406707763672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20216, 0.02032]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20216, 0.28441]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20216, 0.68104]], "google_gemma-3-12b-it_contains_pii": [[0, 185, false], [185, 578, null], [578, 772, null], [772, 1442, null], [1442, 1830, null], [1830, 2158, null], [2158, 2825, null], [2825, 3010, null], [3010, 3176, null], [3176, 3710, null], [3710, 3910, null], [3910, 4297, null], [4297, 4837, null], [4837, 5005, null], [5005, 5739, null], [5739, 6175, null], [6175, 6620, null], [6620, 6787, null], [6787, 7466, null], [7466, 8182, null], [8182, 8643, null], [8643, 8985, null], [8985, 9400, null], [9400, 10131, null], [10131, 10934, null], [10934, 11084, null], [11084, 11746, null], [11746, 12403, null], [12403, 13176, null], [13176, 13464, null], [13464, 13917, null], [13917, 14194, null], [14194, 14506, null], [14506, 15015, null], [15015, 15354, null], [15354, 15472, null], [15472, 15565, null], [15565, 15920, null], [15920, 16217, null], [16217, 16557, null], [16557, 17224, null], [17224, 17532, null], [17532, 18082, null], [18082, 18760, null], [18760, 19215, null], [19215, 19473, null], [19473, 19978, null], [19978, 20059, null], [20059, 20216, null]], "google_gemma-3-12b-it_is_public_document": [[0, 185, true], [185, 578, null], [578, 772, null], [772, 1442, null], [1442, 1830, null], [1830, 2158, null], [2158, 2825, null], [2825, 3010, null], [3010, 3176, null], [3176, 3710, null], [3710, 3910, null], [3910, 4297, null], [4297, 4837, null], [4837, 5005, null], [5005, 5739, null], [5739, 6175, null], [6175, 6620, null], [6620, 6787, null], [6787, 7466, null], [7466, 8182, null], [8182, 8643, null], [8643, 8985, null], [8985, 9400, null], [9400, 10131, null], [10131, 10934, null], [10934, 11084, null], [11084, 11746, null], [11746, 12403, null], [12403, 13176, null], [13176, 13464, null], [13464, 13917, null], [13917, 14194, null], [14194, 14506, null], [14506, 15015, null], [15015, 15354, null], [15354, 15472, 
null], [15472, 15565, null], [15565, 15920, null], [15920, 16217, null], [16217, 16557, null], [16557, 17224, null], [17224, 17532, null], [17532, 18082, null], [18082, 18760, null], [18760, 19215, null], [19215, 19473, null], [19473, 19978, null], [19978, 20059, null], [20059, 20216, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20216, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20216, null]], "pdf_page_numbers": [[0, 185, 1], [185, 578, 2], [578, 772, 3], [772, 1442, 4], [1442, 1830, 5], [1830, 2158, 6], [2158, 2825, 7], [2825, 3010, 8], [3010, 3176, 9], [3176, 3710, 10], [3710, 3910, 11], [3910, 4297, 12], [4297, 4837, 13], [4837, 5005, 14], [5005, 5739, 15], [5739, 6175, 16], [6175, 6620, 17], [6620, 6787, 18], [6787, 7466, 19], [7466, 8182, 20], [8182, 8643, 21], [8643, 8985, 22], [8985, 9400, 23], [9400, 10131, 24], [10131, 10934, 25], [10934, 11084, 26], [11084, 11746, 27], [11746, 12403, 28], [12403, 13176, 29], [13176, 13464, 30], [13464, 13917, 31], [13917, 14194, 32], [14194, 14506, 33], [14506, 15015, 34], [15015, 15354, 35], [15354, 15472, 36], [15472, 15565, 37], [15565, 15920, 38], [15920, 16217, 39], [16217, 16557, 40], [16557, 17224, 41], [17224, 17532, 42], [17532, 18082, 43], [18082, 18760, 44], [18760, 19215, 45], [19215, 19473, 46], [19473, 19978, 47], [19978, 20059, 48], [20059, 20216, 49]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20216, 0.07415]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
8e7be8e99d73b6263e1cc4f7d755628d186b7232
A Structure Editor for VHDL

Matthew P. Phillips, Defence Science and Technology Organisation, Adelaide, South Australia.
Peter J. Ashenden, Department of Computer Science, University of Adelaide, Adelaide, South Australia.

Abstract

While VHDL is a powerful tool for the design and verification of digital electronics, this power is accompanied by great language complexity. This complexity can only be reduced by the use of intelligent tools capable of watching the design as it evolves, catching errors and speeding input. This paper describes the development of such a tool, a structure editor for VHDL.

1. Introduction

Hardware description languages (HDLs) have become an invaluable tool for the design and testing of electronic hardware systems. HDLs have many advantages over the more traditional schematic design technique but, most importantly, an HDL incorporates the behaviour of the hardware along with its structure. HDL models are also useful in that they allow the designer to verify a high-level behavioural description of the hardware and then gradually design its structural analogue. Furthermore, an HDL design is generally far easier for others to comprehend.

However, there is a price to be paid when using an HDL, in that the engineer must come to terms with a new and complex tool. To give the most flexibility, HDLs often require many syntactic constructs and complex semantic rules to regulate their combination. The designer must also face the task of programming and debugging the design, two new phases of development that require time and experience. Furthermore, the engineer may want more than just a view of the HDL text. It may be necessary to view the end-product structural design as a schematic outline and to have the hardware simulation displayed within this framework. Thus what is needed is more than just a standard text-editing system, but an integrated HDL programming environment, with an editor that can assist the designer in writing and debugging the design, and the ability for the HDL code to be viewed and debugged in different ways.

1.1 The project

The goal of this project was to produce the first stage of an integrated programming environment for the VHSIC Hardware Description Language (VHDL) [10, 17, 18]: a VHDL-specific editor. The editor's task is to speed the design of VHDL models both by accelerating code entry and formatting and by detecting syntactic errors as the model is created. In order for this to occur, the editor must have a knowledge of VHDL syntax and formatting conventions. It must also be able to use this knowledge to provide intelligent editing commands and context-sensitive template creation for the main VHDL syntactic constructs. Furthermore, it must be able to share its output with other applications and preferably be able to operate in tandem with them. This sort of language-specific editor is often called a structure editor or syntax-directed editor. Therefore the goal of the project is to create a structure editor for VHDL that can be integrated with other design entry tools and with other applications.

In the remainder of this section, a survey of structure editor technology is presented. Section 2 discusses the editor environment (MultiView), while Section 3 discusses the editor implementation itself. Section 4 presents a conclusion to this project and some ideas for further work.

1.2 Structure editors

The most widely used programming editors today are purely text-oriented.
Such text editors generally make no use of the structured nature of the programming languages for which they are used. Some text editors, such as Emacs [14], allow extensions that can provide a degree of language-specific editing, but these extensions tend to be based on ad hoc pattern matching rules and can be unreliable. They also provide no automatic syntax checking, and indeed may go completely awry when presented with incorrect syntax.

A structure editor generally parses the program code as it is entered and stores the result as an abstract syntax tree (AST). The editor then allows the user to edit this tree within the syntactic constraints of the programming language. It may also allow the user to edit the program as text if required. The editor's internal AST representation of programs raises two issues that are not present in textual editors: the editing and the display of an abstract syntax tree.

1.2.1 Editing

A pure structure editor only manipulates ASTs via tree create, modify and delete operations that are allowable within the syntax of the language. This sort of editor faces the challenge of providing the user with an easy way of controlling such tree operations. The most common way for the user to edit ASTs in such an editor is by selection and expansion, which is a process of successively deriving non-terminals in the AST until the desired result is achieved. This is a procedure resembling, in some ways, stepwise refinement. However, as Allison points out, this is not an intuitive way for most people to enter programs [2]. In fact, while people tend to think of programs as text with underlying structure, their editing preference seems to be text-oriented. The selection/expansion approach has three main disadvantages:
- it is less straightforward than text entry,
- it requires more steps than simply typing the constructs,
- it forces the programmer to know the formal syntax of the language in order to know which branches of the syntax tree to expand.

Another problem with a strictly tree-oriented approach is that it can make it more difficult to modify existing code. For example, the user may wish to make the change to a loop construct illustrated in Figure 1. In a text editor, the user would simply make the changes, with the program passing through some inconsistent state in the process. This sort of inconsistency cannot happen in a strictly tree-based editor because there is no way to represent the intermediate state as an AST. A partial solution is to provide tools that make these sorts of changes automatically. If well designed, such tools might even speed up some changes, but it does not seem possible to foresee and provide for all the editing operations a programmer might need.

Another solution is to provide the ability to cut and paste subtrees to and from a number of "clipboards". In this sort of editor, to make the changes shown in Figure 1, the user would cut the <expr> and <statements> subtrees into two separate clipboards and then delete the entire loop subtree. Then the user would create an appropriate new loop template and paste the old subtrees over the placeholders. This is adequate for this example; however, for more complex changes this approach still requires considerably more effort than a few text cut-and-paste operations. In practice, a simple and effective solution is to provide the ability to edit selected portions of the program as text and then reintegrate the changes into the tree when the construct is fully entered.
Editors using this approach are termed hybrid structure editors and are by far the most common. Examples of successful hybrid structure editors include the Cornell Program Synthesizer [15] and CAPS [19]. An interesting alternative approach to hybrid editing is to provide intelligent subtree search and replace, as in SED [2].

1.2.2 Display

A structure editor must have some means of showing its internal AST representation of the program in textual form. Unfortunately, the process of parsing usually means that the original textual layout of the program is lost. The layout can be recreated automatically by traversing the tree and using unparsing rules to govern how given subtrees are displayed as text. This has the advantage of providing a consistent layout for any code processed with the editor. Most structure editors provide the user with customisable unparsing, and some also provide ways to elide constructs below a certain level. This provides a useful way of seeing the overall structure of the code in much the same way as might be shown with an outlining tool. Most structure editors can also display handles for required or optional constructs and allow the user to quickly select and expand them.

```
while <expr> loop
    <statements>
end loop;
```

```
loop
    <statements>
    exit when not <expr>;
end loop;
```

Figure 1. Converting between two loop structures

1.2.3 Structure editor survey

A brief survey of structure editors is presented in this section, outlining various features and approaches that have been tried over the past 20 years. One of the most well-known, and powerful, structure editing environments is the Cornell Program Synthesizer (CPS) [15]. This environment provides both structured and textual editing at user-defined levels. It also provides the ability to catch static semantic errors at the time of entry and provides an integral interpreter and debugging system. The Synthesizer Generator [13] is a related development designed to be easily customised for different languages.

Emacs [14] and Z [20] both provide pseudo-structured editing via suites of language-specific extensions to their text-editing modes. Although this type of "structured" editing is widely used, it has a number of drawbacks, including the inability to perform syntax checking.

There exist many editors with specific enhancements to the usual structure editing features. The Pascal-Oriented Editor (POE) [4] provides an error-correcting parser that can dynamically detect and correct entry errors. The Display-Oriented Structure Editor (DOSE) [6] allows changes to its language specifications to be made on the fly and can even be used to edit its own language specification language. Finally, SED [2] is a pure structure editor which allows trees to be modified using a powerful tree match-and-replace system.

There also exist integrated programming environments, such as MultiView (see section 2.1) and PECAN [12], which provide structured editing in tandem with cooperative tools such as debuggers, declaration editors, flow-chart views, etc.

2. Implementation environment

The initial implementation plan for the project was to build the VHDL structure editor from scratch, using a commercial package to handle the storage of VHDL modules in a central database. The editor was to have been written in C++ and use the Motif toolkit for the user interface. Many ideas for the editing interface were discussed and research into existing structure editors was undertaken.
It was decided that the goals for the project could be achieved faster by building a VHDL version of the MultiView Integrated Software Engineering Environment [3, 7, 8, 11]. MultiView would provide the integrated environment, while the structure editor would be a VHDL TextView.

2.1 MultiView

MultiView is a language editing environment designed to support multiple views of a program. The MultiView system provides an integrated environment where program units may be viewed and edited concurrently in different ways. For example, program text may be entered within a textual view and its overall structure viewed graphically within another view. MultiView is designed to be language-nonspecific, and a number of tools are provided to accelerate the process of specialising MultiView for different languages.

A MultiView system consists of a database, a database server and one or more views. The relationship between these components is shown in Figure 2. The server and views are implemented as separate UNIX processes and communicate via a message-passing interface. The messages are structured using a Protocol Specification Language (PSL), which is a language designed to easily define the communication structure between a view and the database server. Most views retain a cached copy of the unit they operate on to reduce the load on the database server and to accelerate editing operations. A sophisticated caching control system is provided by the database server to ensure cache coherency. MultiView has recently undergone a major revision of its communication subsystem [9], and a number of new tools have been created to allow easier implementation of new views and adaptation to different languages.

Program units are parsed by the MultiView system and stored in abstract syntax tree (AST) form within the database. The units can then be viewed or edited by one or more views, which interact with the database server to store and retrieve units from the database.

Figure 2. MultiView organisation

An important feature of MultiView is that more than one view of a given unit may be used at one time. The database server ensures that changes made in one view are reflected in other open views and that the units remain mutually consistent. A version of MultiView specialised for VHDL provides an ideal way for a VHDL structure editor to combine with other views and debugging aids. A designer may build a model within TextView and then switch to viewing and editing in a schematic view. Whichever view the designer finds easiest for a particular operation can be used without leaving the environment and without adversely affecting other open views.

2.1.1 The advantage for this project

From its inception, the aim of this project was to produce a VHDL structure editor that can operate in tandem with other tools. In particular, the editor would be complemented, and greatly enhanced, by a VHDL schematic view. Such a view would display the components of a VHDL model and their connections to each other in the graphical network format familiar to most engineers. This sort of view and its merits are discussed more fully in section 4.1.3. The advantage of the MultiView system is that it allows the seamless integration of later views with already existing ones. It also allows views to be extended to display dynamically changing data, or to integrate data from another source within a view. For instance, an extended TextView might interface to a VHDL simulation environment to allow the user to dynamically display and edit model variables from within TextView. Similarly, a schematic view may be extended to display a model's simulation graphically.
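As a purely hypothetical illustration of the update pattern just described (MultiView itself consists of UNIX processes exchanging PSL messages, not Python objects), the server-mediated cache refresh might look like this:

```python
# Hypothetical sketch: views keep cached copies of units; the server
# propagates each committed change so all open views stay consistent.

class Server:
    def __init__(self):
        self.units, self.views = {}, []

    def register(self, view):
        self.views.append(view)

    def commit(self, unit_name, new_ast, origin):
        self.units[unit_name] = new_ast
        for view in self.views:          # refresh every other view's cache
            if view is not origin:
                view.refresh(unit_name, new_ast)

class View:
    def __init__(self, name, server):
        self.name, self.server, self.cache = name, server, {}
        server.register(self)

    def edit(self, unit_name, new_ast):
        self.cache[unit_name] = new_ast
        self.server.commit(unit_name, new_ast, origin=self)

    def refresh(self, unit_name, new_ast):
        self.cache[unit_name] = new_ast
        print(f"{self.name}: refreshed {unit_name}")

server = Server()
text_view = View("TextView", server)
schematic_view = View("SchematicView", server)
text_view.edit("counter.vhd", {"op": "entity_decl"})
# prints: SchematicView: refreshed counter.vhd
```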
2.1.2 Abstract syntax

Abstract syntax trees are a tree representation of structured text. They provide a notation for programs that is independent of the actual concrete syntax used to textually represent the program. Each node in an AST corresponds to an operator within the language. Generally there is one operator for each abstract action permitted by the language. Each AST operator is defined to be of a particular sort or phylum. The sort of an operator is analogous to the type of a variable in a high-level programming language. Operators are logically grouped by their sort, and they may only appear as children of a tree node where the "slot" for that child has the same sort. The number of child nodes a node has is termed its arity; depending on the particular AST model, the node may have either fixed or variable arity. As a simple example, the expression "1 + 2 * 3" might be parsed to the AST shown in Figure 3. In this example each node in the AST shows the sort of the node at the top and the actual operator at the bottom. The leaf nodes have a literal value string rather than an operator.

Figure 3. Abstract syntax tree for a simple expression

2.2 TextView

TextView [3, 7, 11] is an X Windows-based textual view for the MultiView environment. It provides the following facilities:
- both structured and textual editing operations,
- automatic code formatting,
- immediate feedback on syntax correctness.

TextView is a hybrid structure editor. Text may be entered in the main text pane and then parsed at the user's command. Parsing errors are displayed in a window below the text editing pane; currently only syntax is checked. Program constructs may either be entered directly within the code pane, or inserted by selection from a list of currently valid constructs at the left of the code pane. The user can choose to show either the entire text of the program, or have some constructs shown as placeholders. The VHDL TextView main window is illustrated in Figure 4. The cursor appears as a black rectangle and the current subtree within which it lies is indicated by underlining the corresponding text. A list of valid constructs for that subtree is displayed to the left of the pane, where the user can readily access it. If the user clicks on one of these buttons, the current operator that the cursor lies on is replaced with the operator associated with the button. The "prepend" and "append" buttons at the top left of the screen allow the user to insert a new list element either at the beginning or the end of the current list.

When the user has finished editing a construct, a parse command is invoked. At this point any text that has been changed is sent to MultiView for parsing. If the parse is successful, the result is unparsed and displayed on the screen, giving the user immediate visual feedback. Any parse errors are displayed in the message pane at the bottom of the screen.
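To make the AST model of section 2.1.2 concrete, the following is a small hypothetical sketch (in Python, purely for illustration; MultiView itself is built from Ada code units). It encodes sorts, fixed arity, and sort-checked child slots, and builds the tree for "1 + 2 * 3":

```python
# Hypothetical sketch of a fixed-arity AST model: every operator has a
# sort and typed child slots; leaves carry a literal value instead.

OPERATORS = {
    # operator: (sort, child sorts)
    "add":    ("expr", ("expr", "expr")),
    "mul":    ("expr", ("expr", "expr")),
    "number": ("expr", ()),
}

class Node:
    def __init__(self, op, children=(), literal=None):
        sort, slots = OPERATORS[op]
        assert len(children) == len(slots), "fixed arity violated"
        for child, slot in zip(children, slots):
            assert OPERATORS[child.op][0] == slot, "sort mismatch"
        self.op, self.sort = op, sort
        self.children, self.literal = list(children), literal

# "1 + 2 * 3" as in Figure 3: add(number(1), mul(number(2), number(3)))
tree = Node("add", [Node("number", literal="1"),
                    Node("mul", [Node("number", literal="2"),
                                 Node("number", literal="3")])])
print(tree.sort)  # expr
```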
3. VHDL MultiView

In order to build a version of MultiView for a given language it is necessary to describe both the formal grammar of the language and how to build an AST for each production of that grammar. This description is done in a meta-language called the Language Specification Language (LSL), a compiler for which is provided by MultiView. The LSL compiler produces output suitable for automatic scanner and parser generators (Flex and Yacc) and a number of Ada code units that are later linked into the MultiView kernel. The first stage of the project was to produce a full LSL description of VHDL-93. The next stage was to ensure that the parser generated from this description correctly handled VHDL-93 syntax.

3.1 LSL

The LSL description of VHDL is a combination of the VHDL-93 grammar in pure Backus-Naur Form (BNF), plus an associated abstract syntax construction clause for each production. Pure BNF does not provide "optional" or "list" operators, which means the number of productions in BNF descriptions is greatly increased over the equivalent Extended Backus-Naur Form (EBNF). The LSL format combines a description of the associated abstract syntax form with each BNF production. An example of the LSL description for a VHDL assignment statement is given in Figure 5.

```lsl
sort statement;
end;

oper assign_stmt: name expr -> statement;

<assign_stmt> ::= <name> ':=' <expr> ';'
    { assign_stmt (<name>, <expr>) };
```

Figure 5. LSL description of an assignment statement

In this example, an AST sort named statement and an operator named assign_stmt have been declared. The operator assign_stmt has two child nodes, of sort name and expr, and is itself of sort statement. This example AST description instructs the MultiView parser, on a successful parse of the associated <assign_stmt> production, to build a subtree with root node assign_stmt and to make the first child the subtree generated by the parse of <name> and the second child the subtree generated by the parse of <expr>. As an example, the string "id := id + 1" when parsed with this scheme would produce the AST shown in Figure 6.

MultiView uses a fixed-arity AST model, which means that variable-length lists must be implemented with a right-recursive tree structure. This means that lists consist of a node with two children, the left of which is a list element, while the right child is either a list node itself or a null operator signalling the end of the list (see Figure 9 for an example).

3.2 LSL Implementation

The BNF used for the VHDL LSL implementation in this project was taken directly from the VHDL Language Reference Manual [18]. This was done in order to help the user of the editor to recognize the names of the productions, rather than to adhere to an unambiguous definition of VHDL. However, the BNF used was not designed to be used within the context of an automatic parser generator and contains many non-LR(0) productions. To some extent, such ambiguities are unavoidable since VHDL inherits much of Ada's notoriously difficult-to-parse syntax.

The automatic parser generator (Ayacc) that is used with the MultiView language compiler generates a shift-reduce LALR parser. This is effectively an LR(0) parser with some ability to disambiguate conflicts via lookahead. However, there are limitations to the LALR parser approach and, because of this, the initial LSL specification for VHDL had to undergo a long "debugging" stage for it to correctly parse VHDL-93 models. There is no standard validation suite for VHDL-93, so this debugging process involved passing various correct VHDL units through a test parser to test for syntax errors. When an error occurred, the behaviour of the parser was traced to discover at what point it took the wrong turn. When this was found, some way of removing the problem had to be discovered, as discussed later in this section.
A typical example of the sort of problem that arose is the statement

```
f(a, b, c) := 0;
```

The parser knows from the context in which this appears that it must be a statement, but it is unable to know what kind until it sees the := sign. The problem occurs when the parser reads the a and must make a choice to reduce it to a subprogram parameter or an array index. This is a problem because, at this point, the construct could be one of three things:
- an array assignment,
- an assignment to the array result of a function call,
- a procedure call.

If the parser chooses to reduce the a to a procedure call parameter, this will cause a syntax error to occur at the := operator later on in the parse. In a VHDL compiler, the solution would be to extend the lexer to look at a symbol table to find out what f refers to and represent this as a separate token type [1]. Unfortunately, within the tools provided by MultiView the only solution was to merge the BNF productions for subprograms and array references into a single production that accepts both. This results in a parser that could accept some VHDL syntax that is technically incorrect.

Figure 6. AST for an assignment statement

```
<discrete_range> ::= <range> | <subtype_indication>
<range> ::= <simple_expression> ('to' | 'downto') <simple_expression>
```

Figure 7. BNF for ambiguous array definition

<table>
<thead>
<tr> <th>Format</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>$ string $</td> <td>Print string using the keyword style.</td> </tr>
<tr> <td>' string '</td> <td>Print string using the normal text style.</td> </tr>
<tr> <td>% &lt;production&gt; %</td> <td>Show &lt;production&gt; as a required placeholder.</td> </tr>
<tr> <td>! &lt;production&gt;</td> <td>Show &lt;production&gt; as an optional placeholder.</td> </tr>
<tr> <td>/</td> <td>Begin a new line.</td> </tr>
<tr> <td>&gt;</td> <td>Increase the indent level one unit.</td> </tr>
<tr> <td>&lt;</td> <td>Decrease the indent level one unit.</td> </tr>
<tr> <td>#n</td> <td>Unparse child n.</td> </tr>
<tr> <td>?n[scheme1]{scheme2}</td> <td>If child n exists, unparse with scheme1, otherwise use scheme2.</td> </tr>
<tr> <td>@</td> <td>Display the literal value of the node.</td> </tr>
<tr> <td>\x</td> <td>Display x as a literal without interpreting it as an unparsing symbol.</td> </tr>
</tbody>
</table>

Figure 8. TextView unparsing operators

Another method of solving language ambiguities is to build a new tree of productions for use in the production where the ambiguity arises. This involves duplicating one or more productions appearing in the ambiguous production so that the ambiguous production has its own version that is not used in any other context. This effectively gives the parser more context information and allows it to make the correct decision. This approach was used to solve a problem in the <discrete_range> production used in array type declarations (illustrated in Figure 7). The problem is that discrete range declarations consisting of just a <subtype_indication> (an identifier) were reduced to a <simple_expression>, which then led to a syntax error when no "to" or "downto" was found after the identifier. The solution was to create a new <array_discrete_range> production that was identical to the original <discrete_range>, but using new <array_range> and <array_subtype_indication> productions, which were identical to the originals without the array prefix.
This local copy of the production tree allowed the parser to make the correct decision when it found an identifier in an <array_range>, based on the more restricted set of choices now offered.

3.3 TextView unparsing

As mentioned in section 2.1, views interacting with the MultiView database server only have access to units in AST form. TextView requires a textual format both for its on-screen display and to provide hybrid editing features. The process of turning an AST into a pretty-printed textual form is termed unparsing, and is achieved by associating formatting templates with each operator in the LSL specification for the language. A particular set of such templates is termed an unparsing scheme. TextView allows more than one scheme to be defined, and the current scheme may be changed by the user as necessary.

The operations available within unparsing schemes control both the text that is produced and how the text is laid out and printed. For instance, keywords may be in boldface, placeholders in italics and normal text in the base font. The operations available in an unparsing scheme are shown in Figure 8. Different styles can be defined for each of the $, ', ! and % operators. Layout is controlled by the <, > and / operators, which decrease/increase the indentation level and begin a new line, respectively. The @ operator is generally only used for the leaves of the AST to display actual identifiers, numbers, string literals, etc. The ? operator allows the unparser to select different outputs based on whether a child exists or not. This is often used for displaying optional children of an operator or to provide list separators.

As an example, consider the scheme used to display a VHDL assignment statement with no placeholders. The AST for this statement is shown in Figure 9. The unparser begins at the root of the tree and looks for an unparsing scheme for statements. This scheme might look like:

```
statements -> ?1[/ #1]{} #2
```

Remember that this is a list of statements defined recursively: child 1 is a statement while child 2 is of sort statements. Therefore this scheme results in each non-null statement being prefixed by a new line; if no statements exist, nothing is printed. In the example, the unparser sees that child 1 exists, and uses the first part of the unparsing scheme (the part in square brackets). This means that a new line is begun and child 1, an assign_stmt, is unparsed:

```
assign_stmt -> #1 ' := ' #2 ';'
```

Child 1 is an identifier, for which the literal value "X" is printed. Then a ":=" is displayed, followed by child 2, which is a simple_expr evaluating to "0". After unparsing assign_stmt's children, the unparser goes up one level and finds the end of statements, marked by stmts_null. At this point, the unparser has finished unparsing the root and terminates. The final output of this process is a new line followed by:

```
X := 0;
```

3.4 TextView editing

As mentioned in the discussion of structure editors in section 1.2, one of the main text-entry features of a structure editor is intelligent template insertion. Template operations within TextView are controlled by the current unparsing scheme. The required "canonical" scheme simply shows all non-null nodes in the AST in textual form. To allow the user to enter structures using a process of stepwise refinement, one or more unparsing schemes showing placeholders must be defined. Therefore it is not enough for the builder of a TextView for a new language to provide only a canonical view for units, since this would render one of the most powerful features of the editor effectively useless. Accordingly, as part of this project, a "placeholders" view that allows this form of "point-and-click" editing has been produced.
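To illustrate how such a scheme drives the display, here is a hypothetical, much-simplified Python sketch of a scheme interpreter covering only the #n and @ operators of Figure 8 (MultiView's actual unparser handles the full operator set and styled output):

```python
# Hypothetical sketch of scheme-driven unparsing for a tiny operator
# subset: "#n" recurses into child n, "@" prints the node's literal
# value, and anything else in a template is emitted as literal text.

class Node:
    def __init__(self, op, children=(), literal=None):
        self.op, self.children, self.literal = op, list(children), literal

SCHEMES = {
    "assign_stmt": ["#1", " := ", "#2", ";"],
    "identifier":  ["@"],
    "simple_expr": ["@"],
}

def unparse(node):
    out = []
    for token in SCHEMES[node.op]:
        if token.startswith("#"):
            out.append(unparse(node.children[int(token[1:]) - 1]))
        elif token == "@":
            out.append(node.literal)
        else:
            out.append(token)
    return "".join(out)

tree = Node("assign_stmt", [Node("identifier", literal="X"),
                            Node("simple_expr", literal="0")])
print(unparse(tree))   # -> X := 0;
```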
Therefore it is not enough for the builder of a TextView for a new language to provide only a canonical view for units, since this would render one of the most powerful features of the editor effectively useless. Accordingly, as part of this project, a "placeholders" view that allows this form of "point-and-click" editing has been produced. An example of a view with placeholders displayed is shown in Figure 10. The placeholders are printed as their production names in italics. Here all placeholders are optional: they do not need to be expanded in order for the unit to be parsed, and they would all disappear if the canonical scheme were selected.

In a view that displays placeholders, a placeholder, or handle, appears wherever an optional or required subtree can be placed and does not currently exist. The user may select the placeholder and view a list of valid transformations at the left-hand side of the screen. Clicking on a transformation will expand the selected placeholder to that construct. In order to provide useful and efficient editing for TextView, the designer must consider how the user would like to use the placeholder expansions and build the placeholder conventions accordingly. A user who wishes to personalise these schemes may do so by creating their own variant views.

As an example of how the user might edit a VHDL process statement using placeholder expansion, suppose the user wishes to add an if statement to the process construct shown in Figure 10. The user has already expanded the placeholders to get to `<sequential_statement>` and now clicks on the `seq_if` button to expand the production to the if-statement template shown in Figure 11. Now the user can fill in the <expression> and <sequence_of_statements> placeholders. The user may also add, for instance, an else clause by moving to the <opt_else> placeholder and then clicking on the opt_else button.

3.4.1 Creating placeholder schemes

Missing operators in the AST are represented by their completing operator, which usually has the same name with "_null" appended. In order to have a placeholder displayed for the missing construct, an unparsing scheme for its completing operator needs to be defined. For example, in the case of an if statement (see section 3.3), the unparsing scheme for its completing operator might be as shown in Figure 12. The <expression> placeholder is indicated as required by enclosing it in %s, while the other placeholders are all marked as optional by !s.

The view author needs to consider whether it might be easier to "skip" some connecting productions. For instance, the optional <sequence_of_statements> placeholder in the example above might be changed to the next production down in the tree, a <labelled_sequential_statement>, since it is unlikely the user wants to insert an if statement with no internal statements. By this sort of careful design, schemes can be optimised for VHDL.

4. Conclusion

The project has achieved its goal of providing a structure editor for VHDL designers. The editor can help designers enter and modify VHDL models, and provides assistance with handling VHDL's complex syntax. The editor also catches syntax errors as they are entered, reducing development time. Two unparsing views are available: the standard (canonical) view, which shows no placeholders, and a view that displays all possible placeholders. This latter view allows the user to enter code via a "point and click" expansion technique.
The decision to build a VHDL TextView under the MultiView environment has been vindicated, since the time saved in development of the basic structure editor technology was used to implement an editor that can accept the full VHDL-93 syntax and provides two useful, customisable views of the code. Given the time constraints, it is likely that, had the editor been built from scratch, it would have been necessary to settle for a subset of VHDL.

4.1 Further work

4.1.1 MiniView

Although the unparsing operators provided by TextView can be combined to produce a wide range of layouts, there are a few limitations that become apparent for a language like VHDL. One of these limitations is that there is no provision for handling long lists of items in an intelligent manner. In VHDL, long lists of parameters occur frequently (see Figure 13), and such lists only become harder to read if they extend off the screen or wrap at an awkward point. Since TextView does not allow the user to manually force new lines at appropriate points, some provision should be made for handling this automatically.

There are two problems to be solved in order to implement automatic line-wrapping in TextView. One is that, at this time, TextView is still undergoing extensive development, and has not been written to be easily read by others. The second problem is that the TextView unparsing description has no support for this sort of extension. This means that, at least initially, any parameters for automatic line-wrapping would have to be "hard-wired" or stored in a configuration file separate from the unparsing scheme.

It seems that the best solution to the first problem would be to incorporate the TextView unparsing unit into an experimental "MiniView" that simply displays the results of experimental unparsing schemes in an X Windows text pane. This would be feasible because the unparsing unit is connected into TextView via a narrow and simple interface: the unparsing unit takes an AST and an unparsing scheme as inputs and outputs a series of tokens which can then be used to generate the display. Once the unparsing unit is made to operate within MiniView, work can be done to extend the unparser and the results can be viewed within MiniView. The new unparsing engine could be incorporated back into TextView at a later date.

The list-handling abilities incorporated into MiniView would probably need to be customisable for each particular list type. In general, however, when automatic wrapping occurs, the user would probably like the next line to automatically indent so it aligns with the first element of the list. For instance, with this rule in place, the port map list in Figure 13 might be displayed as in Figure 14.

4.1.2 Unparsing schemes

As mentioned in section 3.4.1, two unparsing schemes currently exist for VHDL TextView: a canonical scheme, and a scheme that displays all possible placeholders, allowing the user to perform template insertion and replacement. The placeholders view achieves its purpose, but at the cost of introducing a large amount of clutter on the screen, most of which the user will not be interested in. This is because VHDL has a large number of optional constructs, many of which are not normally used (such as statement labels). Therefore, in order to be less confusing to the user, it would be helpful to create new placeholder schemes displaying only required placeholders and "useful" optional placeholders.
What defines a useful placeholder in a given context would need to be discussed with designers as they use the editor.

Another use for placeholder schemes is to provide elided views of designs. Elision is a way of removing low-level detail from code to provide an overall view of its structure, similar to the way an outline program is used to collapse and expand headings at different levels. An elision facility would be particularly useful in VHDL TextView, since models in VHDL can become very large, and high-level views can help the user "see the forest for the trees".

4.1.3 Schematic view

The VHDL structure editor built for this project was intended from the beginning to be one of a number of possible tools for VHDL designers. In particular, it is envisioned that a complementary schematic view for VHDL would be extremely useful. The schematic view would present a graphical representation of the connections between entities within a model. It might also show processes within entities and their interconnections via signals if this level of detail were required.

```
entity coeff_ram is
  port ( rd, wr : in bit; addr : in coeff_ram_address; d_in : in real; d_out : out real);
end entity coeff_ram;
```

Figure 13. Example of a long list in VHDL

```
entity coeff_ram is
  port ( rd, wr : in bit;
         addr : in coeff_ram_address;
         d_in : in real;
         d_out : out real);
end entity coeff_ram;
```

Figure 14. A long list with automatic wrapping

In Figure 15 a simple schematic of a CPU connected to a random access memory (RAM) unit is shown. The schematic view could be connected to a VHDL simulator so that it can display the results of a simulation in schematic form. Editing operations might include changing the way entities connect, adding and deleting entities, and changing the architecture of selected entities.

The design of a schematic view poses a number of problems. Some of these are:

- How to automatically lay out the connecting signals between entities. Should the user be able to override this layout? Some work on layout schemes has already been done in the context of FlowView, a MultiView flowchart view [5].
- How to present information regarding the internal structure of each entity to the user. Some sort of elision and/or zoom mechanism would be required.
- How to handle multiple interconnected units that form a model. Within the MultiView system each unit is self-contained, so if a model in one unit references an external entity, a way needs to be found for the schematic view to easily find the correct unit.

If the schematic view were to be interfaced to a VHDL simulation system, a number of technical problems would arise, including the handling of asynchronous messages from both the simulator and the MultiView kernel concurrently.
{"Source-Url": "http://www.eda.org/VIUF_proc/Spring95/PHILLIPS95A.PDF", "len_cl100k_base": 7779, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 14049, "total-output-tokens": 9283, "length": "2e12", "weborganizer": {"__label__adult": 0.00034117698669433594, "__label__art_design": 0.0004978179931640625, "__label__crime_law": 0.00021278858184814453, "__label__education_jobs": 0.0004732608795166016, "__label__entertainment": 6.80685043334961e-05, "__label__fashion_beauty": 0.00015044212341308594, "__label__finance_business": 0.00014793872833251953, "__label__food_dining": 0.0002548694610595703, "__label__games": 0.0004642009735107422, "__label__hardware": 0.00429534912109375, "__label__health": 0.00032210350036621094, "__label__history": 0.00019359588623046875, "__label__home_hobbies": 9.751319885253906e-05, "__label__industrial": 0.0005979537963867188, "__label__literature": 0.00016570091247558594, "__label__politics": 0.0001844167709350586, "__label__religion": 0.0005478858947753906, "__label__science_tech": 0.026397705078125, "__label__social_life": 4.863739013671875e-05, "__label__software": 0.00701141357421875, "__label__software_dev": 0.95654296875, "__label__sports_fitness": 0.00025177001953125, "__label__transportation": 0.0005326271057128906, "__label__travel": 0.00016164779663085938}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40965, 0.02458]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40965, 0.63304]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40965, 0.91441]], "google_gemma-3-12b-it_contains_pii": [[0, 3784, false], [3784, 8398, null], [8398, 12530, null], [12530, 16763, null], [16763, 18328, null], [18328, 22413, null], [22413, 26637, null], [26637, 29802, null], [29802, 32123, null], [32123, 36669, null], [36669, 40105, null], [40105, 40965, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3784, true], [3784, 8398, null], [8398, 12530, null], [12530, 16763, null], [16763, 18328, null], [18328, 22413, null], [22413, 26637, null], [26637, 29802, null], [29802, 32123, null], [32123, 36669, null], [36669, 40105, null], [40105, 40965, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40965, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40965, null]], "pdf_page_numbers": [[0, 3784, 1], [3784, 8398, 2], [8398, 12530, 3], [12530, 16763, 4], [16763, 18328, 5], [18328, 22413, 6], [22413, 26637, 7], [26637, 29802, 8], [29802, 32123, 9], [32123, 36669, 10], [36669, 40105, 11], [40105, 40965, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40965, 0.06047]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
2aba9b3b65ad83d626ad00d38b670e0621079829
[REMOVED]
{"Source-Url": "http://web.mysites.ntu.edu.sg/fcbond/open/pubs/2007-IWIC-lextypedb.pdf", "len_cl100k_base": 6526, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 33394, "total-output-tokens": 9467, "length": "2e12", "weborganizer": {"__label__adult": 0.0017862319946289062, "__label__art_design": 0.002780914306640625, "__label__crime_law": 0.0011119842529296875, "__label__education_jobs": 0.0240631103515625, "__label__entertainment": 0.0019330978393554688, "__label__fashion_beauty": 0.0008697509765625, "__label__finance_business": 0.0010080337524414062, "__label__food_dining": 0.0011873245239257812, "__label__games": 0.00383758544921875, "__label__hardware": 0.001140594482421875, "__label__health": 0.0022296905517578125, "__label__history": 0.002391815185546875, "__label__home_hobbies": 0.0002837181091308594, "__label__industrial": 0.0011444091796875, "__label__literature": 0.18798828125, "__label__politics": 0.0015926361083984375, "__label__religion": 0.00299835205078125, "__label__science_tech": 0.309814453125, "__label__social_life": 0.00121307373046875, "__label__software": 0.047698974609375, "__label__software_dev": 0.3994140625, "__label__sports_fitness": 0.001056671142578125, "__label__transportation": 0.001822471618652344, "__label__travel": 0.0005121231079101562}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37626, 0.01844]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37626, 0.77371]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37626, 0.86887]], "google_gemma-3-12b-it_contains_pii": [[0, 2255, false], [2255, 5612, null], [5612, 8452, null], [8452, 11017, null], [11017, 13470, null], [13470, 16541, null], [16541, 18953, null], [18953, 20328, null], [20328, 23409, null], [23409, 24926, null], [24926, 27995, null], [27995, 30808, null], [30808, 33810, null], [33810, 37227, null], [37227, 37626, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2255, true], [2255, 5612, null], [5612, 8452, null], [8452, 11017, null], [11017, 13470, null], [13470, 16541, null], [16541, 18953, null], [18953, 20328, null], [20328, 23409, null], [23409, 24926, null], [24926, 27995, null], [27995, 30808, null], [30808, 33810, null], [33810, 37227, null], [37227, 37626, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37626, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37626, null]], "pdf_page_numbers": [[0, 2255, 1], [2255, 5612, 2], [5612, 8452, 3], [8452, 11017, 4], [11017, 13470, 5], [13470, 16541, 6], [16541, 18953, 7], [18953, 20328, 8], [20328, 23409, 9], [23409, 24926, 10], [24926, 27995, 11], [27995, 30808, 12], [30808, 33810, 
13], [33810, 37227, 14], [37227, 37626, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37626, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
94abc652f4b48c247a10ce062e7a064185295f5f
Perl For Computer Science Grad Students
Walt Mankowski (walt@cs.drexel.edu)
Drexel University
YAPC::NA, 22 June 2010

Who am I?
- CS grad student at Drexel, 5 1/2 years
- No, I don't know when I'm going to defend
- "Canonical Behavior Patterns"

What I thought CS grad school would be like

"I think you should be more explicit here in Step Two."

"LISP is over half a century old and it still has this perfect, timeless aura about it. I wonder if the cycles will continue forever. A few coders from each new generation rediscovering the LISP arts. These are your father's parentheses. Elegant weapons for a more... civilized age."

What CS grad school is really like

"PROCESS THE DATA USING METHOD X." "REALLY? Yeah, remember? I spent weeks on it because you insisted it should work." "HMM. Should I try it again?" "No, no. That would be a waste of effort. I'll ask one of the other students to do it." "I'm... actually ok with that."

Math, CS stuff

Finding Canonical Behaviors in User Protocols

Walter C. Mankowski, Peter Bogunovich, Ali Shokoufandeh, and Dario D. Salvucci
Department of Computer Science, Drexel University
Philadelphia, PA 19104, USA
{walt, pjb38, ashokouf, salvucci}@cs.drexel.edu

ABSTRACT

While the collection of behavioral protocols has been common practice in human-computer interaction research for many years, the analysis of large protocol data sets is often extremely tedious and time-consuming, and automated analysis methods have been slow to develop. This paper proposes an automated method of protocol analysis to find canonical behaviors — a small subset of protocols that is most representative of the full data set, providing a reasonable "big picture" view of the data with as few protocols as possible. The automated method takes advantage of recent algorithmic developments in computational vision, modifying them to allow for distance measures between behavioral protocols. The paper includes an application of the method to web-browsing protocols, showing how the canonical behaviors found by the method match well to sets of behaviors identified by expert human coders.

Author Keywords
Protocol analysis, sequential data analysis, web browsing

ACM Classification Keywords
H.1.2 Models and Principles: User/Machine Systems; H.5.2 Information Interfaces and Presentation: User Interfaces

INTRODUCTION

In the study of user behavior, researchers and practitioners alike often collect data in the form of behavioral protocols — sequences of actions recorded during the execution of a task. The analysis of these protocols provides a wealth of information about user behavior, and thus protocol data have been collected for a wide range of data types and utilized in a wide variety of ways. For instance, protocols have been employed for examining manual actions such as mouse clicks and keystrokes [e.g., 2], verbal reports [e.g., 7], and eye movements [e.g., 1], and sometimes combinations of these and other data [e.g., 9]. Based on this rich set of data, researchers have used protocols for such varied purposes as exploratory data analysis [e.g., 15], classification and recognition [14], and cognitive modeling [e.g., 17]. At the same time, the richness and quantity of protocol data have a severe limitation: it is typically difficult, if not impossible, to process and analyze the data by hand, thus requiring some form of data aggregation to make analysis and understanding feasible.
While such aggregation provides information with respect to overall behavior, it washes over the interesting patterns that may appear in particular protocol instances from individual users.

In this paper we introduce the idea of canonical behaviors in user protocols and propose a computational method to identify them in an automated way. Canonical behaviors are a small subset of the protocol data set that is most representative of the entire data set, providing a reasonable "big picture" view of the data with as few protocols as possible. The identification of canonical behaviors is often performed laboriously by hand in standard protocol analysis; for instance, eye-movement or verbal protocol analysis often includes the identification and dissection of a few individual protocols that exemplify interesting strategies in the task [see, e.g., 7, 14]. While methods for automated protocol analysis have been studied in previous efforts, the methods focus on aligning observed protocols with the predicted behaviors of a user model [e.g., 13, 16]. Our work takes a very different but complementary approach, using a specification of the similarity between behaviors to automatically identify canonical behaviors in a set of user protocols.

To illustrate our approach in a real-world domain, we apply the method to the domain of web browsing. Recent efforts [e.g., 4, 5] have analyzed web-browsing behavior in a number of ways, typically involving some type of aggregation across individual user protocols. There has been some work on the analysis of individual protocols by aligning browsing protocols with the predictions of a cognitive or task model [3, 11, 12]. We aim to complement this work by proposing a method for finding canonical web-browsing behaviors without the need for any type of a priori model. The resulting canonical behaviors could be useful, for instance, in determining standard paths to desired information on a given web site, or in identifying circuitous paths of confused users and revising the web site accordingly.

FINDING CANONICAL BEHAVIORS

Our technique for computing canonical behaviors derives from work in the area of computational vision, where techniques have been developed to identify canonical members of a class of visual patterns [6]. In the context of user proto-

Computing the Canonical Subsets of User Protocols

Walter Mankowski, Peter Bogunovich, Ali Shokoufandeh, and Dario Salvucci
Vision and Cognition Laboratory, Drexel University, Philadelphia, PA, USA

Introduction
• We are interested in automatically finding canonical behaviors — a small subset of user protocols that is most representative of the full data set.
• The collection of user protocols is common in human factors research, but analyzing the large datasets produced can be tedious and time-consuming.
• Our canonical set algorithm can automatically identify canonical behaviors with no a priori model; all that is needed is a similarity measure between pairs of protocols.

Canonical Behaviors
Small subset of protocols which a) is most representative of the entire set, and b) contains as few protocols as possible

Problems:
a) Too much data to analyze by hand
b) Data aggregation often necessary
c) Interesting patterns can be missed

Canonical Sets
Denton's approximation algorithm automatically finds the most representative samples from a set of patterns.
Finding the canonical set can be expressed as an optimization problem, where the goal is to minimize the weights of the intra edges while simultaneously maximizing the weights of the cut edges. This is known to be NP-Complete, so we employ an approximation algorithm (Denton, 2008):

1. Formulate as an integer programming problem
2. Relax to a semidefinite or quadratic program
3. Use an off-the-shelf solver in MATLAB

There is one free parameter, $\lambda \in [0, 1]$, which scales the weighting given to cut edges versus intra edges.

Web Browsing Experiment
• Each subject was asked 32 questions about a college website (www.cmu.edu)
  - No search engines permitted
  - 12 users (2 female, 10 male) unfamiliar with the site
  - User protocol = series of pages visited by each user to answer each question
• Found canonical protocols, then compared with "ground truth" — 2 expert human coders who identified distinct behaviors by hand for each question, using their own judgement of the best division of "similar" and "different" behaviors
• Similarity measures:
  - edit distance (Mankowski et al., CHI 2009)
  - similarity of the induced subgraphs (Mankowski et al., GbR 2009)

Discussion
We have presented an automated method for finding canonical subsets of user protocols that matches well with those found by expert human coders. Potential uses include:
• Extracting critical protocols to facilitate the development of cognitive models of user behavior
• Analyzing a corpus of protocols by associating each behavior to its most similar canonical behavior

Future Work
• Development of similarity measures to support multi-modal data
• Development of stability measures suited to behavioral data, e.g. incorporating the timing of the actions in the protocol
• Larger-scale experiments

Results

"LAST NIGHT I DRIFTED OFF WHILE READING A LISP BOOK." "Huh?" "SUDDENLY, I WAS BATHED IN A SUFFUSION OF BLUE. AT ONCE, JUST LIKE THEY SAID, I FELT A GREAT ENLIGHTENMENT. I SAW THE NAKED STRUCTURE OF LISP CODE UNFOLD BEFORE ME." "MY GOD, IT'S FULL OF CARS." "THE PATTERNS AND METAPATTERNS DANCED. SYNTAX FADED, AND I SWAM IN THE PURITY OF QUANTIFIED CONCEPTION. OF IDEAS MANIFEST. TRULY, THIS WAS THE LANGUAGE FROM WHICH THE GODS WROUGHT THE UNIVERSE." "No, it's not." "It's not?" "I MEAN, OSTENSIBLY, YES. HONESTLY, WE HACKED MOST OF IT TOGETHER WITH PERL."

No one else in the department uses Perl
What do they use? Matlab, Java, Python, C++

What's this talk about?
- Perl as a glue language
- When to use Perl?
- When not to use Perl?
- Lessons learned

What do grad students do?

HOW GRAD SCHOOL IS JUST LIKE KINDERGARTEN: ALL DAY NAPPING IS ACCEPTABLE / THERE IS CONSTANT ADULT SUPERVISION / YOU GET COOKIES FOR LUNCH / MOST COMMON ACTIVITY: CUTTING AND PASTING / THERE ARE NO GRADES (YOU JUST HAVE TO PLAY WELL WITH OTHERS) / CRYING FOR YOUR MOMMY IS NORMAL (WWW.PHDCOMICS.COM)

1. take classes
2. do research
3. etc.
Intro to Computer Graphics

Program Generation and Optimization

Three performance charts: "Mini MMM Performance" for 64 x 64, 256 x 256, and 512 x 512 matrices, each plotting Mflops vs. NB for five variants: naive ijk; blocking; blocking+unrolling, k=2; blocking+unrolling, k=4, ijk; blocking+unrolling, k=4, jik.

The gnuplot script for the 64 x 64 chart:

```
set term postscript color
set xlabel 'NB'
set ylabel 'Mflops'
set title "Mini MMM Performance (64 x 64 Matrices)\nMflops vs NB"
set grid
set key bottom
set xtics 16,4,80
set out "nb_mflops.64.eps"
plot 'nb_mflops.64.out' u 1:2 t 'naive ijk' w l
set out "nb_mflops.64.eps"
replot 'nb_mflops.64.out' u 1:3 t 'blocking' w l
set out "nb_mflops.64.eps"
replot 'nb_mflops.64.out' u 1:4 t 'blocking+unrolling, k=2' w l
set out "nb_mflops.64.eps"
replot 'nb_mflops.64.out' u 1:5 t 'blocking+unrolling, k=4, ijk' w l
set out "nb_mflops.64.eps"
replot 'nb_mflops.64.out' u 1:6 t 'blocking+unrolling, k=4, jik' w l
```

The same script is needed again for the 256 x 256 and 512 x 512 matrices, with only the size changed; this is a natural job for a little Perl glue script:

```perl
#!/usr/bin/perl -w
use strict;
use autodie;

my $n = $ARGV[0];
my $datafile = "nb_mflops.$n.out";
my $outfile  = "nb_mflops.$n.eps";

open my $GP, "|-", "gnuplot";
print $GP <<EOT;
set term postscript color
set xlabel 'NB'
set ylabel 'Mflops'
set title "Mini MMM Performance ($n x $n Matrices)\nMflops vs NB"
set grid
set key bottom
set xtics 16,4,80
set out "$outfile"
plot '$datafile' u 1:2 t 'naive ijk' w l
set out "$outfile"
replot '$datafile' u 1:3 t 'blocking' w l
set out "$outfile"
replot '$datafile' u 1:4 t 'blocking+unrolling, k=2' w l
set out "$outfile"
replot '$datafile' u 1:5 t 'blocking+unrolling, k=4, ijk' w l
set out "$outfile"
replot '$datafile' u 1:6 t 'blocking+unrolling, k=4, jik' w l
EOT
```

see also Chart::Gnuplot
see also Chart::Clicker

SQLite
- SQLite's different... embedded, in-process library
- very small, very fast
- great for grad student projects
- simple types (like in Perl)
- simple security model
- database is a file: check it into a vc, scp it to a faster box, email it to collaborators

Research projects
- Very different from code for classes or jobs
- Grad student research is about getting results and writing papers
- The point is the results. Code is a way to get results.
- No one cares how beautiful your code is. No one cares how shitty your code is.
- Remember, this is research. No one's ever done it before. Lots of things you try won't work.
- Optimize for flexibility
- This sounds a lot like agile programming, doesn't it?
Raw data → SQLite

Determining Canonical Set (Denton et al., 2004)
1. Create a complete graph where edge weights are the similarity between the corresponding behaviors
2. Minimize the weights of the intra edges while simultaneously maximizing the weights of the cut edges
3. This optimization is known to be NP-Complete

Minimize
\[
\lambda_1 \left( \frac{1}{4} \sum_{i,j} W_{ij} (1 + y_i)(1 + y_j) \right)
+ \lambda_2 \left( \frac{1}{2} \sum_{i,j} W_{ij} (y_i - y_j) \right)
+ \lambda_3 \left( \frac{1}{2} \sum_{i=1}^{n} t_i (1 - y_i) \right)
\]
subject to
\[
\frac{1}{2} \sum_{i=1}^{n} (1 + y_i) - k_{\min} \geq 0, \qquad
k_{\max} - \frac{1}{2} \sum_{i=1}^{n} (1 + y_i) \geq 0, \qquad
y_i \in \{-1, +1\}, \quad \forall\, 1 \leq i \leq n
\]

quadprog() (in the Optimization Toolbox)

Running Matlab from Perl
- $res = `matlab -r ...`
- use Expect.pm to automate a Matlab session
- Math::Matlab

Earth Movers Distance

Other useful modules
- Graph.pm
- Math::MatrixReal
- Set::Scalar

TO PHD OR NOT TO PHD... THAT IS THE QUESTION. WHETHER 'TIS SANER IN THE MIND TO SUFFER THE SLIGHTS AND PUT-DOWNS OF OUTRAGEOUS FACULTY... OR TO TAKE DATA DESPITE ADVISOR GRUMBLING? AND BY GRADUATING, END THEM? TO GRADUATE: TO SLEEP; ONCE MORE; AND, BY A PHD TO SAY WE END THE BACKACHE AND THOUSAND FINANCIAL DEBTS THAT GRADS ARE HEIR TO. 'TIS A COMMENCEMENT DEVOUTLY TO BE WISH'D. TO GRADUATE, TO SLEEP... TO NAP: PERCHANCE TO DREAM...

phd.stanford.edu

THANKS!

simulator (Beusmans & Rensink, 1995). The simulated environment was a three-lane highway in a construction zone with driving restricted to the center lane, as shown in Figure 1. The road alternated between segments of straight roadway and segments of various curvatures, all of which could be negotiated comfortably at highway speeds without braking. The driver followed three cars and was tailed by another car, which was visible in the simulated rear-view mirror. The speed of the lead car varied from 5-35 m/s (11-78 mph) according to a sum of three sinusoids that resulted in an apparently random pattern. The rear car followed at a distance of 9-21 m (29-68 ft), also varying as the sum of three sinusoids. Cones on either side of the center lane prevented drivers from passing other cars and emphasized the need for maintaining a central lane position. Thus, the cell-phone dialing scenario could be thought of in terms of the driver being caught up in a construction zone and needing to call several people to notify them of a delay.

For the dialing task, we employed a commercially-available cell phone (Samsung SCH-3500 with Sprint PCS), shown in Figure 2. This phone (like many similar phones) allows for multiple methods of dialing. In order to examine differential effects of various dialing methods, we chose four of the phone's built-in methods, which can be described as follows:

- **Manual**: dial the phone number and press Talk
- **Speed**: dial the party's single-digit "speed number" and press Talk
- **Menu**: press the up arrow to access the menu, scroll down to the desired party with the down arrow, and press Talk
- **Voice**: press and hold Talk, say the party's name when prompted, and wait for confirmation

Table 1 shows examples of using each of these dialing methods to make a call. Note that two of the methods, speed and menu dialing, require that numbers be added to an internal phone book and associated with a unique "speed number." The four methods thus serve well to illustrate our modeling approach for comparing effects of different dialing methods on driver performance.
**The Integrated Dialing-Driving Model**

The prediction of effects of dialing on driving centers on an integrated cognitive model that combines individual models for each task. To facilitate the development and integration of these models, we implemented the models in the ACT-R cognitive architecture (Anderson & Lebiere, 1998). ACT-R is a production-system architecture based on condition-action rules that execute the specified actions when the specified conditions are met. Like most cognitive architectures, ACT-R provides a rigorous framework for cognitive models as well as a set of built-in parameters and constraints on cognition and perceptual-motor behavior (when using ACT-R/PM: Byrne & Anderson, 1998); the parameters facilitate a priori predictions about behavior, while the constraints facilitate more psychologically (and neurally) plausible models. In addition, the architecture allows for straightforward integration of models of multiple tasks: generally speaking, the modeler can combine the models' rule sets and modify the rules to interleave the multiple tasks (see Salvucci, 2001). All these qualities of the architecture are essential to our ability to integrate models of dialing and driving to predict the effects of each task on the other.

**Dialing Models**

We first consider the development of the cognitive models for dialing the cell phone using each of the four methods. To this end we employed a straightforward task analysis and implemented a simple, minimal model for each method based on this analysis. The procedure required by the cell phone highly constrains the model in that it specifies the sequence of keypresses needed to dial. However, the model must also incorporate the cognitive and perceptual processes needed to execute the procedure.
{"Source-Url": "http://www.mawode.com/~waltman/talks/gradperl.pdf", "len_cl100k_base": 6231, "olmocr-version": "0.1.49", "pdf-total-pages": 111, "total-fallback-pages": 0, "total-input-tokens": 137186, "total-output-tokens": 10247, "length": "2e12", "weborganizer": {"__label__adult": 0.0005297660827636719, "__label__art_design": 0.0012140274047851562, "__label__crime_law": 0.0004417896270751953, "__label__education_jobs": 0.02508544921875, "__label__entertainment": 0.00022470951080322263, "__label__fashion_beauty": 0.00028586387634277344, "__label__finance_business": 0.00041103363037109375, "__label__food_dining": 0.0004949569702148438, "__label__games": 0.0007281303405761719, "__label__hardware": 0.0016689300537109375, "__label__health": 0.0007290840148925781, "__label__history": 0.0005207061767578125, "__label__home_hobbies": 0.00035643577575683594, "__label__industrial": 0.0005998611450195312, "__label__literature": 0.0013189315795898438, "__label__politics": 0.00033593177795410156, "__label__religion": 0.0006699562072753906, "__label__science_tech": 0.1236572265625, "__label__social_life": 0.0006017684936523438, "__label__software": 0.01934814453125, "__label__software_dev": 0.8193359375, "__label__sports_fitness": 0.00038909912109375, "__label__transportation": 0.000972270965576172, "__label__travel": 0.0002188682556152344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22244, 0.02922]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22244, 0.25118]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22244, 0.83815]], "google_gemma-3-12b-it_contains_pii": [[0, 118, false], [118, 118, null], [118, 118, null], [118, 118, null], [118, 118, null], [118, 118, null], [118, 118, null], [118, 128, null], [128, 154, null], [154, 166, null], [166, 208, null], [208, 238, null], [238, 275, null], [275, 275, null], [275, 275, null], [275, 474, null], [474, 760, null], [760, 795, null], [795, 1060, null], [1060, 1075, null], [1075, 5874, null], [5874, 8751, null], [8751, 8751, null], [8751, 8751, null], [8751, 9300, null], [9300, 9340, null], [9340, 9358, null], [9358, 9365, null], [9365, 9370, null], [9370, 9377, null], [9377, 9381, null], [9381, 9381, null], [9381, 9405, null], [9405, 9429, null], [9429, 9447, null], [9447, 9469, null], [9469, 9485, null], [9485, 9511, null], [9511, 9808, null], [9808, 9824, null], [9824, 9839, null], [9839, 9847, null], [9847, 9874, null], [9874, 9874, null], [9874, 9874, null], [9874, 9910, null], [9910, 10075, null], [10075, 10233, null], [10233, 10400, null], [10400, 10400, null], [10400, 11003, null], [11003, 11612, null], [11612, 12221, null], [12221, 12824, null], [12824, 13330, null], [13330, 14044, null], [14044, 14757, null], [14757, 15471, null], [15471, 16184, null], [16184, 16208, null], [16208, 16232, null], [16232, 16239, null], [16239, 16261, null], [16261, 16290, null], [16290, 16301, null], [16301, 16311, null], [16311, 16311, null], [16311, 16311, null], [16311, 16343, null], [16343, 16371, null], [16371, 16393, null], [16393, 16412, null], [16412, 16431, null], [16431, 16454, null], [16454, 16480, null], [16480, 16498, null], [16498, 16543, null], [16543, 16609, null], [16609, 16634, null], [16634, 16663, null], [16663, 16703, null], [16703, 16740, null], [16740, 16767, null], [16767, 16796, null], [16796, 16830, null], [16830, 16855, null], [16855, 16909, null], [16909, 16909, null], [16909, 16918, null], [16918, 
16935, null], [16935, 16935, null], [16935, 16935, null], [16935, 16955, null], [16955, 16955, null], [16955, 17215, null], [17215, 17663, null], [17663, 17663, null], [17663, 17700, null], [17700, 17725, null], [17725, 17748, null], [17748, 17789, null], [17789, 17802, null], [17802, 17824, null], [17824, 17824, null], [17824, 17845, null], [17845, 17854, null], [17854, 17871, null], [17871, 17883, null], [17883, 18344, null], [18344, 18352, null], [18352, 22244, null]], "google_gemma-3-12b-it_is_public_document": [[0, 118, true], [118, 118, null], [118, 118, null], [118, 118, null], [118, 118, null], [118, 118, null], [118, 118, null], [118, 128, null], [128, 154, null], [154, 166, null], [166, 208, null], [208, 238, null], [238, 275, null], [275, 275, null], [275, 275, null], [275, 474, null], [474, 760, null], [760, 795, null], [795, 1060, null], [1060, 1075, null], [1075, 5874, null], [5874, 8751, null], [8751, 8751, null], [8751, 8751, null], [8751, 9300, null], [9300, 9340, null], [9340, 9358, null], [9358, 9365, null], [9365, 9370, null], [9370, 9377, null], [9377, 9381, null], [9381, 9381, null], [9381, 9405, null], [9405, 9429, null], [9429, 9447, null], [9447, 9469, null], [9469, 9485, null], [9485, 9511, null], [9511, 9808, null], [9808, 9824, null], [9824, 9839, null], [9839, 9847, null], [9847, 9874, null], [9874, 9874, null], [9874, 9874, null], [9874, 9910, null], [9910, 10075, null], [10075, 10233, null], [10233, 10400, null], [10400, 10400, null], [10400, 11003, null], [11003, 11612, null], [11612, 12221, null], [12221, 12824, null], [12824, 13330, null], [13330, 14044, null], [14044, 14757, null], [14757, 15471, null], [15471, 16184, null], [16184, 16208, null], [16208, 16232, null], [16232, 16239, null], [16239, 16261, null], [16261, 16290, null], [16290, 16301, null], [16301, 16311, null], [16311, 16311, null], [16311, 16311, null], [16311, 16343, null], [16343, 16371, null], [16371, 16393, null], [16393, 16412, null], [16412, 16431, null], [16431, 16454, null], [16454, 16480, null], [16480, 16498, null], [16498, 16543, null], [16543, 16609, null], [16609, 16634, null], [16634, 16663, null], [16663, 16703, null], [16703, 16740, null], [16740, 16767, null], [16767, 16796, null], [16796, 16830, null], [16830, 16855, null], [16855, 16909, null], [16909, 16909, null], [16909, 16918, null], [16918, 16935, null], [16935, 16935, null], [16935, 16935, null], [16935, 16955, null], [16955, 16955, null], [16955, 17215, null], [17215, 17663, null], [17663, 17663, null], [17663, 17700, null], [17700, 17725, null], [17725, 17748, null], [17748, 17789, null], [17789, 17802, null], [17802, 17824, null], [17824, 17824, null], [17824, 17845, null], [17845, 17854, null], [17854, 17871, null], [17871, 17883, null], [17883, 18344, null], [18344, 18352, null], [18352, 22244, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22244, null]], 
"google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22244, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22244, null]], "pdf_page_numbers": [[0, 118, 1], [118, 118, 2], [118, 118, 3], [118, 118, 4], [118, 118, 5], [118, 118, 6], [118, 118, 7], [118, 128, 8], [128, 154, 9], [154, 166, 10], [166, 208, 11], [208, 238, 12], [238, 275, 13], [275, 275, 14], [275, 275, 15], [275, 474, 16], [474, 760, 17], [760, 795, 18], [795, 1060, 19], [1060, 1075, 20], [1075, 5874, 21], [5874, 8751, 22], [8751, 8751, 23], [8751, 8751, 24], [8751, 9300, 25], [9300, 9340, 26], [9340, 9358, 27], [9358, 9365, 28], [9365, 9370, 29], [9370, 9377, 30], [9377, 9381, 31], [9381, 9381, 32], [9381, 9405, 33], [9405, 9429, 34], [9429, 9447, 35], [9447, 9469, 36], [9469, 9485, 37], [9485, 9511, 38], [9511, 9808, 39], [9808, 9824, 40], [9824, 9839, 41], [9839, 9847, 42], [9847, 9874, 43], [9874, 9874, 44], [9874, 9874, 45], [9874, 9910, 46], [9910, 10075, 47], [10075, 10233, 48], [10233, 10400, 49], [10400, 10400, 50], [10400, 11003, 51], [11003, 11612, 52], [11612, 12221, 53], [12221, 12824, 54], [12824, 13330, 55], [13330, 14044, 56], [14044, 14757, 57], [14757, 15471, 58], [15471, 16184, 59], [16184, 16208, 60], [16208, 16232, 61], [16232, 16239, 62], [16239, 16261, 63], [16261, 16290, 64], [16290, 16301, 65], [16301, 16311, 66], [16311, 16311, 67], [16311, 16311, 68], [16311, 16343, 69], [16343, 16371, 70], [16371, 16393, 71], [16393, 16412, 72], [16412, 16431, 73], [16431, 16454, 74], [16454, 16480, 75], [16480, 16498, 76], [16498, 16543, 77], [16543, 16609, 78], [16609, 16634, 79], [16634, 16663, 80], [16663, 16703, 81], [16703, 16740, 82], [16740, 16767, 83], [16767, 16796, 84], [16796, 16830, 85], [16830, 16855, 86], [16855, 16909, 87], [16909, 16909, 88], [16909, 16918, 89], [16918, 16935, 90], [16935, 16935, 91], [16935, 16935, 92], [16935, 16955, 93], [16955, 16955, 94], [16955, 17215, 95], [17215, 17663, 96], [17663, 17663, 97], [17663, 17700, 98], [17700, 17725, 99], [17725, 17748, 100], [17748, 17789, 101], [17789, 17802, 102], [17802, 17824, 103], [17824, 17824, 104], [17824, 17845, 105], [17845, 17854, 106], [17854, 17871, 107], [17871, 17883, 108], [17883, 18344, 109], [18344, 18352, 110], [18352, 22244, 111]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22244, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
af2798d8325e18196f4cf5cb18dbd28b71c62ed0
Automatic Compiler-Inserted Prefetching for Pointer-Based Applications

Chi-Keung Luk and Todd C. Mowry

Abstract—As the disparity between processor and memory speeds continues to grow, memory latency is becoming an increasingly important performance bottleneck. While software-controlled prefetching is an attractive technique for tolerating this latency, its success has been limited thus far to array-based numeric codes. In this paper, we expand the scope of automatic compiler-inserted prefetching to also include the recursive data structures commonly found in pointer-based applications. We propose three compiler-based prefetching schemes, and automate the most widely applicable scheme (greedy prefetching) in an optimizing research compiler. Our experimental results demonstrate that compiler-inserted prefetching can offer significant performance gains on both uniprocessors and large-scale shared-memory multiprocessors.

Keywords—Caches, prefetching, pointer-based applications, recursive data structures, compiler optimization, shared-memory multiprocessors, performance evaluation.

I. INTRODUCTION

SOFTWARE-controlled data prefetching [1], [2] offers the potential for bridging the ever-increasing speed gap between the memory subsystem and today's high-performance processors. In recognition of this potential, a number of recent processors have added support for prefetch instructions [3], [4], [5]. While prefetching has enjoyed considerable success in array-based numeric codes [6], its potential in pointer-based applications has remained largely unexplored. This paper investigates compiler-inserted prefetching for pointer-based applications—in particular, those containing recursive data structures.

Recursive Data Structures (RDSs) include familiar objects such as linked lists, trees, graphs, etc., where individual nodes are dynamically allocated from the heap, and nodes are linked together through pointers to form the overall structure. For our purposes, "recursive data structures" can be broadly interpreted to include most pointer-linked data structures (e.g., mutually-recursive data structures, or even a graph of heterogeneous objects). From a memory performance perspective, these pointer-based data structures are expected to be an important concern for the following reasons. For an application to suffer a large memory penalty due to data replacement misses, it typically must have a large data set relative to the cache size. Aside from multi-dimensional arrays, recursive data structures are one of the most common and convenient methods of building large data structures (e.g., B-trees in database applications, octrees in graphics applications, etc.). As we traverse a large RDS, we may potentially visit enough intervening nodes to displace a given node from the cache before it is revisited; hence temporal locality may be poor. Finally, in contrast with arrays—where consecutive elements are at contiguous addresses—there is little inherent spatial locality between consecutively-accessed nodes in an RDS, since they are dynamically allocated at arbitrary addresses.

To cope with the latency of accessing these pointer-based data structures, we propose three compiler-based schemes for prefetching RDSs, as described in Section II. We implemented the most widely-applicable of these schemes—greedy prefetching—in a modern research compiler (SUIF [7]), as discussed in Section III.
To evaluate our schemes, we performed detailed simulations of their impact on both uniprocessor and multiprocessor systems in Sections IV and V, respectively. Finally, we present related work and conclusions in Sections VI and VII.

II. SOFTWARE-CONTROLLED PREFETCHING FOR RDSs

A key challenge in successfully prefetching RDSs is scheduling the prefetches sufficiently far in advance to fully hide the latency, while introducing minimal runtime overhead. In contrast with array-based codes, where the prefetching distance can be easily controlled using software pipelining [2], the fundamental difficulty with RDSs is that we must first dereference pointers to compute the prefetch addresses. Getting several nodes ahead in an RDS traversal typically involves following a pointer chain. However, the very act of touching these intermediate nodes along the pointer chain means that we cannot tolerate the latency of fetching more than one node ahead. To overcome this pointer-chasing problem [8], we propose three schemes for generating prefetch addresses without following the entire pointer chain. The first two schemes—greedy prefetching and history-pointer prefetching—use a pointer within the current node as the prefetching address; the difference is that greedy prefetching uses existing pointers, whereas history-pointer prefetching creates new pointers. The third scheme—data-linearization prefetching—generates prefetch addresses without pointer dereferences.

A. Greedy Prefetching

In a k-ary RDS, each node contains k pointers to other nodes. Greedy prefetching exploits the fact that when \( k > 1 \), only one of these \( k \) neighbors can be immediately followed as the next node in the traversal, but there is often a good chance that the other neighbors will be visited sometime in the future. Therefore, by prefetching all \( k \) pointers when a node is first visited, we hope that enough of these prefetches are successful to hide at least some fraction of the miss latency.

To illustrate how greedy prefetching works, consider the pre-order traversal of a binary tree (i.e. \( k = 2 \)), where Figure 1(a) shows the code with greedy prefetching added. Assuming that the computation in `process()` takes half as long as the cache miss latency \( L \), we would want to prefetch two nodes ahead to fully hide the latency. Figure 1(b) shows the caching behavior of each node. We obviously suffer a full cache miss at the root node (node 1), since there was no opportunity to fetch it ahead of time. However, we would suffer only half of the miss penalty (\( L/2 \)) when we visit node 2, and no miss penalty when we eventually visit node 3 (since the time to visit the subtree rooted at node 2 is greater than \( L \)). In this example, the latency is fully hidden for roughly half of the nodes, and reduced by 50% for the other half (minus the root node).

Greedy prefetching offers the following advantages: (i) it has low runtime overhead, since no additional storage or computation is needed to construct the prefetch pointers; (ii) it is applicable to a wide variety of RDSs, regardless of how they are accessed or whether their structure is modified frequently; and (iii) it is relatively straightforward to implement in a compiler—in fact, we have implemented it in the SUIF compiler, as we describe later in Section III. The main disadvantage of greedy prefetching is that it does not offer precise control over the prefetching distance, which is the motivation for our next algorithm.
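Figure 1(a) is not reproduced here, but the following minimal C sketch conveys the idea: a pre-order traversal with greedy prefetches inserted, using GCC's __builtin_prefetch as a stand-in for whatever prefetch instruction the compiler actually emits (the node type and the process() routine are illustrative assumptions):

```c
typedef struct tree {
    struct tree *left, *right;
    int data;
} tree_t;

static void process(int data) { (void)data; }  /* per-node work, elided */

void preorder(tree_t *t)
{
    if (t == NULL)
        return;
    /* Greedy prefetching: launch prefetches of all k (= 2) child
     * pointers as soon as this node is visited.  Only one child can
     * be followed immediately, but the other is likely to be visited
     * soon, so its prefetch has time to complete. */
    __builtin_prefetch(t->left);
    __builtin_prefetch(t->right);
    process(t->data);
    preorder(t->left);
    preorder(t->right);
}
```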
B. History-Pointer Prefetching

Rather than relying on existing pointers to approximate prefetch addresses, we can potentially synthesize more accurate pointers based on the observed RDS traversal patterns. To prefetch \( d \) nodes ahead under the history-pointer prefetching scheme [8], we add a new pointer (called a history-pointer) to a node \( n_i \) to record the observed address of \( n_{i+d} \) (the node visited \( d \) nodes after \( n_i \)) on a recent traversal of the RDS. On subsequent traversals of the RDS, we prefetch the nodes pointed to by these history-pointers. This scheme is most effective when the traversal pattern does not change rapidly over time.

To construct the history-pointers, we maintain a FIFO queue of length \( d \) which contains pointers to the last \( d \) nodes that have just been visited. When we visit a new node \( n_i \), the oldest node in the queue will be \( n_{i-d} \) (i.e. the node visited \( d \) nodes earlier), and hence we update the history-pointer of \( n_{i-d} \) to point to \( n_i \). After the first complete traversal of the RDS, all of the history-pointers will be set. In contrast with greedy prefetching, history-pointer prefetching offers no improvement on the first traversal of an RDS, but can potentially hide all of the latency on subsequent traversals.

While history-pointer prefetching offers the potential advantage of improved latency tolerance, this comes at the expense of (i) execution overhead to construct the history-pointers, and (ii) space overhead for storing these new pointers. To minimize execution overhead, we can potentially update the history-pointers less frequently, depending on how rapidly the RDS structure changes. In one extreme, if the RDS never changes, we can set the history-pointers just once. The problem with space overhead is that it potentially worsens the caching behavior. The desire to eliminate this space overhead altogether is the motivation for our next prefetching scheme.

C. Data-Linearization Prefetching

The idea behind data-linearization prefetching [8] is to map heap-allocated nodes that are likely to be accessed close together in time into contiguous memory locations. With this mapping, one can easily generate prefetch addresses and launch them early enough. Another advantage of this scheme is that it improves spatial locality. The major challenge, however, is how and when to generate this data layout. In theory, one could dynamically remap the data even after the RDS has been initially constructed, but doing so may result in large runtime overheads and may also violate program semantics. Instead, the easiest time to map the nodes is at creation time, which is appropriate if either the creation order already matches the traversal order, or if it can be safely reordered to do so. Since dynamic remapping is expensive (or impossible), this scheme obviously works best if the structure of the RDS changes only slowly (or not at all). If the RDS does change radically, the program will still behave correctly, but prefetching will not improve performance.

III. IMPLEMENTATION OF GREEDY PREFETCHING

Of the three schemes that we propose, greedy prefetching is perhaps the most widely applicable, since it does not rely on traversal history information and it requires no additional storage or computation to construct prefetch addresses. For these reasons, we have implemented a version of greedy prefetching within the SUIF compiler [7], and we will simulate the other two algorithms by hand.
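To make one of those hand-applied schemes concrete, here is a minimal C sketch of the history-pointer update from Section II-B for a singly-linked list, assuming a prefetch distance of d = 3 (the node layout, names, and the use of GCC's __builtin_prefetch are illustrative assumptions, not the paper's actual code):

```c
#include <stddef.h>

#define D 3  /* prefetch distance d */

typedef struct node {
    struct node *next;     /* existing link */
    struct node *history;  /* new history-pointer: the node visited
                            * D nodes after this one on a recent pass;
                            * assumed NULL-initialized at allocation */
    int data;
} node_t;

static void process(node_t *n) { (void)n; }  /* per-node work, elided */

void traverse(node_t *head)
{
    node_t *fifo[D] = { NULL };  /* the last D nodes visited */
    int idx = 0;

    for (node_t *n = head; n != NULL; n = n->next) {
        /* Prefetch d nodes ahead using the address recorded on a
         * previous traversal (NULL everywhere on the first pass). */
        if (n->history)
            __builtin_prefetch(n->history);

        /* The slot about to be overwritten holds the node visited
         * d nodes ago; point its history-pointer at the current node. */
        if (fifo[idx])
            fifo[idx]->history = n;
        fifo[idx] = n;
        idx = (idx + 1) % D;

        process(n);
    }
}
```

On the first traversal every history-pointer is NULL, so no prefetches are issued; after one complete pass the pointers are set, and subsequent traversals prefetch d nodes ahead as described in Section II-B.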
C. Data-Linearization Prefetching

The idea behind *data-linearization prefetching* [8] is to map heap-allocated nodes that are likely to be accessed close together in time into contiguous memory locations. With this mapping, one can easily generate prefetch addresses without pointer dereferences and launch the prefetches early enough. Another advantage of this scheme is that it improves spatial locality. The major challenge, however, is how and when we can generate this data layout. In theory, one could dynamically remap the data even after the RDS has been initially constructed, but doing so may result in large runtime overheads and may also violate program semantics. Instead, the easiest time to map the nodes is at creation time, which is appropriate if either the creation order already matches the traversal order, or if it can be safely reordered to do so. Since dynamic remapping is expensive (or impossible), this scheme obviously works best if the structure of the RDS changes only slowly (or not at all). If the RDS does change radically, the program will still behave correctly, but prefetching will not improve performance.

III. IMPLEMENTATION OF GREEDY PREFETCHING

Of the three schemes that we propose, greedy prefetching is perhaps the most widely applicable since it does not rely on traversal history information, and it requires no additional storage or computation to construct prefetch addresses. For these reasons, we have implemented a version of greedy prefetching within the SUIF compiler [7], and we simulate the other two schemes by hand. Our implementation consists of an analysis phase to recognize RDS accesses, and a scheduling phase to insert prefetches.

A. Analysis: Recognizing RDS Accesses

To recognize RDS accesses, the compiler uses both type declaration information to recognize which data objects are RDSs, and control structure information to recognize when these objects are being traversed. An RDS type is a record type \( r \) containing at least one pointer that points either directly or indirectly to a record type \( s \). (Note that \( r \) and \( s \) are not restricted to be the same type, since RDSs may be comprised of heterogeneous nodes.) For example, the type declarations in Figure 2(a) and Figure 2(b) would be recognized as RDS types, whereas Figure 2(c) would not.

After discovering data structures with the appropriate types, the compiler then looks for control structures that are used to traverse the RDSs. In particular, the compiler looks for loops or recursive procedure calls such that during each new loop iteration or procedure invocation, a pointer \( p \) to an RDS is assigned a value resulting from a dereference of \( p \)—we refer to this as a *recurrent pointer update*. This heuristic corresponds to how RDS codes are typically written. To detect recurrent pointer updates, the compiler propagates pointer values using a simplified (but less precise) version of earlier pointer analysis algorithms [9], [10]. Figure 3 shows some example program fragments that our compiler treats as RDS accesses. In Figure 3(a), `l` is updated to `l->next->next` inside the while-loop. In Figure 3(b), `n` is assigned the result of the function call `g(n)` inside the for-loop. (Since our implementation does not perform interprocedural analysis, it assumes that `g(n)` results in a value `n->...->next`.) In Figure 3(c), two dereferences of the function argument `t` are passed as the parameters to two recursive calls. Figure 3(d) is similar to Figure 3(c), except that a record (rather than a pointer) is passed as the function argument.

Ideally, the next step would be to analyze data locality across RDS nodes to eliminate unnecessary prefetches. Although we have not automated this step in our compiler, we evaluated its potential benefits in an earlier study [8].

B. Scheduling Prefetches

Once RDS accesses have been recognized, the compiler inserts greedy prefetches as follows. At the point where an RDS object is being traversed—i.e., where the recurrent pointer update occurs—the compiler inserts prefetches of all pointers within this object that point to RDS-type objects, at the earliest points where these addresses are available within the surrounding loop or procedure body. The availability of prefetch addresses is computed by propagating the earliest generation points of pointer values along with the values themselves. Two examples of greedy prefetch scheduling are shown in Figure 4. Further details of our implementation can be found in Luk's thesis [11].
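In the spirit of the examples in Figure 4, here is a sketch (ours) of the simplest case, a while-loop list traversal. The recurrent pointer update is `l = l->next`, and since `l->next` is available as soon as `l` is, the greedy prefetch is hoisted to the top of the loop body so that the miss on the next node overlaps with this iteration's work:

```c
struct list { struct list *next; int data; };

extern void process(int data);

void scan(struct list *l)
{
    while (l != NULL) {
        __builtin_prefetch(l->next);  /* earliest point where the next  */
                                      /* node's address is available    */
        process(l->data);             /* work that overlaps the miss    */
        l = l->next;                  /* the recurrent pointer update   */
    }
}
```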
IV. PREFETCHING RDSs ON UNIPROCESSORS

In this section, we quantify the impact of our prefetching schemes on uniprocessor performance. Later, in Section V, we will turn our attention to multiprocessor systems.

A. Experimental Framework

We performed detailed cycle-by-cycle simulations of the entire Olden benchmark suite [12] on a dynamically-scheduled, superscalar processor similar to the MIPS R10000 [5]. The Olden benchmark suite contains ten pointer-based applications written in C, which are briefly summarized in Table I. The rightmost column in Table I shows the amount of memory dynamically allocated to RDS nodes. Our simulation model varies slightly from the actual MIPS R10000 (e.g., we model two memory units, and we assume that all functional units are fully-pipelined), but we do model the rich details of the processor including the pipeline, register renaming, the reorder buffer, branch prediction, instruction fetching, branching penalties, the memory hierarchy (including contention), etc. Table II shows the parameters of our model.

TABLE II
UNIPROCESSOR SIMULATION PARAMETERS.

| Pipeline Parameters | |
|---|---|
| Issue Width | 4 |
| Functional Units | 2 Int, 2 FP, 2 Memory, 1 Branch |
| Reorder Buffer Size | 32 |
| Integer Multiply | 12 cycles |
| Integer Divide | 66 cycles |
| All Other Integer | 1 cycle |
| FP Divide | 15 cycles |
| FP Square Root | 20 cycles |
| All Other FP | 2 cycles |
| Branch Prediction Scheme | 2-bit Counters |

| Memory Parameters | |
|---|---|
| Primary Inst and Data Caches | 16KB, 2-way set-associative |
| Unified Secondary Cache | 512KB, 2-way set-associative |
| Line Size | 32B |
| Primary-to-Secondary Miss Latency | 12 cycles |
| Primary-to-Memory Miss Latency | 76 cycles |
| Data Cache Miss Handlers | 8 |
| Data Cache Banks | 2 |
| Data Cache Fill Time (Requires Exclusive Access) | 4 cycles |
| Main Memory Bandwidth | 1 access per 20 cycles |

We use pixie [13] to instrument the optimized MIPS object files produced by the compiler, and pipe the resulting trace into our simulator. To avoid misses during the initialization of dynamically-allocated objects, we used a modified version of the IRIX mallopt routine [14] whereby we prefetch allocated objects before they are initialized. Determining these prefetch addresses is straightforward, since objects of the same size are typically allocated from contiguous memory. This optimization alone led to over twofold speedups relative to using malloc for the majority of the applications—particularly those that frequently allocate small objects.
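The modified allocator itself is not shown in the text; the following sketch (ours) gives one plausible reading of the idea, assuming a hypothetical bump-pointer size-class allocator `pool_bump()` in which consecutive objects of one size are adjacent, so the address of the next object is known before it is requested:

```c
#include <stddef.h>

void *pool_bump(size_t sz);   /* hypothetical size-class bump allocator */

void *alloc_node(size_t sz)
{
    void *p = pool_bump(sz);
    /* Objects are initialized (written) right after allocation, so
       prefetch the *next* object's lines for writing now; by the time
       it is allocated and initialized, its lines should be present.
       (Second argument 1 = prepare for a write.)                      */
    __builtin_prefetch((char *)p + sz, 1);
    return p;
}
```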
B. Performance of Greedy Prefetching

Figure 5 shows the results of our uniprocessor experiments. The overall performance improvement offered by greedy prefetching is shown in Figure 5(a), where the two bars correspond to the cases without prefetching (N) and with greedy prefetching (G). These bars represent execution time normalized to the case without prefetching, and they are broken down into four categories explaining what happened during all potential graduation slots. (The number of graduation slots is the issue width—4 in this case—multiplied by the number of cycles.) The bottom section (busy) is the number of slots when instructions actually graduate, the top two sections (load stall and store stall) are any non-graduating slots that are immediately caused by the oldest instruction suffering either a load or store miss, and the inst stall section is all other slots where instructions do not graduate. Note that the load stall and store stall sections are only a first-order approximation of the performance loss due to cache misses, since these delays also exacerbate subsequent data dependence stalls.

As we see in Figure 5(a), half of the applications enjoy a speedup ranging from 4% to 45%, and the other half are within 2% of their original performance. For the applications with the largest memory stall penalties—i.e., health, perimeter, and treeadd—much of this stall time has been eliminated. In the cases of bisort and mst, prefetching overhead more than offset the reduction in memory stalls (thus resulting in a slight performance degradation), but this was not a problem in the other eight applications.

To understand the performance results in greater depth, Figure 5(b) breaks down the original primary cache misses into three categories: (i) those that are prefetched and subsequently hit in the primary cache (pf hit), (ii) those that are prefetched but remain primary misses (pf miss), and (iii) those that are not prefetched (nopf miss). The sum of the pf hit and pf miss cases is also known as the coverage factor, which ideally should be 100%. For em3d, power, and voronoi, the coverage factor is quite low (under 20%) because most of their misses are caused by array or scalar references—hence prefetching RDSs yields little improvement. In all other cases, the coverage factor is above 60%, and in four cases we achieve nearly perfect coverage. If the pf miss category is large, this indicates that prefetches were not scheduled effectively—either they were issued too late to hide the latency, or else they were issued too early and the prefetched data was displaced from the cache before it could be referenced. This category is most prominent in mst, where the compiler is unable to prefetch early enough during the traversal of very short linked lists within a hash table. Since greedy prefetching offers little control over the prefetching distance, it is not surprising that scheduling is imperfect—in fact, it is encouraging that the pf miss fractions are this low.

To help evaluate the costs of prefetching, Figure 5(c) shows the fraction of dynamic prefetches that are unnecessary because the data is found in the primary cache. For each application, we show four different bars indicating the total (dynamic) unnecessary prefetches caused by static prefetch instructions with hit rates up to a given threshold. Hence the bar labeled "100" corresponds to all unnecessary prefetches, whereas the bar labeled "99" shows the total unnecessary prefetches if we exclude prefetch instructions with hit rates over 99%, etc. This breakdown indicates the potential for reducing overhead by eliminating static prefetch instructions that are clearly of little value. For example, eliminating prefetches with hit rates over 99% would eliminate over half of the unnecessary prefetches in perimeter, thus decreasing overhead significantly. In contrast, reducing overhead with a flat distribution (e.g., bh) is more difficult, since prefetches that sometimes hit also miss at least 10% of the time; therefore, eliminating them may sacrifice some latency-hiding benefit. We found that eliminating prefetches with hit rates above 95% improves performance by 1-7% for these applications [8].

Finally, we measured the impact of greedy prefetching on memory bandwidth consumption.
We observe that on average, greedy prefetching increases the traffic between the primary and secondary caches by 12.7%, and the traffic between the secondary cache and main memory by 7.8%. In our experiments, this has almost no impact on performance. Hence greedy prefetching does not appear to be suffering from memory bandwidth problems.

In summary, we have seen that automatic compiler-inserted prefetching can result in significant speedups for uniprocessor applications containing RDSs. We now investigate whether the two more sophisticated prefetching schemes can offer even larger performance gains.

C. Performance of History-Pointer Prefetching and Data-Linearization Prefetching

We applied history-pointer prefetching and data-linearization prefetching by hand to several of our applications. History-pointer prefetching is applicable to health because the list structures that are accessed by a key procedure remain unchanged across the more than ten thousand times that it is called. As a result, history-pointer prefetching achieves a 40% speedup over greedy prefetching through better miss coverage and fewer unnecessary prefetches. Although history-pointer prefetching issues fewer unnecessary prefetches than greedy prefetching, it has significantly higher instruction overhead due to the extra work required to maintain the history-pointers.

Data-linearization prefetching is applicable to both perimeter and treeadd, because the creation order is identical to the major subsequent traversal order in both cases. As a result, data linearization does not require changing the data layout in these cases (hence spatial locality is unaffected). By reducing the number of unnecessary prefetches (and hence the prefetching overhead) while maintaining good coverage factors, data-linearization prefetching results in speedups of 0% and 18% over greedy prefetching for perimeter and treeadd, respectively. Overall, we see that both schemes can potentially offer significant improvements over greedy prefetching when applicable.

V. PREFETCHING RDSs ON MULTIPROCESSORS

Having observed the benefits of automatic prefetching of RDSs on uniprocessors, we now investigate whether the compiler can also accelerate pointer-based applications running on multiprocessors. In earlier studies, Mowry demonstrated that the compiler can successfully prefetch parallel matrix-based codes [2], [15], but the compiler used in those studies did not attempt to prefetch pointer-based access patterns. However, through hand-inserted prefetching, Mowry was able to achieve a significant speedup in BARNES [15], which is a pointer-intensive shared-memory parallel application from the SPLASH suite [16].

BARNES performs a hierarchical n-body simulation of the evolution of galaxies. The main computation consists of a depth-first traversal of an octree structure to compute the gravitational force exerted on a given body by all other bodies in the tree. This is repeated for each body in the system, and the bodies are statically assigned to processors for the duration of each time step. Cache misses occur whenever a processor visits a part of the octree that is not already in its cache, due either to replacements or to communication. To insert prefetches by hand, Mowry used a strategy similar to greedy prefetching: upon first arriving at a node, he prefetched all immediate children before descending depth-first into the first child.
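In code, that hand strategy looks much like the greedy tree example of Section II-A, generalized to an octree node's array of children. This sketch is ours, and the `cell` layout is an assumption rather than BARNES's actual declarations:

```c
#define NCHILD 8   /* octree fanout */

typedef struct cell {
    struct cell *child[NCHILD];
    /* ... mass, center-of-mass, and body data would live here ... */
} cell;

void walk(cell *c)
{
    for (int i = 0; i < NCHILD; i++)       /* on first arrival, launch  */
        if (c->child[i] != NULL)           /* all children...           */
            __builtin_prefetch(c->child[i]);
    for (int i = 0; i < NCHILD; i++)       /* ...then descend depth-    */
        if (c->child[i] != NULL)           /* first into each child     */
            walk(c->child[i]);
}
```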
To evaluate the performance of our compiler-based implementation of greedy prefetching on a multiprocessor, we compared it with hand-inserted prefetching for BARNES. For the sake of comparison, we adopted the same simulation environment used in Mowry's earlier study [15], which we now briefly summarize. We simulated a cache-coherent, shared-memory multiprocessor that resembles the DASH multiprocessor [17]. Our simulated machine consists of 16 processors, each of which has two levels of direct-mapped caches, both using 16-byte lines. Table III shows the latency for servicing an access to different levels of the memory hierarchy, in the absence of contention (our simulations did model contention, however). To make the simulations feasible, we scaled down both the problem size and the cache sizes accordingly (we ran 8192 bodies through 3 time steps on an 8K/64K cache hierarchy), as was done (and explained in more detail) in the original study [2].

TABLE III
MEMORY ACCESS LATENCIES IN THE ABSENCE OF CONTENTION.

| Destination of Access | Read | Write |
|---|---|---|
| Primary Cache | 1 cycle | 1 cycle |
| Secondary Cache | 15 cycles | 4 cycles |
| Local Node | 20 cycles | 17 cycles |
| Remote Node | 101 cycles | 89 cycles |
| Dirty Remote, Remote Home | 132 cycles | 120 cycles |

Figure 6 shows the impact of both compiler-inserted greedy prefetching (G) and hand-inserted prefetching (H) on BARNES. The execution times in Figure 6(a) are broken down as follows: the bottom section is the amount of time spent executing instructions (including any prefetch instruction overhead), and the middle and top sections are synchronization and memory stall times, respectively. As we see in Figure 6(a), the compiler achieves nearly identical performance to hand-inserted prefetching. The compiler prefetches 90% of the original cache misses, with only 15% of the prefetches being unnecessary, as we see in Figures 6(b) and 6(c), respectively. Of the prefetched misses, the latency was fully hidden in half of the cases (pf hit), and partially hidden in the other cases (pf miss). By eliminating roughly half of the original memory stall time, the compiler was able to achieve a 16% speedup.

[Figure 6: (a) execution time; (b) coverage factor; (c) unnecessary prefetches.]

The compiler's greedy strategy for inserting prefetches is quite similar to what was done by hand, with the following exception. In an effort to minimize unnecessary prefetches, the compiler's default strategy is to prefetch only the first 64 bytes within a given RDS node. In the case of BARNES, the nodes are longer than 64 bytes, and we discovered that hand-inserted prefetching achieves better performance when we prefetch the entire nodes. In this case, the improved miss coverage of prefetching the entire nodes is worth the additional unnecessary prefetches, thereby resulting in a 1% speedup over compiler-inserted prefetching. Overall, however, we are quite pleased that the compiler was able to do this well, nearly matching the best performance that we could achieve by hand.
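The 64-byte default versus whole-node prefetching comes down to how many line-sized prefetches are issued per node; a sketch (ours), using the simulated machine's 16-byte lines:

```c
/* Issue one prefetch per 16-byte cache line, covering the first
   `bytes` bytes of the node at address n.                        */
static void prefetch_span(const void *n, int bytes)
{
    for (int off = 0; off < bytes; off += 16)
        __builtin_prefetch((const char *)n + off);
}

/* compiler default:   prefetch_span(node, 64);
   hand-tuned BARNES:  prefetch_span(node, sizeof(*node));  covers it all */
```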
VI. RELATED WORK

Although prefetching has been studied extensively for array-based numeric codes [6], [18], relatively little work has been done on non-numeric applications. Chen et al. [19] used global instruction scheduling techniques to move address generation back as early as possible to hide a small cache miss latency (10 cycles), and found mixed results. In contrast, our algorithms focus only on RDS accesses, and can issue prefetches much earlier (across procedure and loop iteration boundaries) by overcoming the pointer-chasing problem. Zhang and Torrellas [20] proposed a hardware-assisted scheme for prefetching irregular applications in shared-memory multiprocessors. Under their scheme, programs are annotated to bind together groups of data (e.g., fields in a record or two records linked by a pointer), which are then prefetched under hardware control. Compared with our compiler-based approach, their scheme has two shortcomings: (i) annotations are inserted manually, and (ii) their hardware extensions are not likely to be applicable in uniprocessors. Joseph and Grunwald [21] proposed a hardware-based Markov prefetching scheme which prefetches multiple predicted addresses upon a primary cache miss. While Markov prefetching can potentially handle chaotic miss patterns, it requires considerably more hardware support and has less flexibility in selecting what to prefetch and in controlling the prefetch distance than our compiler-based schemes.

To our knowledge, the only compiler-based pointer prefetching scheme in the literature is the SPAID scheme proposed by Lipasti et al. [22]. Based on the observation that procedures are likely to dereference any pointers passed to them as arguments, SPAID inserts prefetches for the objects pointed to by these pointer arguments at the call sites. Therefore this scheme is only effective if the interval between the start of a procedure call and its dereference of a pointer is comparable to the cache miss latency. In an earlier study [8], we found that greedy prefetching offers substantially better performance than SPAID by hiding more latency while paying less overhead.

VII. CONCLUSIONS

While automatic compiler-inserted prefetching has shown considerable success in hiding the memory latency of array-based codes, the compiler technology for successfully prefetching pointer-based data structures has thus far been lacking. In this paper, we propose three prefetching schemes which overcome the pointer-chasing problem, we automate the most widely applicable scheme (greedy prefetching) in the compiler, and we evaluate its performance on both a modern superscalar uniprocessor (similar to the MIPS R10000) and on a large-scale shared-memory multiprocessor. Our uniprocessor experiments show that automatic compiler-inserted prefetching can accelerate pointer-based applications by as much as 45%. In addition, the more sophisticated algorithms (which we currently simulate by hand) can offer even larger performance gains. Our multiprocessor experiments demonstrate that the compiler can potentially provide equivalent performance to hand-inserted prefetching even on parallel applications. These encouraging results suggest that the latency problem for pointer-based codes may be addressed largely through the prefetch instructions that already exist in many recent microprocessors.

ACKNOWLEDGMENTS

This work is supported by a grant from IBM Canada's Centre for Advanced Studies. Chi-Keung Luk is partially supported by a Canadian Commonwealth Fellowship. Todd C. Mowry is partially supported by a Faculty Development Award from IBM.

REFERENCES
Chi-Keung Luk is a Ph.D. candidate in the Department of Computer Science at the University of Toronto, and is currently a visiting scholar at Carnegie Mellon University. He received his B.Sc. (First Class Honors) and M.Phil. degrees in computer science, both from The Chinese University of Hong Kong. His research interests are computer architecture, compiler optimizations, and programming languages, with a focus on the memory performance of non-numeric applications. He has been awarded a Canadian Commonwealth Fellowship, an IBM CAS Fellowship, and a Croucher Foundation Fellowship. Further information about his current research activities can be found at http://www.cs.cmu.edu/~luk.

Todd C. Mowry received his B.S.E.E. from the University of Virginia in 1988, and his M.S.E.E. and Ph.D. from Stanford University in 1989 and 1994, respectively. From 1994 through 1997, he was an assistant professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto. Since 1997, he has been an associate professor in the Computer Science Department at Carnegie Mellon University. Dr. Mowry's research interests span architecture, compilers, and operating systems. Most recently, he has been focusing on automatically tolerating the latency of accessing and communicating large data, and on automatically extracting thread-level parallelism from non-numeric applications. Further information about his current research activities can be found at http://www.cs.cmu.edu/~tcm.
{"Source-Url": "http://www.ecst.csuchico.edu/~bjuliano/csci380/Papers/cLuk1999prefetching.pdf", "len_cl100k_base": 6861, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 41399, "total-output-tokens": 8451, "length": "2e12", "weborganizer": {"__label__adult": 0.000579833984375, "__label__art_design": 0.0006723403930664062, "__label__crime_law": 0.0005598068237304688, "__label__education_jobs": 0.0010013580322265625, "__label__entertainment": 0.00012969970703125, "__label__fashion_beauty": 0.0003020763397216797, "__label__finance_business": 0.0003170967102050781, "__label__food_dining": 0.0005364418029785156, "__label__games": 0.0011386871337890625, "__label__hardware": 0.0089111328125, "__label__health": 0.0010833740234375, "__label__history": 0.0004992485046386719, "__label__home_hobbies": 0.00019109249114990232, "__label__industrial": 0.0010318756103515625, "__label__literature": 0.00033402442932128906, "__label__politics": 0.00044035911560058594, "__label__religion": 0.0009560585021972656, "__label__science_tech": 0.2005615234375, "__label__social_life": 8.893013000488281e-05, "__label__software": 0.006866455078125, "__label__software_dev": 0.771484375, "__label__sports_fitness": 0.0005311965942382812, "__label__transportation": 0.0013341903686523438, "__label__travel": 0.0003159046173095703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36654, 0.02478]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36654, 0.38866]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36654, 0.91399]], "google_gemma-3-12b-it_contains_pii": [[0, 5333, false], [5333, 11090, null], [11090, 14184, null], [14184, 20445, null], [20445, 23850, null], [23850, 29399, null], [29399, 36654, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5333, true], [5333, 11090, null], [11090, 14184, null], [14184, 20445, null], [20445, 23850, null], [23850, 29399, null], [29399, 36654, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36654, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36654, null]], "pdf_page_numbers": [[0, 5333, 1], [5333, 11090, 2], [11090, 14184, 3], [14184, 20445, 4], [20445, 23850, 5], [23850, 29399, 6], [29399, 36654, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36654, 0.23577]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
26ac2a38ed00c2819129dcff3c75938d25529f6d
[REMOVED]
{"Source-Url": "http://dl.ifip.org/db/conf/dsom/dsom2009/WickboldtBLACBGGTB09.pdf", "len_cl100k_base": 8091, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 36276, "total-output-tokens": 9657, "length": "2e12", "weborganizer": {"__label__adult": 0.00037288665771484375, "__label__art_design": 0.0007295608520507812, "__label__crime_law": 0.0008363723754882812, "__label__education_jobs": 0.0027141571044921875, "__label__entertainment": 0.00015687942504882812, "__label__fashion_beauty": 0.00021898746490478516, "__label__finance_business": 0.00875091552734375, "__label__food_dining": 0.0004253387451171875, "__label__games": 0.0008306503295898438, "__label__hardware": 0.0019626617431640625, "__label__health": 0.0011959075927734375, "__label__history": 0.000400543212890625, "__label__home_hobbies": 0.00022351741790771484, "__label__industrial": 0.0014181137084960938, "__label__literature": 0.0004091262817382813, "__label__politics": 0.00038552284240722656, "__label__religion": 0.00039768218994140625, "__label__science_tech": 0.31640625, "__label__social_life": 0.00016021728515625, "__label__software": 0.10150146484375, "__label__software_dev": 0.5595703125, "__label__sports_fitness": 0.00023543834686279297, "__label__transportation": 0.0005841255187988281, "__label__travel": 0.0002834796905517578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41296, 0.02325]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41296, 0.4454]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41296, 0.92]], "google_gemma-3-12b-it_contains_pii": [[0, 2350, false], [2350, 5846, null], [5846, 9473, null], [9473, 11774, null], [11774, 15466, null], [15466, 19060, null], [19060, 22052, null], [22052, 24361, null], [24361, 27538, null], [27538, 30933, null], [30933, 33190, null], [33190, 36150, null], [36150, 39650, null], [39650, 41296, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2350, true], [2350, 5846, null], [5846, 9473, null], [9473, 11774, null], [11774, 15466, null], [15466, 19060, null], [19060, 22052, null], [22052, 24361, null], [24361, 27538, null], [27538, 30933, null], [30933, 33190, null], [33190, 36150, null], [36150, 39650, null], [39650, 41296, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41296, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41296, null]], "pdf_page_numbers": [[0, 2350, 1], [2350, 5846, 2], [5846, 9473, 3], [9473, 11774, 4], [11774, 15466, 5], [15466, 19060, 6], [19060, 22052, 7], [22052, 24361, 8], [24361, 27538, 9], [27538, 30933, 10], [30933, 33190, 11], [33190, 36150, 12], [36150, 39650, 13], [39650, 41296, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41296, 0.11602]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
232db8c0969ef0c572be93f54ee43e552a2ae762
The Semantics of Gringo and Proving Strong Equivalence

Amelia Harrison
Department of Computer Science
The University of Texas at Austin
2317 Speedway, 2.302
Austin, Texas 78712
Internal Mail Code: D9500
E-mail: ameliaj@cs.utexas.edu

submitted 9 April 2013; revised TBD; accepted TBD

Abstract

Manuals written by the designers of answer set solvers usually describe the semantics of the input languages of their systems using examples and informal comments that appeal to the user's intuition, without references to any precise semantics. We would like to describe a precise semantics for a large subset of the input language of the solver GRINGO, based on representing GRINGO rules as infinitary propositional formulas. To prove strong equivalence of programs in this language, we need a system of natural deduction for infinitary formulas, similar to intuitionistic propositional logic.

KEYWORDS: answer set programming, strong equivalence, semantics of aggregates.

1 Introduction

Answer set programming (ASP) is a powerful declarative paradigm for the design and implementation of knowledge-intensive applications. It has been used in many areas of science and technology (Lifschitz 2008; Brewka et al. 2011). Its success is largely due to the expressivity of its modeling language and its efficient computation methods.

The first ASP solvers were created more than ten years ago. One of their attractive features was that their input language had a simple, mathematically elegant semantics, based on the concept of a stable model (Gelfond and Lifschitz 1988). Unfortunately, this cannot be said about the best, most powerful and efficient ASP solvers available today. Many constructs added over the years to the language of ASP because programmers found them useful cannot be explained in terms of stable models in the sense of the original definition of this concept and its straightforward generalizations. Consider for instance a rule with a conditional literal in the body:

\[ p \leftarrow q : r. \]

It can be viewed as the nested implication

\[ (r \rightarrow q) \rightarrow p, \]

written in logic programming notation. Stable models for such formulas can be defined using equilibrium logic (Pearce 1997) or in terms of reducts in the sense of (Ferraris 2005). However, both of these definitions are quite different from the original definition of the stable model (Gelfond and Lifschitz 1988).

When we look at manuals written by the designers of ASP solvers\footnote{See, for instance, http://sourceforge.net/projects/potassco/files/potassco_guide/ and http://www.dlvsystem.com/dlvsystem/html/DLV_Jaer_Manual.html.}, we see that they explain the meaning of ASP programs using examples and informal comments that appeal to the user's intuition, without references to any precise semantics. Without such a semantics, it is impossible to put the study of many important issues, such as the correctness of ASP solvers, programs, and optimization methods, on a firm scientific foundation. The absence of a precise semantics also makes it difficult to verify the correctness of ASP-based implementations of action languages and the relationship between input languages of different ASP solvers.

This note is a preliminary report on our work in the direction of describing a precise semantics for a large subset of the input language of the solver GRINGO. Our approach is based on representing GRINGO rules as infinitary propositional formulas (Truszczynski 2012).
We say that ASP programs \( A \) and \( B \) are strongly equivalent if for any set \( R \) of rules, the programs obtained by adding \( R \) to \( A \) and by adding \( R \) to \( B \) have the same stable models (Lifschitz et al. 2001). We would like to develop methods for proving strong equivalence of GRINGO programs. To this end, we define and study a system of natural deduction for infinitary formulas, similar to intuitionistic propositional logic.

2 Background: Stable Models of Infinitary Formulas

One of the reasons why infinitary formulas are an attractive formalism for defining the semantics of ASP languages is that they can be used to describe the semantics of aggregates. The semantics of aggregates proposed in (Ferraris 2005, Section 4.1) treats a ground aggregate as shorthand for a propositional formula. An aggregate with variables has to be grounded before that semantics can be applied to it. For instance, to explain the precise meaning of the expression \( \{p(X)\} \) ("there exists at least one object with the property \( p \)") in the body of an ASP rule, we first rewrite it as

\[ \{p(t_1), \ldots, p(t_n)\}, \]

where \( t_1, \ldots, t_n \) are all ground terms in the language of the program, and then turn it into the propositional formula

\[ p(t_1) \lor \cdots \lor p(t_n). \tag{1} \]

But this description of the meaning of \( \{p(X)\} \) implicitly assumes that the Herbrand universe of the program is finite. If the program contains function symbols, then an infinite disjunction has to be used instead of (1).\footnote{This is not to say that there is anything exotic or noncomputable about ASP programs containing both aggregates and function symbols, however. For instance, the program \[ p(f(a)) \qquad\qquad q \leftarrow \{p(X)\} \] has a simple intuitive meaning, and its stable model \( \{p(f(a)), q\} \) can be computed by existing solvers.}\footnote{References to grounding in other theories of aggregates suffer from the same problem. For instance, the definition of a ground instance of a rule in Section 2.2 of the ASP Core document (https://www.mat.unical.it/aspcomp2013/files/ASP-CORE-2.0.pdf, Version 2.02) talks about replacing the expression \( \{e_1; \ldots; e_n\} \) in a rule with a set denoted by \( \text{inst}(\{e_1; \ldots; e_n\}) \). But that set can be infinite.}

The definitions of infinitary formulas and their stable models given below are equivalent to the definitions proposed in (Truszczynski 2012). Let \( \sigma \) be a propositional signature, that is, a set of propositional atoms. The sets \( \mathcal{F}_0^\sigma, \mathcal{F}_1^\sigma, \ldots \) are defined as follows:

- \( \mathcal{F}_0^\sigma = \sigma \cup \{ \bot \} \);
- \( \mathcal{F}_{i+1}^\sigma \) is obtained from \( \mathcal{F}_i^\sigma \) by adding the expressions \( \mathcal{H}^\land \) and \( \mathcal{H}^\lor \) for all subsets \( \mathcal{H} \) of \( \mathcal{F}_i^\sigma \), and the expressions \( F \rightarrow G \) for all \( F, G \in \mathcal{F}_i^\sigma \).

The elements of \( \bigcup_{i=0}^\infty \mathcal{F}_i^\sigma \) are called (infinitary) formulas over \( \sigma \). The definition of satisfaction familiar from classical propositional logic is extended to infinitary propositional formulas in a natural way.

The reduct \( F^I \) of a formula \( F \) with respect to an interpretation \( I \) is defined as follows:

- \( \bot^I = \bot \).
- For \( p \in \sigma \), \( p^I = \bot \) if \( I \not\models p \); otherwise \( p^I = p \).
- \( (\mathcal{H}^\land)^I = \{ G^I \mid G \in \mathcal{H} \}^\land \).
- \( (\mathcal{H}^\lor)^I = \{ G^I \mid G \in \mathcal{H} \}^\lor \).
- \( (F \rightarrow G)^I = \bot \) if \( I \not\models F \rightarrow G \); otherwise \( (F \rightarrow G)^I = F^I \rightarrow G^I \).

The reduct \( \mathcal{H}^I \) of a set \( \mathcal{H} \) of formulas is the set consisting of the reducts of the elements of \( \mathcal{H} \). An interpretation \( I \) is a stable model of a set \( \mathcal{H} \) of formulas if it is minimal with respect to set inclusion among the interpretations satisfying \( \mathcal{H}^I \).
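Here is a small worked example of these definitions (ours, not from the original note). Take \( \sigma = \{p, q\} \) and the singleton set containing \( (p \rightarrow \bot) \rightarrow q \), the propositional rendering of the rule \( q \leftarrow \mathit{not}\ p \):

\[
\begin{aligned}
&\text{For } I = \{q\}:\quad p^I = \bot,\ \ (p \rightarrow \bot)^I = \bot \rightarrow \bot,\ \ \bigl((p \rightarrow \bot) \rightarrow q\bigr)^I = (\bot \rightarrow \bot) \rightarrow q.\\
&\text{For } I' = \{p, q\}:\quad (p \rightarrow \bot)^{I'} = \bot,\ \ \bigl((p \rightarrow \bot) \rightarrow q\bigr)^{I'} = \bot \rightarrow q.
\end{aligned}
\]

Since every interpretation satisfies \( \bot \rightarrow \bot \), the interpretations satisfying the first reduct are exactly those containing \( q \), so \( \{q\} \) is minimal among them and hence stable. The second reduct, \( \bot \rightarrow q \), is satisfied even by the empty interpretation, so \( \{p, q\} \) is not minimal and is not a stable model.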
3 Defining Semantics for Gringo Programs

In this note, we use Gringo to denote the input language of the solver GRINGO. The basis of this language is the language of logic programs with negation as failure, with the syntax and semantics defined in (Gelfond and Lifschitz 1988). In (Harrison et al. 2013b) we extend that semantics to a larger subset of Gringo. Specifically, we cover arithmetical functions and comparisons, conditions, and aggregates. Our proposal is based on the informal and sometimes incomplete description of the language in the User's Guide, on the discussion of ASP programming constructs in (Gebser et al. 2012), on experiments with GRINGO, and on the clarifications provided in response to our questions by its designers.

The key element of the semantics is a translation \( \tau \) from Gringo into the language of infinitary propositional formulas described above. Like grounding in the original definition of a stable model (Gelfond and Lifschitz 1988), this translation is modular, in the sense that it applies to the program rule by rule. The translation \( \tau \) is defined first for literals, then for conditional literals, then for aggregate expressions, and then for rules. For example, the result of applying \( \tau \) to the conditional literal

\[ \text{available}(X) : \text{person}(X), \]

where \( X \) is a local variable, is the conjunction of the formulas

\[ \text{person}(r) \rightarrow \text{available}(r) \]

over all ground terms \( r \). If the Herbrand universe of the program is infinite, then the result of applying \( \tau \) to a conditional literal will be an infinite conjunction.

A stable model of a Gringo program \( \Pi \) is a stable model, in the sense of the paper (Truszczynski 2012) reviewed in Section 2 above, of the set consisting of the formulas \( \tau R \) for all rules \( R \) of \( \Pi \).

Instead of infinitary propositional formulas, we could have used first-order formulas with generalized quantifiers.\(^4\) The advantage of propositional formulas as the target language is that properties of these formulas, and of their stable models, are better understood. We may be able to prove, for instance, that two Gringo programs have the same stable models by observing that their corresponding infinitary formulas are equivalent in a natural deduction system, as discussed below.

\(^4\) Stable models of formulas with generalized quantifiers are defined in (Lee and Meng 2012a; Lee and Meng 2012b; Lee and Meng 2012c).
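As a sketch of how \( \tau \) acts on a complete program (ours; we assume here that \( \tau \) maps a fact to the corresponding atom and reads the expression \( \{p(X)\} \) in a body as the disjunction from Section 2), consider the two-rule program from the footnote in Section 2:

\[
\tau\bigl(p(f(a))\bigr) = p(f(a)),
\qquad
\tau\bigl(q \leftarrow \{p(X)\}\bigr) = \Bigl(\,\bigvee_{t} p(t)\Bigr) \rightarrow q,
\]

where \( t \) ranges over the infinitely many ground terms \( a, f(a), f(f(a)), \ldots \). One can verify from the definitions in Section 2 that this set of formulas has the stable model \( \{p(f(a)), q\} \) mentioned there.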
4 Strong Equivalence of Infinitary Formulas

In (Harrison et al. 2013a) we define a basic infinitary system of natural deduction, analogous to an intuitionistic finite system of natural deduction. This system includes infinitary versions of the introduction and elimination rules for propositional connectives. For example, the rules

\[
(\land I)\ \ \frac{\Gamma \Rightarrow H \quad \text{for all } H \in \mathcal{H}}{\Gamma \Rightarrow \mathcal{H}^\land}
\qquad\qquad
(\land E)\ \ \frac{\Gamma \Rightarrow \mathcal{H}^\land}{\Gamma \Rightarrow H}\ \ (H \in \mathcal{H}),
\]

where \( \mathcal{H} \) is a set of infinitary formulas, serve as infinitary analogs of the conjunction introduction and elimination rules of a finite system of natural deduction.

**Theorem.** For any set \( \mathcal{H} \) of (infinitary) formulas, (a) if a formula \( F \) is provable in the basic system then \( \mathcal{H} \cup \{F\} \) has the same stable models as \( \mathcal{H} \); (b) if \( F \) is equivalent to \( G \) in the basic system then \( \mathcal{H} \cup \{F\} \) and \( \mathcal{H} \cup \{G\} \) have the same stable models.

This is a generalization of a well-known property of stable models of finite propositional formulas: intuitionistically equivalent formulas have the same stable models. The proof of this property presented in (Ferraris 2005), however, is based on ideas from equilibrium logic (Pearce 1997) and is very different from the proof for the infinitary case given in (Harrison et al. 2013a). This theorem is useful because infinitary formulas can be used to precisely define the semantics of aggregates in ASP when the Herbrand universe is infinite. The following examples demonstrate how the theory described in (Harrison et al. 2013a) can be applied to prove equivalences between programs involving aggregates.

**Example 1.** The rule

\[ p(Y) \leftarrow \text{card}\{X, Y : q(X, Y)\} \geq 1 \tag{2} \]

says, informally speaking, that we can conclude \( p(Y) \) once we have established that there exists at least one \( X \) such that \( q(X, Y) \). Replacing this rule with

\[ p(Y) \leftarrow q(X, Y) \tag{3} \]

within any program does not affect the set of stable models. To prove this claim, we need to calculate the result of applying \( \tau \) to rule (2) and rule (3). The result of applying \( \tau \) to (2) is

\[ \bigwedge_{t} \left( \bigvee_{u} q(u,t) \rightarrow p(t) \right), \]

where \( t \) and \( u \) range over all ground terms. On the other hand, the result of applying \( \tau \) to (3) is

\[ \bigwedge_{t,u} \left( q(u,t) \rightarrow p(t) \right). \]

These formulas are equivalent in the basic system.
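The key step in that equivalence is the intuitionistically valid interderivability (our sketch, using the quoted conjunction rules together with the basic system's infinitary disjunction rules and the usual implication rules):

\[
\Bigl(\,\bigvee_{u} A_u \Bigr) \rightarrow B
\qquad\text{is interderivable with}\qquad
\bigwedge_{u}\,\bigl(A_u \rightarrow B\bigr).
\]

Left to right: assume \( A_u \), infer \( \bigvee_u A_u \) by disjunction introduction, obtain \( B \) by implication elimination, and discharge the assumption; collecting these derivations with \( (\land I) \) gives the conjunction. Right to left: assume \( \bigvee_u A_u \) and reason by cases with disjunction elimination, using \( (\land E) \) to extract \( A_u \rightarrow B \) in each case. Applying this under \( \bigwedge_t \) yields the equivalence of the two formulas in Example 1.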
**Example 2.** The rule

\[ \text{order}(X,Y) \leftarrow p(X),\ p(Y),\ X < Y,\ \neg p(Z) : p(Z), X < Z, Z < Y \tag{4} \]

can be used for sorting. It can be replaced by either of the following two simpler rules within any program without changing that program's stable models:

\[ \text{order}(X,Y) \leftarrow p(X),\ p(Y),\ X < Y,\ \bot : p(Z), X < Z, Z < Y, \tag{5} \]

\[ \text{order}(X,Y) \leftarrow p(X),\ p(Y),\ X < Y,\ \neg p(Z) : X < Z, Z < Y. \tag{6} \]

If we wish to prove this claim for rule (5), for example, by the theorem stated above, it is sufficient to show that the result of applying \( \tau \) to (4) is equivalent to the result of applying \( \tau \) to (5) in the basic system. The result of applying \( \tau \) to (4) is the conjunction of the formulas

\[ p(i) \land p(j) \land i < j \land \bigwedge_{k} \bigl( p(k) \land i < k \land k < j \rightarrow \neg p(k) \bigr) \rightarrow \text{order}(i,j) \]

for all numerals \( i, j \). The result of applying \( \tau \) to (5) is the conjunction of the formulas

\[ p(i) \land p(j) \land i < j \land \bigwedge_{k} \bigl( p(k) \land i < k \land k < j \rightarrow \bot \bigr) \rightarrow \text{order}(i,j). \]

It is sufficient to observe that

\[ p(k) \land i < k \land k < j \rightarrow \neg p(k) \]

is intuitionistically equivalent to

\[ p(k) \land i < k \land k < j \rightarrow \bot. \]

The proof for rule (6) is similar. Rule (5), like rule (4), is safe; rule (6) is not.

**Example 3.** Consider the following rule from Example 3.7 of the User's Guide (see Footnote 1):

\[ \text{weekdays} \leftarrow \text{day}(X) : \text{day}(X), \neg \text{weekend}(X). \tag{7} \]

Using the theorem above, we can show that (7) is strongly equivalent to weekdays. In other words, replacing this rule with the fact weekdays within any program would not affect its stable models.

\(^5\) This rule was communicated to us by Roland Kaminski on October 21, 2012.

5 Conclusion

Strong equivalence is an important notion in ASP. If a programmer knows that two rules are strongly equivalent, then she may replace one rule with the other, even in the context of a large program, and be assured that this change will not affect the stable models of the program. Two finite propositional formulas are strongly equivalent if and only if they are equivalent in the logic of here-and-there (Ferraris 2005, Proposition 2). The results in (Harrison et al. 2013a) are similar to the if part of that theorem. However, we don't know how to extend the only if part to infinitary formulas. It appears that axioms or inference rules not mentioned in that paper may be required, and identifying them is a topic for future work.

The project described in this note is directed towards defining a precise semantics for a subset of the input language of the solver GRINGO, based on representing GRINGO rules as infinitary propositional formulas. A system of natural deduction for infinitary formulas, similar to intuitionistic propositional logic, allows us to prove strong equivalence of programs in this language.

Acknowledgements

My sincere thanks to Vladimir Lifschitz for the time he dedicated to editing this note and all the time he dedicates to teaching me. Thank you also to the anonymous reviewers who suggested a number of helpful improvements to a draft of this note.

References

\(^6\) http://www.cs.utexas.edu/users/vl/papers/etinf.pdf
\(^7\) http://www.cs.utexas.edu/users/vl/papers/gringo.pdf
{"Source-Url": "http://www.cs.utexas.edu/users/ai-lab/downloadPublication.php?filename=http://www.cs.utexas.edu/~ameliaj/pubs/dc_ajh.pdf&pubid=127375", "len_cl100k_base": 4148, "olmocr-version": "0.1.48", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 23238, "total-output-tokens": 5437, "length": "2e12", "weborganizer": {"__label__adult": 0.0005340576171875, "__label__art_design": 0.0003862380981445313, "__label__crime_law": 0.0007014274597167969, "__label__education_jobs": 0.0017843246459960938, "__label__entertainment": 0.00013065338134765625, "__label__fashion_beauty": 0.0002732276916503906, "__label__finance_business": 0.0004270076751708984, "__label__food_dining": 0.0007333755493164062, "__label__games": 0.0008015632629394531, "__label__hardware": 0.0008440017700195312, "__label__health": 0.0014371871948242188, "__label__history": 0.0003609657287597656, "__label__home_hobbies": 0.00017905235290527344, "__label__industrial": 0.0008344650268554688, "__label__literature": 0.0011491775512695312, "__label__politics": 0.0004973411560058594, "__label__religion": 0.0007829666137695312, "__label__science_tech": 0.129638671875, "__label__social_life": 0.00020015239715576172, "__label__software": 0.00803375244140625, "__label__software_dev": 0.8486328125, "__label__sports_fitness": 0.0004336833953857422, "__label__transportation": 0.0009551048278808594, "__label__travel": 0.00023555755615234375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18525, 0.03392]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18525, 0.5564]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18525, 0.85212]], "google_gemma-3-12b-it_contains_pii": [[0, 2283, false], [2283, 5748, null], [5748, 9151, null], [9151, 12127, null], [12127, 14469, null], [14469, 17769, null], [17769, 18525, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2283, true], [2283, 5748, null], [5748, 9151, null], [9151, 12127, null], [12127, 14469, null], [14469, 17769, null], [17769, 18525, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18525, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18525, null]], "pdf_page_numbers": [[0, 2283, 1], [2283, 5748, 2], [5748, 9151, 3], [9151, 12127, 4], [12127, 14469, 5], [14469, 17769, 6], [17769, 18525, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18525, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
b2e8864c24c98f6eba812d90055e8cf485ad5848
Ethics and Professional Responsibility in Computing

Michael C. Loui
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

Keith W. Miller
Department of Computer Science
University of Illinois at Springfield

August 23, 2007

Abstract. Computing professionals have ethical obligations to clients, employers, other professionals, and the public, in fulfilling their professional responsibilities. These obligations are expressed in codes of ethics, which can be used to make decisions about ethical problems.

Key Words: ethics, profession, moral responsibility, liability, trust, informed consent, peer review, whistle-blowing, code of ethics, ethical decision-making

1 The views, opinions, and conclusions expressed in this article are not necessarily those of the University of Illinois or the National Science Foundation.
2 Address for correspondence: Coordinated Science Laboratory, 1308 W. Main St., Urbana, IL 61801, USA. Telephone: (217) 333-2595. E-mail: loui AT uiuc DOT edu. Supported by the National Science Foundation under Grant EEC-0628814.
3 Address for correspondence: UIS, CSC, UHB 3100; One University Plaza; Springfield, IL 62703, USA. Telephone: (217) 206-7327. E-mail: miller DOT keith AT uis DOT edu.

1. Introduction

Computing professionals perform a variety of tasks: they write specifications for new computer systems, they design instruction pipelines for superscalar processors, they diagnose timing anomalies in embedded systems, they test and validate software systems, they restructure the back-end database of an inventory system, they analyze packet traffic in a local area network, and they recommend security policies for a medical information system. Computing professionals are obligated to perform these tasks conscientiously, because their decisions affect the performance and functionality of computer systems, which in turn affect the welfare of the systems' users directly and that of other people less directly. For example, the software that controls the automatic transmission of an automobile should minimize gasoline consumption and, more important, ensure the safety of the driver, any passengers, other drivers, and pedestrians.

The obligations of computing professionals are similar to the obligations of other technical professionals, such as civil engineers. Taken together, these professional obligations are called professional ethics. Ethical obligations have been studied by philosophers and articulated by religious leaders for many years. Within the discipline of philosophy, ethics encompasses the study of the actions that a responsible individual ought to choose, the values that an honorable individual ought to espouse, and the character that a virtuous individual ought to have. For example, everyone ought to be honest, fair, kind, civil, respectful, and trustworthy. Besides these general obligations that everyone shares, professionals have additional obligations that arise from the responsibilities of their professional work and their relationships with clients, employers, other professionals, and the public.

The ethical obligations of computing professionals go beyond complying with laws or regulations; laws often lag behind advances in technology. For example, before the passage of the Electronic Communications Privacy Act of 1986 in the United States, government officials did not require a search warrant to collect personal information transmitted over computer communication networks.
Nevertheless, even in the absence of a privacy law before 1986, computing professionals should have been aware of the obligation to protect the privacy of personal information.

2. What Is a Profession?

Computing professionals include hardware designers, software engineers, database administrators, system analysts, and computer scientists. In what ways do these occupations resemble recognized professions such as medicine, law, engineering, counseling, and accounting? In what ways do computing professions resemble occupations that are not traditionally thought of as professions, such as plumbers, fashion models, and sales clerks?

Professions that exhibit certain characteristics are called strongly differentiated professions (1). These are the professions, such as medicine and law, whose members have special rights and responsibilities. The defining characteristics of a strongly differentiated profession are specialized knowledge and skills, systematic research, professional autonomy, a robust professional association, and a well-defined social good associated with the profession.

Members of a strongly differentiated profession have specialized knowledge and skills, often called a "body of knowledge," gained through formal education and practical experience. Although plumbers also have special knowledge and skills, education in the trades such as plumbing emphasizes apprenticeship training rather than formal education. An educational program in a professional school teaches students the theoretical basis of a profession, which is difficult to learn without formal education. A professional school also socializes students to the values and practices of the profession. Engineering schools teach students to value efficiency and to reject shoddy work. Medical schools teach students to become physicians, and law schools teach future attorneys. Because professional work has a significant intellectual component, entry into a profession often requires a post-baccalaureate degree such as the M.S.W. (Master of Social Work) or the Psy.D. (Doctor of Psychology).

Professionals value the expansion of knowledge through systematic research—they do not rely exclusively on the transmission of craft traditions from one generation to the next. Research in a profession is conducted by academic members of the profession, and sometimes by practitioner members too. Academic physicians, for example, conduct medical research. Because professionals understand that professional knowledge always advances, professionals should also engage in continuing education by reading publications and attending conferences. Professionals should share general knowledge of their fields, rather than keeping the secrets of a guild. Professionals are obligated, however, to keep specific information about clients confidential.

Professionals tend to have clients, not customers. Whereas a sales clerk should try to satisfy the customer's desires, the professional should try to meet the client's needs (consistent with the welfare of the client and the public). For example, a physician should not give a patient a prescription for barbiturates just because the patient wants the drugs, but only if the patient's medical condition warrants the prescription. Because professionals have specialized knowledge, clients cannot fully evaluate the quality of services provided by professionals. Only other members of a profession, the professional's peers, can sufficiently determine the quality of professional work.
The principle of peer review underlies accreditation and licensing activities: members of a profession evaluate the quality of an educational program for accreditation, and they set the requirements for the licensing of individuals. For example, in the United States, a lawyer must pass a state's bar exam to be licensed to practice in that state. (Most states have reciprocity arrangements—a professional license granted by one state is recognized by other states.) The license gives professionals legal authority and privileges that are not available to unlicensed individuals. For example, a licensed physician may legitimately prescribe medications and perform surgery, activities that should not be performed by people who are not medical professionals.

Through accreditation and licensing, the public cedes control over a profession to members of the profession. In return for this autonomy, the profession promises to serve the public good. Medicine is devoted to advancing human health, law to the pursuit of justice, engineering to the economical construction of safe and useful objects. As an example of promoting the public good over the pursuit of self-interest, professionals are expected to provide services to some indigent clients without charge. For instance, physicians volunteer at free clinics, and they serve in humanitarian missions to developing countries. Physicians and nurses are expected to render assistance in cases of medical emergency—for instance, when a train passenger suffers a heart attack. In sum, medical professionals have special obligations that those who are not medical professionals do not have.

The purposes and values of a profession, including its commitment to a public good, are expressed by its code of ethics. Indeed, the creation of a code of ethics is one mark of the transformation of an occupation into a profession. A profession's code of ethics is developed and updated by a national or international professional association. This association publishes periodicals and hosts conferences to enable professionals to continue their learning and to network with other members of the profession. The association typically organizes the accreditation of educational programs and the licensing of individual professionals.

Do computing professions measure up to these criteria for a strongly differentiated profession? To become a computing professional, an individual must acquire specialized knowledge about discrete algorithms and relational database theory, and specialized skills such as software development techniques and digital system design. Computing professionals usually learn this knowledge and acquire these skills by earning a baccalaureate degree in computer science, computer engineering, information systems, or a related field. As in engineering, a bachelor's degree currently suffices for entry to the computing professions. The knowledge base for computing expands through research in computer science conducted in universities and in industrial and government laboratories.

Like electrical engineers, most computing professionals work for employers, who might not be the professionals' clients. For example, a software engineer might develop application software that controls a kitchen appliance; the engineer's employer might be different from the appliance manufacturer. Furthermore, the software engineer should prevent harm to the ultimate users of the appliance, and to others who might be affected by the appliance.
Thus, the computing professional’s relationship with a client and with the public might be indirect. The obligations of computing professionals to clients, employers, and the public are expressed in several codes of ethics. Section 5 below reviews two codes that apply to computing professionals.

Although the computing professions meet many criteria of other professions, they are deficient in significant ways. Unlike academic programs in engineering, relatively few academic programs in computing are accredited. Furthermore, in the United States, computing professionals cannot be licensed, except that software engineers can be licensed in Texas. As of this writing, the Association for Computing Machinery (ACM) has reaffirmed its opposition to state-sponsored licensing of individuals (2). Computing professionals may earn proprietary certifications offered by corporations such as Cisco, Novell, Sun, and Microsoft. In the United States, the American Medical Association dominates the medical profession, and the American Bar Association dominates the legal profession, but no single organization defines the computing profession. Instead, there are multiple distinct organizations, including the ACM, the Institute of Electrical and Electronics Engineers (IEEE) Computer Society, and the Association of Information Technology Professionals (AITP). Although these organizations cooperate on some projects, they remain largely distinct, with separate publications and codes of ethics. Regardless of whether the computing professions are strongly differentiated, computing professionals have important ethical obligations, as explained in the remainder of this article.

3. What Is Responsibility?

In the early 1980s, Atomic Energy of Canada Limited (AECL) manufactured and sold a cancer radiation treatment machine called the Therac-25, which relied on computer software to control its operation. Between 1985 and 1987, the Therac-25 caused the deaths of three patients and serious injuries to three others (3). Who was responsible for the accidents? The operator who administered the massive radiation overdoses, which produced severe burns? The software developers who wrote and tested the control software, which contained several serious errors? The system engineers who neglected to install the backup hardware safety mechanisms that had been used in previous versions of the machine? The manufacturer, AECL? Government agencies? We can use the Therac-25 case to distinguish between four different kinds of responsibility (4, 5).

*Causal responsibility.* Responsibility can be attributed to causes: for example, “the tornado was responsible for damaging the house.” In the Therac-25 case, the proximate cause of each accident was the operator, who started the radiation treatment. But just as the weather cannot be blamed for a moral failing, the Therac-25 operators cannot be blamed, because they followed standard procedures, and the information displayed on the computer monitors was cryptic and misleading.

*Role responsibility.* An individual who is assigned a task or function is considered the responsible person for that role. In this sense, a foreman in a chemical plant may be responsible for disposing of drums of toxic waste, even if a forklift operator actually transfers the drums from the plant to the truck. In the Therac-25 case, the software developers and system engineers were assigned the responsibility of designing the software and hardware of the machine.
Insofar as their designs were deficient, they were responsible for those deficiencies because of their roles. Even if they had completed their assigned tasks, however, their role responsibility may not encompass the full extent of their professional responsibilities.

*Legal responsibility.* An individual or an organization can be legally responsible, or liable, for a problem: the individual could be charged with a crime, or the organization could be liable for damages in a civil lawsuit, as when a physician is sued for malpractice. In the Therac-25 case, AECL could have been sued. One kind of legal responsibility is *strict liability*: if a product injures someone, then the manufacturer of the product can be found liable for damages in a lawsuit, even if the product met all applicable safety standards and the manufacturer did nothing wrong. The principle of strict liability encourages manufacturers to be careful, and it provides a way to compensate victims of accidents.

*Moral responsibility.* Causal, role, and legal responsibilities tend to be exclusive: if one individual is responsible, then another is not. In contrast, moral responsibility tends to be shared: many engineers are responsible for the safety of the products that they design, not just a designated safety engineer. Furthermore, rather than assign blame for a past event, moral responsibility focuses on what individuals should do in the future. In the moral sense, responsibility is a virtue: a “responsible person” is careful, considerate, and trustworthy; an “irresponsible person” is reckless, inconsiderate, and untrustworthy.

Responsibility is shared whenever multiple individuals collaborate as a group, such as a software development team. When moral responsibility is shared, responsibility is *not* atomized to the point at which no one in the group is responsible. Rather, each member of the group is accountable to the other members of the group and to those whom the group’s work might affect, both for the individual’s own actions and for the effects of their collective effort. For example, suppose a computer network monitoring team has made mistakes in a complicated statistical analysis of network traffic data, and these mistakes have changed the interpretation of the reported results. If the team members do not reanalyze the data themselves, they have an obligation to seek the assistance of a statistician who can analyze the data correctly. Different team members might work with the statistician in different ways, but they should hold each other accountable for their individual roles in correcting the mistakes. Finally, the team has a collective moral responsibility to inform readers of the team’s initial report about the mistakes and the correction.

Moral responsibility for recklessness and negligence is not mitigated by the presence of good intentions or by the absence of bad consequences. Suppose a software tester neglects to test a new module for a telephone switching system sufficiently, and the module fails. Although the subsequent telephone service outages are not intended, the software tester is morally responsible for the harms caused by the outages. Suppose a hacker installs a keystroke logging program in a deliberate attempt to steal passwords at a public computer. Even if the program fails to work, the hacker is still morally responsible for attempting to invade the privacy of users. An individual can be held morally responsible both for acting and for failing to act.
For example, a hardware engineer might notice a design flaw that could result in a severe electrical shock to someone who opens a personal computer system unit to replace a memory chip. Even if the engineer is not specifically assigned to check the electrical safety of the system unit, the engineer is morally responsible for calling attention to the design flaw, and the engineer can be held accountable for failing to act.

Computing systems often obscure accountability (5). In particular, in an embedded system such as the Therac-25, the computer that controls the device is hidden. Computer users seem resigned to accepting defects in computers and software that cause intermittent crashes and losses of data. Errors in code are called “bugs,” regardless of whether they are minor deficiencies or major mistakes that could cause fatalities. In addition, because computers appear to act autonomously, people tend to blame the computers themselves for failing, instead of the professionals who designed, programmed, and produced the computers.

4. What Are the Responsibilities of Computing Professionals?

**Responsibilities to Clients and Users**

Whether a computing professional works as a consultant to an individual or as an employee in a large organization, the professional is obligated to perform assigned tasks competently, according to professional standards. These professional standards include not only attention to technical excellence but also concern for the social effects of computers on operators, users, and the public. When assessing the capabilities and risks of computer systems, the professional must be candid: the professional must report all relevant findings honestly and accurately. When designing a new computer system, the professional must consider not only the specifications of the client but also how the system might affect the quality of life of users and others. For example, a computing professional who designs an information system for a hospital should allow speedy access by physicians and nurses, yet protect patients’ medical records from unauthorized access; the technical requirement to provide fast access may conflict with the social obligation to ensure patients’ privacy.

Computing professionals enjoy considerable freedom in deciding how to meet the specifications of a computer system. Provided that they meet the minimum performance requirements for speed, reliability, and functionality, within an overall budget, they may choose to invest resources to decrease the response time rather than to enhance a graphical user interface, or vice versa. Because choices involve tradeoffs between competing values, computing professionals should identify potential biases in their design choices (6). For example, the designer of a search engine for an online retailer might choose to display the most expensive items first. This choice might favor the interest of the retailer, to maximize profit, over the interest of the customer, to minimize cost.

Even moderately large software artifacts (computer programs) are inherently complex and error-prone. Furthermore, software is generally becoming more complex. It is therefore reasonable to assume that all software artifacts have errors. Even if a particular artifact does not contain errors, it is extremely difficult to prove its correctness. Faced with these realities, how can a responsible software engineer release software that is likely to fail sometime in the future?
Other engineers confront the same problem, because all engineering artifacts eventually fail. Whereas most engineering artifacts fail because physical objects wear out, however, software artifacts are most likely to fail because of faults designed into the original artifact. The intrinsically faulty nature of software distinguishes it from light bulbs and I-beams, for example, whose failures are easier to predict statistically.

To acknowledge responsibility for the failure of software artifacts, software developers should exercise due diligence in creating software, and they should be as candid as possible about both known and unknown faults in the software—particularly software for safety-critical systems, in which a failure can threaten the lives of people. Candor by software developers would give software consumers a better chance to make reasonable decisions about software before they buy it (7). Following an established tradition in medicine, Miller (8) advocates “software informed consent” as a way to formalize an ethical principle that requires openness from software developers. Software informed consent requires software developers to reveal, using explanations that are understandable to their customers, the risks of their software, including the likelihoods of known faults and the probabilities that undiscovered faults still exist. The idea of software informed consent motivates candor, and it also requires continuing research into methods of discovering software faults and measuring risk.

**Responsibilities to Employers**

Most computing professionals work for employers. The employment relationship is contractual: the professional promises to work for the employer in return for a salary and benefits. Professionals often have access to the employer’s proprietary information such as trade secrets, and the professional must keep this information confidential. Besides trade secrets, the professional must also honor other forms of *intellectual property* owned by the employer: the professional does not have the right to profit from the independent sale or use of this intellectual property, including software developed with the employer’s resources.

Every employee is expected to work loyally on behalf of the employer. In particular, professionals should be aware of potential conflicts of interest, in which loyalty might be owed to other parties besides the employer. A conflict of interest arises when a professional is asked to render a judgment, but the professional has personal or financial interests that may interfere with the exercise of that judgment. For instance, a computing professional may be responsible for ordering computing equipment, and an equipment vendor owned by the professional’s spouse might submit a bid. In this case, others would perceive that the marriage relationship might bias the professional’s judgment. Even if the spouse’s equipment would be the best choice, the professional’s judgment would not be trustworthy. In a typical conflict of interest situation, the professional should recuse herself: that is, the professional should remove herself and ask another qualified person to make the decision.

Many computing professionals have managerial duties, and some are solely managers. Managerial roles complicate the responsibilities of computing professionals because managers have administrative responsibilities and interests within their organizations, in addition to their professional responsibilities to clients and the public.
**Responsibilities to Other Professionals**

While everyone deserves respect from everyone else, when professionals interact with each other, they should demonstrate a kind of respect called *collegiality*. For example, when one professional uses the ideas of a second professional, the first should credit the second. In a research article, an author gives credit by properly citing the sources of ideas due to other authors in previously published articles. Using these ideas without attribution constitutes plagiarism. Academics consider plagiarism unethical because it represents the theft of ideas and the misrepresentation of those ideas as the plagiarist’s own.

Because clients cannot adequately evaluate the quality of professional service, individual professionals know that their work must be evaluated by other members of the same profession. This evaluation, called *peer review*, occurs in both practice and research. Research in computing is presented at conferences and published in scholarly journals. Before a manuscript that reports a research project can be accepted for a conference or published in a journal, the manuscript must be reviewed by peer researchers who are experts in the subject of the manuscript.

Because computing professionals work together, they must observe professional standards. These standards of practice are created by members of the profession or within organizations. For example, in software development, one standard of practice is a convention for the names of variables in code. By following coding standards, a software developer can facilitate the work of a software maintainer who subsequently modifies the code. For many important issues for which standards would be theoretically appropriate, however, “standards” in software engineering are controversial, informal, or nonexistent. An example of this problem is the difficulties encountered when the IEEE and the ACM attempted to standardize a body of knowledge for software engineering, to enable the licensing of software engineers.

Senior professionals have an obligation to mentor junior professionals in the same field. Although professionals are highly educated, junior members of a profession require further learning and experience to develop professional judgment. This learning is best accomplished under the tutelage of a senior professional. In engineering, to earn a P.E. license, a junior engineer must work under the supervision of a licensed engineer for at least four years. More generally, professionals should assist each other in continuing education and professional development, which are generally required for maintaining licensure. Professionals can fulfill their obligations to contribute to the profession by volunteering. The peer review of research publications depends heavily on volunteer reviewers and editors, and the activities of professional associations are conducted by committees of volunteers.

**Responsibilities to the Public**

According to engineering codes of ethics, the engineer’s most important obligation is to ensure the safety, health, and welfare of the public. Although everyone must avoid endangering others, engineers have a special obligation to ensure the safety of the objects that they produce. Computing professionals share this special obligation to guarantee the safety of the public, and to improve the quality of life of those who use computers and information systems. As part of this obligation, computing professionals should enhance the public’s understanding of computing.
The responsibility to educate the public is a collective responsibility of the computing profession as a whole; individual professionals might fulfill this responsibility in their own ways. Examples of such public service include advising a church on the purchase of computing equipment, and writing a letter to the editor of a newspaper about technical issues related to proposed legislation to regulate the Internet. It is particularly important for computing professionals to contribute their technical knowledge to discussions about public policies regarding computing. Many communities are considering controversial measures such as the installation of Web filtering software on public access computers in libraries. Computing professionals can participate in communities’ decisions by providing technical facts. Technological controversies involving the social impacts of computers are covered in a separate article of this encyclopedia.

When a technical professional’s obligation of loyalty to the employer conflicts with the obligation to ensure the safety of the public, the professional may consider whistle-blowing, that is, alerting people outside the employer’s organization to a serious, imminent threat to public safety. Computer engineers blew the whistle during the development of the Bay Area Rapid Transit (BART) system near San Francisco (9). In the early 1970s, three BART engineers became alarmed by deficiencies in the design of the electronics and software for the automatic train control system, deficiencies that could have endangered passengers on BART trains. The engineers raised their concerns within the BART organization without success. Finally, they contacted a member of the BART board of directors, who passed their concerns to Bay Area newspapers. The three engineers were immediately fired for disloyalty. They were never reinstated, even when an accident proved their concerns were valid. When the engineers sued the BART managers, the IEEE filed an amicus curiae brief on the engineers’ behalf, stating that engineering codes of ethics required the three engineers to act to protect the safety of the public. The next section describes codes of ethics for computing professionals.

5. Codes of Ethics

For each profession, the professional’s obligations to clients, employers, other professionals, and the public are stated explicitly in the profession’s code of ethics or code of professional conduct. For computing professionals, such codes have been developed by the Association for Computing Machinery (ACM), the British Computer Society (BCS), the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE-CS), the Association of Information Technology Professionals (AITP), the Hong Kong Computer Society, the Systems Administrators Special Interest Group of USENIX (SAGE), and other associations. Two of these codes are described briefly here: the ACM code and the Software Engineering Code jointly approved by the IEEE-CS and the ACM.

The ACM is one of the largest nonprofit scientific and educational organizations devoted to computing. In 1966 and 1972, the ACM published codes of ethics for computing professionals. In 1992, the ACM adopted the current Code of Ethics and Professional Conduct (10), which appears in Appendix 1. Each statement of the code is accompanied by interpretive guidelines. For example, the guideline for statement 1.8, Honor confidentiality, indicates that other ethical imperatives, such as complying with a law, may take precedence.
Unlike the ethics codes of many other professions, one section of the ACM code states the ethical obligations of “organizational leaders,” who are typically technical managers. The ACM collaborated with the IEEE-CS to produce the Software Engineering Code of Ethics and Professional Practice (11). Like the ACM code, the Software Engineering Code also includes the obligations of technical managers. This code is notable in part because it was the first code to focus exclusively on software engineers, rather than on computing professionals generally. The code comes in a short version and a long version. The short version comprises a preamble and eight short principles; this version appears in Appendix 2. The long version expands on the eight principles with multiple clauses that apply the principles to specific issues and situations.

Any code of ethics is necessarily incomplete—no document can address every possible situation. In addition, a code must be written in general language; each statement in a code requires interpretation to be applied in specific circumstances. Nevertheless, a code of ethics can serve multiple purposes (12, 13). A code can inspire members of a profession to strive for the profession’s ideals. A code can educate new members about their professional obligations, and tell nonmembers what they may expect members to do. A code can set standards of conduct for professionals and provide a basis for expelling members who violate these standards. Finally, a code may support individuals in making difficult decisions. For example, because all engineering codes of ethics prioritize the safety and welfare of the public, an engineer can object to unsafe practices not merely as a matter of individual conscience, but with the full support of the consensus of the profession. The application of a code of ethics to decision-making is highlighted in the next section.

6. Ethical Decision-Making for Computing Professionals

Every user of e-mail has received unsolicited bulk commercial e-mail messages, known in a general way as spam. (A precise definition of “spam” has proven elusive and controversial; most people know spam when they see it, but a universally accepted legal and ethical definition has not yet emerged.) A single spam broadcast can initiate millions of messages. Senders of spam claim that they are exercising their free speech rights, and few laws have attempted to restrict spamming. In the United States, no federal law prohibited spamming before the CAN-SPAM Act of 2003. Even now, the CAN-SPAM law does not apply to spam messages that originate in other countries. Although some prosecutions have occurred under the CAN-SPAM Act, most people still receive many e-mail messages that they consider spam.

Some spam messages are deceptive, although they may appear genuine; others are completely accurate. Although most spamming is not illegal, even honest spamming is considered unethical by many people, for the following reasons. First, spamming has bad consequences: it wastes the time of recipients who must delete junk e-mail messages, and these messages waste space on computers; in addition, spamming reduces users’ trust in e-mail. Second, spamming is not reversible: senders of spam do not want to receive spam. Third, spamming could not be allowed as a general practice: if everyone attempted to broadcast spam messages to wide audiences, computer networks would become clogged with unwanted e-mail messages, and no one would be able to communicate at all.
The three reasons advanced against spam correspond to three ways in which the morality of an action can be evaluated: first, whether on balance the action results in more good consequences than bad consequences; second, whether the actor would be willing to trade places with someone affected by the action; third, whether everyone (in a similar situation) could choose the same action as a general rule. These three kinds of moral reasons correspond to three traditions in philosophical ethics: consequentialism, the Golden Rule, and duty-based ethics.

Ethical issues in the use of computers can also be evaluated through the use of analogies to more familiar situations. For example, a hacker may try to justify gaining unauthorized access to unsecured data by reasoning that because the data are not protected, anyone should be able to read them. But by analogy, someone who finds the front door of a house unlocked is not justified in entering the house and snooping around. Entering an unlocked house is trespassing, and trespassing violates the privacy of the house’s occupants.

When making ethical decisions, computing professionals can rely not only on general moral reasoning but also on specific guidance from codes of ethics, such as the ACM Code of Ethics (10). Here is a fictional example of that approach.

**Scenario:** XYZ Corporation plans to secretly monitor the Web pages visited by its employees, using a data mining program to analyze the access records. Chris, an engineer at XYZ, recommends that XYZ purchase a data mining program from Robin, an independent contractor, without mentioning that Robin is Chris’s domestic partner. Robin had developed this program while previously employed at UVW Corporation, without the knowledge of anyone at UVW.

**Analysis:** First, the monitoring of Web accesses intrudes on employees’ privacy; it is analogous to eavesdropping on telephone calls. Professionals should respect the privacy of individuals (ACM Code 1.7, *Respect the privacy of others*, and 3.5, *Articulate and support policies that protect the dignity of users and others affected by a computing system*). Second, Chris has a conflict of interest because the sale would benefit Chris’s domestic partner. By failing to mention this relationship, Chris was disingenuous (ACM Code 1.3, *Be honest and trustworthy*). Third, because Robin developed the program while working at UVW, some and perhaps all of the property rights belong to UVW; Robin probably signed an agreement that software developed while employed at UVW belongs to UVW. Professionals should honor property rights and give proper credit for intellectual property (ACM Code 1.5, *Honor property rights including copyrights and patents*, and 1.6, *Give proper credit for intellectual property*).

Applying a code of ethics might not yield a clear solution to an ethical problem, because different principles in a code might conflict. For instance, the principles of honesty and confidentiality conflict when a professional who is questioned about the technical details of the employer’s forthcoming product must choose between answering the question completely and keeping the information secret. Consequently, more sophisticated methods have been developed for solving ethical problems. Maner (14) has studied and collected what he calls “procedural ethics, step-by-step ethical reasoning procedures … that may prove useful to computing professionals engaged in ethical decision-making.” Maner’s list includes a method specialized for business ethics (15), a paramedic method (16), and a procedure from the U.S. Department of Defense (17).
These procedures appeal to the problem-solving ethos of engineering, and they help professionals avoid specific traps that might otherwise impair a professional’s ethical judgment. No procedural ethics method should be interpreted as allowing complete objectivity or as providing a mechanical algorithm for reaching a conclusion about an ethical problem, however, because all professional ethics issues of any complexity require subtle and subjective judgments.

7. Computing and the Study of Ethics: The Ethical Challenges of Artificial Intelligence and Autonomous Agents

Many ethical issues, such as conflicts of interest, are common to different professions. In computing and engineering, however, unique ethical issues arise from the creation of machines whose outward behaviors resemble human behaviors that we consider “intelligent.” As machines become more versatile and sophisticated, and as they increasingly take on tasks that were once assigned only to humans, computing professionals and engineers must rethink their relationship to the artifacts they design, develop, and deploy.

For many years, ethical challenges have been part of discussions of artificial intelligence. Indeed, two classic references in the field are by Norbert Wiener in 1965 (18) and by Joseph Weizenbaum in 1976 (19). Since the 1990s, the emergence of sophisticated “autonomous agents,” including Web “bots” and physical robots, has intensified the ethical debate. Two fundamental issues are of immediate concern: the responsibility of computing professionals who create these sophisticated machines, and the notion that the machines themselves will, if they have not already done so, become sufficiently sophisticated to be considered moral agents themselves, capable of ethical praise or blame independent of the engineers and scientists who developed them. This area of ethics is controversial and actively researched; a full discussion of even some of its nuances is beyond the scope of this article. Essays by Floridi and Sanders (20) and by Himma (21) are two examples of influential recent ideas in the area.

References

Reading List

**Appendix 1: ACM Code of Ethics and Professional Conduct**
http://www.acm.org/about/code-of-ethics

**Appendix 2: Software Engineering Code of Ethics and Professional Practice (short version)**
http://www.acm.org/about/se-code/
{"Source-Url": "https://www.onlineethics.org/File.aspx?id=88823", "len_cl100k_base": 7316, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 29429, "total-output-tokens": 9279, "length": "2e12", "weborganizer": {"__label__adult": 0.0009899139404296875, "__label__art_design": 0.0012407302856445312, "__label__crime_law": 0.01105499267578125, "__label__education_jobs": 0.213134765625, "__label__entertainment": 0.00019788742065429688, "__label__fashion_beauty": 0.0005269050598144531, "__label__finance_business": 0.0054779052734375, "__label__food_dining": 0.0007977485656738281, "__label__games": 0.0020008087158203125, "__label__hardware": 0.003078460693359375, "__label__health": 0.00382232666015625, "__label__history": 0.0010614395141601562, "__label__home_hobbies": 0.000446319580078125, "__label__industrial": 0.0019025802612304688, "__label__literature": 0.006259918212890625, "__label__politics": 0.0031757354736328125, "__label__religion": 0.0023555755615234375, "__label__science_tech": 0.34326171875, "__label__social_life": 0.0008840560913085938, "__label__software": 0.0252685546875, "__label__software_dev": 0.371337890625, "__label__sports_fitness": 0.00043892860412597656, "__label__transportation": 0.001041412353515625, "__label__travel": 0.00022995471954345703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 44777, 0.0267]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 44777, 0.81758]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 44777, 0.94014]], "google_gemma-3-12b-it_contains_pii": [[0, 2975, false], [2975, 6766, null], [6766, 10624, null], [10624, 14482, null], [14482, 18601, null], [18601, 22400, null], [22400, 26225, null], [26225, 30148, null], [30148, 33937, null], [33937, 37751, null], [37751, 41103, null], [41103, 44325, null], [44325, 44777, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2975, true], [2975, 6766, null], [6766, 10624, null], [10624, 14482, null], [14482, 18601, null], [18601, 22400, null], [22400, 26225, null], [26225, 30148, null], [30148, 33937, null], [33937, 37751, null], [37751, 41103, null], [41103, 44325, null], [44325, 44777, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 44777, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 44777, null]], "pdf_page_numbers": [[0, 2975, 1], [2975, 6766, 2], [6766, 10624, 3], [10624, 14482, 4], [14482, 18601, 5], [18601, 22400, 6], [22400, 26225, 7], [26225, 30148, 8], [30148, 33937, 9], [33937, 37751, 10], [37751, 41103, 11], [41103, 44325, 12], [44325, 44777, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 44777, 
0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
707815c648b7be47650c66a13609659086a3627a
[REMOVED]
{"Source-Url": "https://oxygen.informatik.tu-cottbus.de/publications/wagner/CAiSE2001-LNCS2224.pdf", "len_cl100k_base": 6280, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 32848, "total-output-tokens": 8106, "length": "2e12", "weborganizer": {"__label__adult": 0.000431060791015625, "__label__art_design": 0.0009446144104003906, "__label__crime_law": 0.0005974769592285156, "__label__education_jobs": 0.002872467041015625, "__label__entertainment": 0.00013649463653564453, "__label__fashion_beauty": 0.0002665519714355469, "__label__finance_business": 0.006549835205078125, "__label__food_dining": 0.00045680999755859375, "__label__games": 0.0008130073547363281, "__label__hardware": 0.000823974609375, "__label__health": 0.0006585121154785156, "__label__history": 0.0004520416259765625, "__label__home_hobbies": 0.00016558170318603516, "__label__industrial": 0.0009021759033203124, "__label__literature": 0.0006909370422363281, "__label__politics": 0.00051116943359375, "__label__religion": 0.0004246234893798828, "__label__science_tech": 0.10784912109375, "__label__social_life": 0.0001373291015625, "__label__software": 0.0214385986328125, "__label__software_dev": 0.8515625, "__label__sports_fitness": 0.0002682209014892578, "__label__transportation": 0.000949859619140625, "__label__travel": 0.000255584716796875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35431, 0.01828]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35431, 0.47947]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35431, 0.92463]], "google_gemma-3-12b-it_contains_pii": [[0, 2383, false], [2383, 5156, null], [5156, 8424, null], [8424, 11479, null], [11479, 14625, null], [14625, 17440, null], [17440, 20468, null], [20468, 22628, null], [22628, 23996, null], [23996, 25059, null], [25059, 27416, null], [27416, 30410, null], [30410, 32940, null], [32940, 35431, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2383, true], [2383, 5156, null], [5156, 8424, null], [8424, 11479, null], [11479, 14625, null], [14625, 17440, null], [17440, 20468, null], [20468, 22628, null], [22628, 23996, null], [23996, 25059, null], [25059, 27416, null], [27416, 30410, null], [30410, 32940, null], [32940, 35431, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35431, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35431, null]], "pdf_page_numbers": [[0, 2383, 1], [2383, 5156, 2], [5156, 8424, 3], [8424, 11479, 4], [11479, 14625, 5], [14625, 17440, 6], [17440, 20468, 7], [20468, 22628, 8], [22628, 23996, 9], [23996, 25059, 10], [25059, 27416, 11], [27416, 30410, 12], [30410, 32940, 13], [32940, 
35431, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35431, 0.05405]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
49cb6b59ad6d87c4abfed348407d91112364e26a
FINITE SEGMENTATION FOR XML CACHING

Adelhard Türling and Stefan Böttcher
Faculty of Electrical Engineering, Computer Science and Mathematics, Fürstenallee 11, D-33102 Paderborn, Germany, Email: Adelhard.Tuerling@uni-paderborn.de, stb@uni-paderborn.de

Abstract: XML data processing often relies on basic relations between two XML fragments, like containment, subset, difference and intersection. Fast calculation of such relations based only on the representing XPath expression is known to be a major challenge. Recently, XML patterns have been introduced to model and identify handy subclasses of XPath. We present the concept of ST-pattern segments, which uses sets of adapted tree patterns in order to describe a finite and complete partitioning of the XML document’s data space. Based on such segmentations, we present a fast evaluation of XML relations and show how to compute a set of patterns for an optimal segmentation based on frequent XPath queries.

Key words: mobile databases; XML; query patterns; XPath; caching.

1. INTRODUCTION

Whenever XML data is exchanged, processed and cached on computers within a network, data management meets new challenges. For example, in networks of resource-limited mobile devices, efficient usage of data storage and of data transportation over a wireless network is a key requirement. In such a network, a common situation is that a client queries for data from a dedicated source. Within such a network, it may be of considerable advantage to share and exchange cached XML data among several neighboring clients, compared to a solution where data is transferred only between each requesting client and a dedicated server.

One of the main new challenges in such a data sharing scenario is the organization of the data space which is shared among the clients. This includes specifying how the data space can be divided into handy segments, how to profit from data distributed according to these segments, and how cooperative usage in a network can enhance data processing. A basic challenge of fragmentation is to identify a finite set of atomic XML fragments for cooperative usage. Whether or not data segments have to be requested in order to fulfill an operation must be decided by data processing components on the fly, without losing time for extensive intersection tests and difference fragment computations on XML data.

To enable collaborative use of a so-called segmentation, we identify two requirements for the segmentation’s atomic data units, namely the segments. Firstly, segments can easily be (re-)joined and identified (minimal operating costs). Secondly, most query results can be represented by such segments, or by joins of such segments, with little or no dispensable offset (fitting granularity). Obviously, there is a conflict between the requirement of a fitting granularity and the need for a finite and collaboratively accepted segmentation. We address this area of conflict and show how to find an optimal segmentation based on an access frequency analysis of XML patterns.

The remainder of our paper is organized as follows. In Section 2, we propose to expand the common definition of patterns towards what we call ST-patterns and give a short introduction to their main features and properties. In Section 3, we show in detail how to use the most frequent patterns as a base to decompose the data space into disjoint segments. In Section 4, we discuss related work, and in Section 5, we summarize and conclude our contribution.
```xml
<!ELEMENT car EMPTY>
<!ATTLIST car
    name  CDATA #REQUIRED
    year  CDATA #REQUIRED
    price CDATA #REQUIRED
    type  (truck | convert | limo) #REQUIRED >
<!ELEMENT contact EMPTY>
<!ATTLIST contact
    name  CDATA #REQUIRED
    image CDATA #REQUIRED >
<!ELEMENT offer (seller, car*)>
<!ELEMENT offers (offer*)>
<!ELEMENT seller (contact*)>
<!ATTLIST seller
    town CDATA #REQUIRED >
```
*Figure 1. Example DTD*

*Figure 2. Example for ST-pattern*

2. FOUNDATION AND ST-PATTERN

In this section, we shortly review the concept of DTD graphs and XML patterns. We introduce search-tree patterns (short: ST-patterns), based on additional nodes, namely split nodes, that partition a node’s child set. We use ST-patterns as logical data descriptions for data processing that are easy to handle and that allow a good degree of granularity. Due to page limitations, we withhold the formal and complete definition of ST-patterns and their operations and refer to future publications. Instead, we give some examples and an overview of their properties.

2.1 Definition of the DTD graph

DTDs are schema definitions for XML documents. As long as a DTD is acyclic, it can be rolled out and represented as a tree. Each element, text node and attribute occurring in such a DTD is converted to a node in the DTD graph. The parent-child relations (and the attribute relations) between the elements and the attributes of a DTD are represented by directed edges within the DTD graph. A DTD graph for the DTD of Figure 1 can be seen in Figure 3. In a DTD graph, a '*' is concatenated to a node’s label to indicate that the DTD allows the occurrence of that node at that position in arbitrary quantity, e.g. for car, offer and contact. Ignoring the special annotation '*', a DTD graph can also be seen as an XML pattern.

*Figure 3. Example DTD graph*

2.2 XML patterns

XML tree patterns are used in the context of XML as expressions that describe XML fragments. These patterns can be seen as tree models for XML queries. Nodes of a pattern can be labeled with any tag name, the wildcard '*' or the relative path '//', where '*' indicates any label and '//' represents a node sequence of zero or more interconnected nodes. Directed edges represent parent → child relations. These edges must correspond to relations defined in a DTD, e.g. fulfill the restriction of a single incoming edge for each node, to be valid according to the given DTD. Furthermore, we use the same terminology for patterns as used for XML documents. For example, we call all nodes that can be reached by outgoing edges a node’s children, the incoming edge leads to the node’s parent, all children of a node are in sibling relation, and the transitive closure of all nodes reached by outgoing (incoming) edges is called the set of descendant (ancestor) nodes.

*Figure 4. Pattern c) is the intersection of the two ST-patterns a) and b).*

*Figure 5. The 3 most frequently accessed patterns.*

2.3 ST-patterns

In contrast to basic XML patterns, ST-patterns are restricted to rooted patterns because they describe XML fragments that correspond to absolute XPath expressions. In addition, we introduce a new node type called split node, which contains simple selection information as used in XPath filters. As an example, in Figure 2 the additional nodes labeled 'truck' and 'type' restrict the pattern to cars of type truck. Such patterns support a minimal subset of the filter expressions known from XPath, just enough to describe the required granularity.
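To make the pattern structure concrete, the following is a minimal sketch in Python; it is our illustration, not code from the paper. The class names (`PatternNode`, `SplitNode`) are our own, and, anticipating the split-node details of Section 2.4, we simplify by attaching a split criterion directly to its reference leaf instead of modeling the separate split-node/ref.-node pair.

```python
# A minimal sketch (ours, not the paper's) of ST-patterns as trees.
# A node is labeled with a tag name, the wildcard '*', or '//'.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SplitNode:
    """Equality- or range-based restriction on a reference node's value."""
    kind: str                      # 'eq' or 'range'
    value: Optional[str] = None    # used when kind == 'eq'
    low: Optional[float] = None    # used when kind == 'range' (must be set)
    high: Optional[float] = None   # used when kind == 'range' (must be set)

    def overlaps(self, other: "SplitNode") -> bool:
        """Do the two decision criteria select any common value?"""
        if self.kind == "eq" and other.kind == "eq":
            return self.value == other.value
        if self.kind == "range" and other.kind == "range":
            return self.low < other.high and other.low < self.high
        return True  # mixed kinds: conservatively assume overlap


@dataclass
class PatternNode:
    label: str                                   # tag name, '*' or '//'
    children: List["PatternNode"] = field(default_factory=list)
    split: Optional[SplitNode] = None            # set on reference (leaf) nodes


# One plausible encoding of the Figure 2 pattern: car names,
# restricted to cars of type 'truck'.
truck_names = PatternNode("offers", [
    PatternNode("offer", [
        PatternNode("car", [
            PatternNode("@name"),
            PatternNode("@type", split=SplitNode("eq", value="truck")),
        ]),
    ]),
])
```

The `overlaps` test above is the building block for the disjointness check of Section 2.4: two patterns are disjoint exactly when their common reference leaves carry non-overlapping criteria.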
With the DTD given in Figure 1, an XPath query which asks for car offers of type 'truck' and which is interested in car names and available car sellers could be:

`offers/offer/*[@type='truck' or name()='seller']//@name`

Figure 2 shows the corresponding ST-pattern. We call the XML data fragment that is selected by an ST-pattern the fragment related to the pattern or, for short, the pattern fragment.

2.4 Operations and properties of ST-patterns

Split nodes are of a specific comparison type and contain specific decision criteria. We distinguish between two types of split nodes: range-based and equality-based split nodes (see Figures 4a and 4b). Split nodes are related to two nodes in the ST-pattern. The two related nodes are called the split parent and the reference node (short: ref. node). The ref. node must be a leaf node in the pattern. In our examples throughout the paper, we visualize a split node and its ref. node with an identical texture, where the split node is gray and the ref. node is white, indicating that the ref. node must fulfill the split node's decision criterion. The split parent is the first node on the ref. node's ancestor axis in the pattern that is marked as multiply occurring in the DTD graph. This relation indicates that the sub fragment represented by this split parent is constrained by the split node. The contact, offer or car node might be split parents in our example. A split parent can have multiple split nodes, which follow a predefined tree-level-based order; we call the complete sub tree of split nodes under a split parent its sub decision tree.

We define two ST-patterns to be space equal if they describe the same pattern fragment for a given DTD and for any XML document valid with respect to that DTD. The two operations compress and extend are used to transform ST-patterns into space-equal forms, e.g. for normalization purposes. The three operations union, intersection and difference map two given ST-patterns onto a resulting ST-pattern. For any valid XML document, the resulting ST-pattern describes a pattern fragment that is equal to the result of the given operation applied to the pattern fragments of the two operands; see Figure 4c for an example of the intersection of the patterns 4a and 4b. We say that two ST-patterns are disjoint if, for every pair of corresponding leaf nodes that they have in common, (1) the two nodes are ref. nodes and (2) the split nodes they belong to have no overlapping decision criteria. Evaluating operations on ST-patterns can be done by adapting fast XML match algorithms.

Like the more complex XPath expressions, ST-patterns are used to select fragments of an underlying XML document and thereby address the document with a fine granularity. For example, any ST-pattern can be split into two patterns, where each of the resulting patterns addresses a fragment of about half the size of the fragment the original pattern addressed. Thus, any fragmentation granularity can be achieved.

3. SEGMENTATION

In Section 2, the definition and some operations of ST-patterns have been introduced. Now we show how these patterns can be used to organize fast XML data processing. Recall that every ST-pattern (based on the DTD) represents a pattern fragment in an XML document (usually in an underlying XML database). We use pattern fragments as atomic data items in any data processing. In addition, a pattern fragment belongs to a specific pattern segmentation.
A pattern segmentation represents a complete decomposition of the whole underlying schema S and is based on the given DTD tree. Beyond the detail level of the DTD tree, a schema S might even be decomposed by additional equations or ranges on specified node values to support specific requirements. In this section, we shape the requirements for segmentations and show how segmentations are constructed from ST-patterns.

3.1 Requirements for a fitting segmentation

Finding the appropriate set of patterns to represent the segmentation is essential for the success of data processing. Their corresponding pattern fragments are the atomic data units our XML processing is based on. Thus, the patterns shall represent fragments that are handy in the following sense: For a given XPath request, it shall be easy to find the optimal set of patterns where the union U of those patterns relates to an XML fragment that is the smallest possible superset of the XML fragment represented by the XPath request. The parts of the fragment related to U that are not needed to answer the XPath request should be minimal or none for frequent requests; we call these parts the clipping offsets of the patterns corresponding to an XPath request. The data transfer overhead caused by the segmentation must be minimal and must be clearly outweighed by the savings achieved, e.g., through caching.

3.2 Pattern segmentation

Formally, we define: A pattern segmentation $S$ is a set of pairwise disjoint ST-patterns $\{p_1, p_2, \ldots, p_n\}$ where the union of all $p_i$ results in a pattern $P_{\text{complete}}(S)$ representing the whole data space, e.g. given by the corresponding DTD. The graph representation of $P_{\text{complete}}(S)$ is called the segmentation's schema graph. Notice that the DTD graph is a valid schema graph. In general, there are many different valid segmentations for a single schema graph. For the DTD graph, the DTD's pattern itself as well as $S = \{/**\}$ are valid segmentations with $|S| = 1$. Figure 8 shows a segmentation with five patterns; Figure 6 shows the corresponding schema graph. To encode a specific segmentation in a schema graph, we introduce colored schema graphs. For example, in Figure 6, the numbers inside the nodes represent their colors. All leaf nodes that are not ref. nodes have an associated color identifying the pattern of the segmentation they belong to. The following has to be proven to verify that a set of ST-patterns $S$ is a segmentation (a checking sketch is given below):

- For each pair of patterns in $S$, the intersection test shows that they are disjoint.
- $P_{\text{complete}}(S)$ must be space equal to the underlying DTD's pattern.

3.3 Glue nodes and the ID constraint

As the ST-patterns of a segmentation are pairwise disjoint, the pattern fragments they describe form a pairwise disjoint partitioning of the XML document's leaf nodes. For a given segmentation, a major requirement is to guarantee that the union, intersection and difference of any two patterns of the segmentation can also be applied to their related XML fragments. Thus, we have to make sure to track the pattern fragments' relationships to each other. In the context of XML trees, one-to-one and one-to-many relationships are supported. For example, in the segmentation of Figure 8, the segment defined by pattern $p_3$ has a one-to-many relationship to all other segments. The segments defined by the patterns $p_1, p_2, p_4, p_5$ all have a one-to-one relationship to each other.
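The two verification conditions of Section 3.2 can be checked mechanically. The following is a minimal sketch, ours rather than the paper's: `intersect`, `union` and `space_equal` stand for assumed implementations of the ST-pattern operations of Section 2.4, and `dtd_pattern` for the pattern of the underlying DTD.

```python
from itertools import combinations

def is_valid_segmentation(patterns, dtd_pattern, intersect, union, space_equal):
    """Check the two segmentation conditions from Section 3.2.

    `intersect`, `union` and `space_equal` are assumed implementations of
    the ST-pattern operations of Section 2.4 (here, `intersect` is assumed
    to return None for an empty intersection).
    """
    if not patterns:
        return False

    # Condition 1: all patterns are pairwise disjoint.
    for p, q in combinations(patterns, 2):
        if intersect(p, q) is not None:      # non-empty intersection
            return False

    # Condition 2: the union of all patterns covers the whole data space,
    # i.e. P_complete(S) is space equal to the DTD's pattern.
    complete = patterns[0]
    for p in patterns[1:]:
        complete = union(complete, p)
    return space_equal(complete, dtd_pattern)
```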
We define some multiply occurring nodes from the DTD to be glue nodes, which we require to have a unique key attribute. If the DTD does not provide the required IDs, they can be added in a preprocessing step.⁴ We do not have to apply the union, intersection or difference operation to the pattern fragments directly: it is enough to calculate the operation on the corresponding patterns and to use the resulting pattern as a filter for a joined pattern fragment. To guarantee that any two pattern fragments can be joined, we use pairs of IDs and references. In our small example, the car node and the offers node are both glue nodes, and as the DTD does not provide ID attributes for them, we have to add additional ID attributes.

To identify a segmentation's glue nodes, each pair of patterns of the segmentation is tested. A pair's glue node is the first multiply occurring node the two patterns have in common in the colored schema graph, starting at the patterns' leaf nodes. Based on the glue node's ID, any relationship between XML fragments that correspond to a segmentation's patterns can be joined. In our example, the fragment corresponding to the pattern $p_3$ can be joined with any fragment corresponding to the patterns $p_1$ to $p_5$ (one-to-many relationship) by using the offers ID as a join criterion. Fragments corresponding to the patterns $p_1, p_2, p_4, p_5$ can be joined by using the car ID as a join criterion (one-to-one relationship). As we see, the number of glue nodes is bounded by the number of multiply occurring nodes, but it can be smaller and is segmentation dependent. For example, the multiply occurring contact node is not a glue node, since it is not needed to join pattern fragments.

⁴ XML supports special ID/IDREF attributes to support n-to-n relations. Our techniques support such relationships but are not optimized for them.

Input:      given DTD graph;
            sorted list of the most frequent query patterns L = {q_1, ..., q_n}
Initialize: ref. node order O = ∅;
            max. node index I_max = const. (e.g. 2);
            clip tolerance T = const. (e.g. 0.9);
            max. segmentation size |S|_max = const. (e.g. 100)

10  for each q_i in L do
11      for each p_j in S do
12          if intersect(q_i, p_j) ≠ ∅ {
13              p_temp1 = compress(intersect(q_i, p_j))
14              p_temp2 = compress(difference(q_i, p_j))
15              if (max_amount_split_node_series(p_temp1) < I_max) and
16                 (max_amount_split_node_series(p_temp2) < I_max) and
17                 ((size(pattern_fragment(p_temp1)) / size(pattern_fragment(p_j)) < T) or
18                  (size(pattern_fragment(p_temp2)) / size(pattern_fragment(p_j)) < T)) {
19                  remove p_j from S
20                  if not contained(new_ref_node(p_temp1), O): add(new_ref_node(p_temp1), O)
21                  if not contained(new_ref_node(p_temp2), O): add(new_ref_node(p_temp2), O)
22                  add(p_temp1, S)
23                  add(p_temp2, S)
                }
            }
        break if |S| ≥ |S|_max

*Figure 7. Segmentation algorithm*

3.4 Construction of the finite pattern segmentation

The number of different patterns corresponding to a non-recursive DTD is already finite if split nodes are not used, because there is a finite set of possible patterns for each set of edges.
A pattern with the maximum number of edges and nodes is the DTD graph itself. As we introduce split nodes, we have to constrain the size of segmentations by a threshold $|S|_{\text{max}}$. The value of $|S|_{\text{max}}$ correlates with the granularity of the segmentation and must be adjusted depending on the application context, considering the complexity of the DTD and the amount of represented data. In order to keep the pattern set of a segmentation finite, we restrict the number of segments in a segmentation to a fixed maximum $|S|_{\text{max}}$. For example, for the DTD graph given in Figure 3, a valid segmentation with $|S| = 5$ is shown in Figure 8. Additionally, we might constrain the depth of sub decision trees to limit the complexity of the segmentation's schema graph and the number of ref. nodes in a single pattern.

A good way to establish a fitting segmentation is to analyze the access frequency of certain tree patterns and to build a segmentation according to the most frequently accessed patterns. Our algorithm is based on that concept and takes a sorted list $L$ of the most frequent requests as input. The resulting segmentation can guarantee that any of the requests in $L$ can be answered exactly by joined pattern fragments of the segmentation. The algorithm of Figure 7 creates such a segmentation. Starting with an initial segmentation $S = \{/**\}$, it splits patterns until $|S| = |S|_{\text{max}}$. For each frequently requested pattern $q_i$ of $L$, it has to be checked with which of the existing patterns $p_j$ in $S$ it intersects (line 12), and for each intersecting pattern $p_j$ the intersection and difference have to be calculated (lines 13, 14). Thereafter, each such $p_j$ is removed from $S$, and the segments intersect$(q_i, p_j)$ and difference$(q_i, p_j)$ are added to the segmentation (lines 19, 22, 23). The ref. node order simply corresponds to the sequence in which the ref. nodes are first referenced by a split node (lines 20, 21). Iterating the above steps, we keep the set of patterns in the segmentation disjoint and thus the segmentation valid. Figure 8 shows a possible resulting segmentation for $|S|_{\max}=5$. When a segmentation is constructed according to the presented algorithm, patterns that are used to answer frequent requests are in general very specific and represent small segments of the XML document. In comparison, infrequently requested patterns are in general more unspecific, in the sense of conglomerations, and are related to bigger segments of the XML document.

*Figure 8. A pattern segmentation for the given DTD*

3.5 Thresholds

In addition, the algorithm provides two thresholds, $I_{\max}$ and $T$, to adjust the basic segmentation algorithm (lines 15 to 18). The threshold $I_{\max}$ constrains the number of descendant-sequenced split nodes in the resulting patterns. Using this threshold can compensate for two drawbacks. First, such sequences can expand a schema graph's sub-tree exponentially. Since the schema graph is the 'construction plan' for any further data processing, transmitting and fast processing of the schema graph are important operations, and a rather compact graph is preferred. Second, such series result in related XML fragments that are in one-to-one relations, which need to have ID nodes as introduced in Section 3.3. $T$ is a threshold that corresponds to the degree of acceptable clipping tolerance.
3.6 Mapping from and to XPath

In general, applications based on XML access XML fragments by XPath expressions. Mapping such an XPath query to the patterns of a finite segmentation is easy. Since all patterns of the segmentation are disjoint, we just have to identify which patterns contribute to the result. Therefore, we represent the colored schema graph as an XML document and query it with the given XPath expression. Each color in the result represents a contributing pattern, and the join of the related pattern fragments is the minimal superset of the XML fragment selected by the given XPath expression. If the exact result is needed after joining, the obtained superset can be queried by a standard XPath evaluation engine, e.g. a SAX filter.

Mapping from ST-patterns to XPath is even simpler. Each node of a pattern represents a node test, each edge a child axis, and each split node a conjunctive filter criterion on its split parent in a corresponding XPath expression.

3.7 Segmentation in application context

As seen in Sections 3.4 and 3.5, a fitting segmentation, e.g. for the system introduced in Section 2, can be calculated in a preprocessing step. Thereafter, the colored schema graph, e.g. in number scheme representation form, can be published. Moreover, we suggest adapting the segmentation continuously according to the query behavior, if the overall clients' focus changes. For example, think of a train schedule, where the focus changes naturally with elapsing time. In such cases, an update of the colored schema graph and the information about the invalidated pattern segments has to be distributed.

A centralized technique that keeps track of focus changes is to let the client send the original XPath expression with the request towards the server. The server or an intermediate caching server can analyze the query, match it with the requested pattern, and calculate the amount of data that is not needed in the answered pattern fragment. This information can be used to identify inefficient segmentations and can thereby lead to an adjustment of the segmentation and a reduced response size.

Cache management for server and client can use the colored schema graph as an index structure for lookups and store the pattern fragments in joined form in their memory. Thus, finding the set of missing and locally available segments can be done fast by querying the locally available colored schema graph. As the originator of a request joins the pattern fragments as they arrive, the IDs introduced in Section 3.3 are used for accurate matching of any two pattern fragments using join optimization concepts [7, 18]. Any cache server contributing more than one segment can even send its pattern fragments in joined form, to reduce calculation overhead and the transmission of redundant ref. nodes in one-to-one relations.
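The lookup step just described, partitioning the patterns a query touches into locally cached and missing segments, reduces to a plain set operation once the colored schema graph has identified the contributing patterns. A minimal Python sketch, with illustrative names only:

```python
def plan_request(query_colors, local_cache):
    """Partition the patterns contributing to a query into fragments that
    are cached locally and segments that must be fetched remotely.

    query_colors: ids of the patterns obtained by evaluating the XPath
                  query against the locally available colored schema graph.
    local_cache:  dict mapping pattern id -> cached pattern fragment.
    """
    hits = {c: local_cache[c] for c in query_colors if c in local_cache}
    misses = [c for c in query_colors if c not in local_cache]
    return hits, misses

# Example: the query touches patterns p3 and p4, but only p3 is cached.
hits, misses = plan_request(["p3", "p4"], {"p3": "<offers>...</offers>"})
assert misses == ["p4"]
```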
An alternative to using filters to obtain the exact answer to the last XPath request, as introduced in Section 3.7, is to mark nodes during the join process as 'not belonging to the current request', thus defining a temporary view.

Like changes in the segmentation, updates in XML documents must be propagated. A simple solution could be a master server that coordinates updates and distributes the list of affected pattern fragments to indicate that they are outdated. Since ST-patterns guarantee that their decision criteria (the ref. nodes) are included in the pattern fragment, finding and updating affected patterns can be performed in a decentralized manner, and thus only a moderate amount of communication between the master server and any cache server is needed. For example, after updating an outdated value, the client can decide whether the changed node still belongs to the original pattern or belongs to a different pattern, and can publish that information.

3.8 Properties of finite segmentations

Besides the properties of ST-patterns discussed in Section 2, patterns of a finite segmentation have some properties which make them particularly suitable for XML data processing. As seen in Section 3.6, it is easy to find the set of patterns needed to answer any query. The found set is optimal, since all patterns in the segmentation are disjoint. The algorithm of Figure 7 finds a finite segmentation providing pattern fragments that answer frequent queries with no or minimal clipping offset. With finite segmentations, we have an instrument to build fast data processing modules for XML data. As discussed in Section 3.3, the use of ID nodes in coexisting pattern fragments with one-to-one relations turns out not to be a disadvantage, since they are transferred and stored redundancy-free if accessed in common. The additional IDs introduced to manage union, intersection, or difference joining of pattern fragments are an acceptable overhead compared to the achievable savings, e.g. with finite segmentation caching.

4. RELATED WORK

Tree patterns are well known in the context of XML data processing and are especially used to improve query response times. Searching for frequent XML tree patterns in XML documents is a widely adopted technique and is used for various applications, ranging from indexing optimal access paths to the formulation of various classes of XML queries. We follow these approaches in that we use frequent access tree patterns to achieve optimization goals. With the latter two approaches we have in common the use of tree patterns to specify subclasses of queries. Tree patterns represent the tree structure of XML query languages like XPath or XQuery, and are treated separately from the regular expressions also found in such queries.

In the context of querying and maintaining incomplete data, Abiteboul shows a solution for XML data. The presented incomplete data trees are similar to our colored schema graphs, in that they use conditions on the elements' data values and are based on DTDs. Different from our approach, their incomplete tree is used for fast calculation of missing parts in a single client. In comparison to all these approaches, we use tree patterns to identify sets of pattern fragments, include some information also found in regular expressions, and handle them in a search tree manner. A caching strategy based on frequently accessed tree patterns is introduced by Yang.
We extend the approach of classical patterns presented by Yang to ST-patterns including predicate filters, which enable us to express finer XML granularity. Our approach also differs in that we support cooperative caching by sets of pairwise disjoint patterns. A different approach to XML caching is to check whether cached data can contribute to a new request by testing the intersection of cache entries and an XPath query, and thereafter to compute difference fragments as partial results. Such tests are known to be NP-hard for XPath expressions, and difference computations are known to be resource consuming. In comparison, our approach focuses on efficient computation and thereby requires only minimal resource consumption.

5. SUMMARY AND CONCLUSIONS

We expect finite pattern segmentation to be a solution for splitting a huge XML document into handy atomic units to support fast data processing based on simple and fast intersection and containment decisions, e.g. in the area of caching, replication, or query processing. The drawback of using normalized data units is a clipping offset caused by answering a request with a slightly bigger superset. This is acceptable, since frequent requests can be answered with minimal or no clipping offset based on a well-adjusted, preprocessed segmentation.

Especially in the area of mobile data processing, it is important to minimize communication costs and preserve the mobile client's resources. Besides communication resources, we keep shared CPU resources to a minimum, because costly intersection or containment tests are reduced to simple lookups. In the context of collaborative data processing, it is important that participating clients interact and interchange data based on a set of predefined data units. Otherwise, the advantages of collaboration will be consumed by adjusting and comparing (slightly) different data objects. We are currently implementing a mobile peer-to-peer approach which will use finite segmentation caching for any data exchange. In our further research, we address the challenge of segmentation adaptation and update propagation for the overall system. Adapting ST-patterns towards dependent patterns, not containing the decision criteria, and distributed query processing [17, 18] based on ST-patterns seem to be further promising steps. We use these search tree patterns (ST-patterns) to model virtual schema expansion, which we intend to discuss in detail in future publications. Our solution is especially tailored to adapt to context switches in query behavior, supporting, e.g., a fine granularity in hot-spot areas.

REFERENCES

3. Raghav Kaushik, Philip Bohannon, Jeffrey F. Naughton, Henry F. Korth: Covering Indexes for Branching Path Queries. SIGMOD Conference 2002: 133-144
{"Source-Url": "http://dl.ifip.org/db/conf/ifip8/mobis2004/TurlingB04.pdf", "len_cl100k_base": 6647, "olmocr-version": "0.1.48", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 17145, "total-output-tokens": 8227, "length": "2e12", "weborganizer": {"__label__adult": 0.00032520294189453125, "__label__art_design": 0.0004608631134033203, "__label__crime_law": 0.0004284381866455078, "__label__education_jobs": 0.0012798309326171875, "__label__entertainment": 0.00012862682342529297, "__label__fashion_beauty": 0.00021028518676757812, "__label__finance_business": 0.000591278076171875, "__label__food_dining": 0.0003790855407714844, "__label__games": 0.0004489421844482422, "__label__hardware": 0.0013275146484375, "__label__health": 0.0007648468017578125, "__label__history": 0.0004360675811767578, "__label__home_hobbies": 0.00011289119720458984, "__label__industrial": 0.0006747245788574219, "__label__literature": 0.0004489421844482422, "__label__politics": 0.0003361701965332031, "__label__religion": 0.00051116943359375, "__label__science_tech": 0.26806640625, "__label__social_life": 0.00013375282287597656, "__label__software": 0.037200927734375, "__label__software_dev": 0.6845703125, "__label__sports_fitness": 0.00026154518127441406, "__label__transportation": 0.0005822181701660156, "__label__travel": 0.00025391578674316406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33299, 0.02127]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33299, 0.50409]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33299, 0.88955]], "google_gemma-3-12b-it_contains_pii": [[0, 1899, false], [1899, 4003, null], [4003, 5853, null], [5853, 7493, null], [7493, 10069, null], [10069, 11922, null], [11922, 13692, null], [13692, 16353, null], [16353, 19053, null], [19053, 21034, null], [21034, 23725, null], [23725, 26555, null], [26555, 29155, null], [29155, 32014, null], [32014, 33299, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1899, true], [1899, 4003, null], [4003, 5853, null], [5853, 7493, null], [7493, 10069, null], [10069, 11922, null], [11922, 13692, null], [13692, 16353, null], [16353, 19053, null], [19053, 21034, null], [21034, 23725, null], [23725, 26555, null], [26555, 29155, null], [29155, 32014, null], [32014, 33299, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33299, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33299, null]], "pdf_page_numbers": [[0, 1899, 1], [1899, 4003, 2], [4003, 5853, 3], [5853, 7493, 4], [7493, 10069, 5], [10069, 11922, 6], [11922, 13692, 7], [13692, 16353, 8], [16353, 19053, 9], [19053, 21034, 10], [21034, 23725, 11], [23725, 26555, 12], [26555, 
29155, 13], [29155, 32014, 14], [32014, 33299, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33299, 0.0]]}
Comparative Evaluation of Packet Classification Algorithms for Implementation on Resource Constrained Systems

Gianluca Varenni*, Federico Stirano***, Elisa Alessio**, Mario Baldi*, Loris Degioanni*, Fulvio Risso*
* Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy
** Telecom Italia Labs - System On Chip, Torino, Italy
*** Istituto Superiore Mario Boella, Torino, Italy
{gianluca.varenni,mario.baldi,loris.degioanni,fulvio.risso}@polito.it; stirano@ismb.it; elisa.alessio@tilab.com

Abstract – This paper provides a comparative evaluation of a number of known classification algorithms that have been considered for both software and hardware implementation. Differently from other sources, the comparison has been carried out on implementations based on the same principles and design choices. Performance measurements are obtained by feeding the implemented classifiers with various traffic traces in the same test scenario. The comparison also takes into account the implementation feasibility of the considered algorithms in resource constrained systems (e.g. embedded processors on special purpose network platforms). In particular, the comparison focuses on achieving a good compromise between performance, memory usage, flexibility, and code portability to different target platforms.

I. INTRODUCTION

A vast literature on classification algorithms and their performance does exist, but our work is necessary, and hence relevant, since existing evaluations do not allow a meaningful comparison based on real-life data. In fact, a comparison based on the existing literature could be carried out only according to analytical worst-case bounds. Even though figures on the performance of classification algorithm implementations in real-life scenarios can be found, they are part of studies on a single algorithm: the measurement scenarios are different and the implementations are not uniform; consequently, the results are not comparable.

This work studies known classification algorithms with respect to their suitability for being (i) deployed for common networking applications (i.e., not optimized for a specific one), and (ii) implemented in embedded systems, i.e., systems with strict requirements, limited resource availability, and no specific hardware support, such as content addressable memories.

A (packet) classifier is a collection of rules, usually called a ruleset, that is used to partition network traffic into different groups, sometimes called flows or buckets. Every rule specifies a subset of the network traffic, for example "IP traffic" or "traffic sent from host 1.2.3.4", thus somehow characterizing the packets grouped into that flow. When a packet satisfies a rule, the packet is said to match the given rule. A classification algorithm determines whether a packet matches at least one rule of a classifier. Packet classifiers are widely used in IP networking, where rules usually involve one or more packet header fields (e.g. IP source address, TCP destination port).
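To make these definitions concrete before the formal notation introduced next, here is a small Python sketch of a value/mask ruleset and the matching test. It is an illustration only, not code from the paper, and all names are invented:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A single classifier rule: one (value, mask) pair per header field."""
    components: dict  # field name -> (value, mask)

    def matches(self, packet: dict) -> bool:
        # A packet matches iff every component matches its header field.
        return all((packet[f] & mask) == (value & mask)
                   for f, (value, mask) in self.components.items())

def classify(ruleset, packet):
    """Linear search: return the index of the first matching rule, or None."""
    for i, rule in enumerate(ruleset):
        if rule.matches(packet):
            return i
    return None

# Example: match traffic sent from host 1.2.3.4 (IPv4 address as an int).
ip = lambda a, b, c, d: (a << 24) | (b << 16) | (c << 8) | d
r = Rule({"ip_src": (ip(1, 2, 3, 4), 0xFFFFFFFF)})
assert classify([r], {"ip_src": ip(1, 2, 3, 4)}) == 0
```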
Each rule $R$ is composed of $d$ components, so that each component $R[i]$ applies to a specific header field. When more than one field is considered, the classifier is said to be multifield. As an example, Table 1 shows a small multifield ruleset that includes value/mask rules on the source and destination IP addresses.

TABLE 1. SAMPLE MULTIFIELD RULESET

| Rule | IP source | IP destination |
|------|-----------|----------------|
| 1    | …         | …              |
| 2    | …         | …              |
| 3    | …         | …              |

Packet classifiers are widely used for various network applications, many of which are related to quality of service (QoS) provision, and consequently in several types of network devices that might be implemented as, or composed of, embedded systems. Examples of QoS related applications of packet classifiers are:

- Traffic conditioning and shaping appliances; they use multifield classifiers, usually on session tuples, to separate traffic flows in order to be able to apply admission, marking, and shaping policies to them. Traffic conditioning appliances or functionality are fundamental in the deployment of both the IntServ [1] and the DiffServ [2][3] approach.
- IntServ routers; they use multifield classifiers, usually on session tuples, to separate traffic flows in order to store packets in different queues, on which scheduling algorithms suitable to provide the required QoS are applied.
- DiffServ routers; they use single-field classifiers with a limited ruleset concerning the value of the DS (Differentiated Services) field [3] to separate packets belonging to different traffic classes, in order to handle them according to the corresponding per-hop behavior (PHB).

This work aims at identifying classification algorithms that can be effectively implemented on embedded systems and deployed in any of the above listed applications. Execution in embedded systems imposes strict limits on the characteristics of the algorithms, such as simple (static) memory management, limited code size, limited CPU usage requirements, limited data storage necessities, and adaptability to various hardware platforms and architectures.

This paper is organized as follows. The various algorithms proposed in the literature (Section II.B) as well as the metrics commonly deployed to evaluate them (Section II.A) are first surveyed. The implementation objectives and the guidelines followed to develop software for embedded systems are then shown in Section III. Based on this, selection criteria are formulated and used to identify a limited set of algorithms on which to perform a more detailed and targeted comparative evaluation. Section IV provides the results of the comparative evaluation conducted with real-life traffic traces, and final concluding remarks are provided in Section V.

II. THEORETICAL ANALYSIS OF CLASSIFICATION ALGORITHMS

Among others [5], the comparative survey of classification algorithms by Gupta and McKeown [4] provides a detailed comparison of the most important known algorithms for multifield classification. Even though this work represents a complete and interesting tutorial on classification algorithms, it does not present any performance comparison based on real-life network traffic. Our work leverages some of the criteria and results presented by Gupta and McKeown to select a reduced set of classification algorithms that best fit an implementation in embedded systems.
Another contribution of our work lies in the detailed and homogeneous evaluation of the selected algorithms, which have been implemented with common criteria and evaluated in a common test bed using real traffic captures.

A. Evaluation metrics and parameters

The metrics adopted are the ones commonly used by various authors [6][7][8][9][11][12] in the literature, including Gupta and McKeown in [4]: search time, memory consumption, and update time. Search time (T), i.e. the amount of time needed to classify a packet, is the most obvious metric; in order to devise a measurement (at least partially) independent of the particular test bed, the search time is measured in terms of CPU clock cycles. Memory consumption (M) is the amount of memory needed to store the ruleset in some specific data structure in memory, computed either at instantiation or at run time. Memory consumption is an excellent indicator of the compression capability of the algorithm, measured as the ratio between the ruleset size (i.e. number of rules and number of fields) and its footprint in memory. The update time (U) is the amount of time necessary to insert, delete, or modify a rule in the running ruleset. An interesting additional metric is the number of memory accesses performed by the algorithm, but it is not widely used because obtaining this data is far from trivial.

The three metrics described above generally depend on the following parameters:
- the number of rules $N$ in the ruleset;
- the number of fields $d$ globally used within the $R[i]$ components of each rule;
- the length of each field, in bits, called $W_i$.

In order to simplify the evaluation of the algorithms, we will use a new fictitious parameter $W$, defined as $W = \max(W_i)$. Section II.C will provide some insight into the implications of this simplification for the comparative evaluation presented later.

B. Theoretical complexity of some well-known algorithms

In order to have a first general comparison of the classification algorithms and to select which to adopt for a more thorough analysis, the theoretical worst-case bounds for the metrics identified in Section II.A were taken into consideration. Table 2 shows the formulas expressing the bound for each of the metrics. These formulas were either taken directly from the literature, when available, or inferred from the paper describing the corresponding algorithm.
TABLE 2. THEORETICAL WORST-CASE BOUNDS OF THE CONSIDERED ALGORITHMS

| Algorithm | Search time (T) | Memory usage (M) | Update time (U) |
|---|---|---|---|
| Linear search | $N$ | $N$ | $1$ |
| Hierarchical tries [4] | $W^d$ | $NdW$ | $d^2W$ |
| Set pruning tries [11] | $dW$ | $N^d$ | $N^d$ |
| Heap-on-Trie [6] | $W$ | $NW$ | $W^2 \log N$ |
| Binary search-on-Trie [6] | $W \log N$ | $NW$ | $W^2 \log N$ |
| Cross-producting [7] | $dW$ | $N^d$ | N/A |
| Hierarchical Cuttings [9] | $d$ | $N^d$ | N/A |
| Tuple Space Search [8] | $N$ | $N$ | $N$ |
| Recursive Flow Classification [12] | $d$ | $N^d$ | N/A |

Hardware-based [14] and ad-hoc algorithms [10] were not included in this evaluation, since either the selected metrics cannot be applied to them, or a comparison based on them is meaningless due to the particular nature of such algorithms. The linear algorithm was instead included because it is widely used by software based firewalls (e.g. Linux netfilter/iptables [13]) and it is an excellent baseline against which the other algorithms can be compared, especially in the implementation and testing part of this work.

The bound on the update time is not shown for some of the algorithms, since they do not explicitly support dynamic updates of the running ruleset. This stems from the fact that these algorithms preprocess the ruleset into a specific custom data structure that does not support insertion or removal of rules. Instead, in order to cope with ruleset changes, the whole ruleset must be re-processed, thus yielding a new data structure. Such an approach is usually inefficient, since the preprocessing time is typically quite high.

C. Practical issues with the theoretical complexity

The worst cases in Table 2 show quite clearly that the linear search algorithm outperforms the other algorithms in terms of memory consumption and update time. Its search time performance is comparable to the other algorithms when the number of rules is not large; for example, when classifying UDP flows or TCP connections ($d=5$ and $W=32$) the break-even point is one or two hundred rules. In fact, the search time of the other algorithms depends on the total number of bits $dW$ of the various fields in each rule, because the classification algorithm processes the classification fields bit by bit; in particular, this is the approach used by all the algorithms based on tries. Consequently, the linear algorithm might be particularly interesting in cases, such as IPv6 addresses, in which the total number of bits $dW$ is high.

As a matter of fact, the theoretical analysis conducted above is limited by several factors:
- The performance of many classification algorithms when used with real traffic might be very different from the theoretical results shown in Table 2; this is particularly true for heuristics, which are engineered to achieve good performance in the average case, not in the worst case.
- The theoretical complexities shown in Table 2 have been devised assuming that all fields used for the classification have the same length, equal to the length of the largest one; this simplification can lead to unrealistic theoretical results
(e.g. in the case of IPv6 session identifiers, we would consider the length of a TCP/UDP port to be 128 bits, which is completely misleading). A solution to this problem could be to re-formulate each metric taking into consideration the various fields' lengths $W_i$, but this is out of the scope of this paper.

III. IMPLEMENTATION

An objective of this work is to identify and evaluate the packet classification algorithms that are most suitable for an implementation on resource constrained systems. When writing software for an embedded system, specific constraints have to be taken into account in order to guarantee good performance and flexibility in terms of code portability to different target platforms; hence, several aspects have been considered while implementing the above mentioned algorithms.

First of all, the main goal of our work was to write code portable to different target platforms, independent of the processor and the operating system used. To accomplish this objective, we developed a software library written in pure ANSI C, trying to avoid any use of OS/compiler support functions that might not be available on special purpose processors. The crucial point in generating portable code is to separate the coding of functional modules from the code related to the specific target environment. This can be achieved by defining some sort of API, which avoids the use of platform dependent functions directly inside the code.

A second consideration is that the code should use static memory allocation, since a dynamic allocation infrastructure is not guaranteed to be present on all target platforms.

Another requirement is that the code should avoid the use of explicit pointers in the raw data structures containing the ruleset; in fact, sometimes the code creating and initializing the data structure and the code that classifies packets using this structure run either on different processors (e.g. network processors using multiple processing units) or within different address spaces (e.g. code running partially at kernel level and partially at user level on a general purpose PC). A commonly used solution to this problem is to make use of indirect addressing, using only displacement pointers inside the data structure and keeping the base pointer outside it, as the sketch at the end of this section illustrates.

In a network embedded system we can distinguish between data-plane functions (related to packet processing functionality, with high performance requirements), which usually run on specific processing engines, and control-plane functions (for data structure initialization and configuration, usually with high memory requirements), which may run on a general purpose processor. Thus, one general issue is to modularize the code as deeply as possible, trying to separate the main algorithm functionality, which may have high performance requirements, from the control and configuration functions, which may run on a different processor.
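The following Python sketch mimics the displacement-pointer guideline described above; it is an illustration of the idea only (the paper's library is ANSI C, and all names here are invented). Links between nodes are integer offsets into a flat array, so the structure remains valid when copied verbatim into a different address space:

```python
from array import array

NODE = 3  # slots per node: [key, left_offset, right_offset]; offset 0 = null

def new_pool():
    # Slot block 0 is reserved so that offset 0 can encode a null link.
    return array("l", [0] * NODE)

def add_node(pool, key):
    off = len(pool)
    pool.extend([key, 0, 0])
    return off                      # a displacement, not a machine pointer

def set_left(pool, off, child):
    pool[off + 1] = child

def left(pool, off):
    return pool[off + 1]

# Example: a two-node structure that survives a byte-for-byte copy,
# because it contains no absolute addresses.
pool = new_pool()
root = add_node(pool, 42)
set_left(pool, root, add_node(pool, 7))
copy = array("l", pool)             # simulate moving to another address space
assert copy[left(copy, root)] == 7
```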
A. Selecting the algorithms to be implemented

Given the previous considerations, and taking into account the practical issues highlighted in Section II.C, we decided which algorithms to implement to meet our objectives.

1. We excluded Cross-Producting and Set-Pruning Tries, because their memory consumption grows as $N^d$, which is extremely critical even with rather low values of $N$ and $d$ (e.g. with $N=100$ rules and $d=4$ fields the memory consumption is about $10^8$). While RFC and Hierarchical Cuttings have the same worst-case memory consumption, they are heuristic algorithms; therefore this value alone is not enough to rule them out.

2. We excluded Heap-on-Tries and Binary trees on Tries, because their memory consumption and search time are proportional to $W^d$, which is too large (e.g. this value is larger than $10^{10}$ when the maximum field size $W$ is 128 bits and the number of fields $d$ is 5); moreover, the paper presenting these algorithms does not give any hint about a working implementation of them. Although the Hierarchical Tries algorithm has the same search time as the two previous ones, it has not been excluded, because of its excellent memory consumption characteristics.

3. We excluded Hierarchical Cuttings, because this algorithm is patent pending.

4. Tuple Space Search was excluded essentially because it was decided that the comparative study would include a single heuristic algorithm, and from the information we gathered in the literature the implementation details of RFC seemed clearer.

In summary, we decided to implement the Linear algorithm, to be used as a baseline for the comparison, the Hierarchical Tries algorithm (the only non-heuristic algorithm remaining after the screening described above), and the Recursive Flow Classification algorithm.

IV. PERFORMANCE EVALUATION

Although our implementation targets both general and special purpose platforms, so far it has been validated through extensive tests only on a standard personal computer. We did not consider tests on special purpose platforms in the context of this work, since it specifically aims at giving a homogeneous comparison of the implementations of the various algorithms by measuring their performance in real-life working conditions. Moreover, the obtained experimental results are compared against the theoretical worst-case results. Tests on special purpose platforms will be carried out as future work, in an effort to evaluate the performance disparities across different platforms.

A. Testbed

The tests were conducted using a network trace taken from our university link to the Italian trans-university backbone. This trace has the following characteristics:
- duration: 6 hours
- total packets: 24 million
- total bytes: 13 GBytes
- average traffic: 5 Mbps, 1100 pps.

The implemented algorithms have been compiled with the Microsoft Visual C++ 6.0 SP5 compiler. We used an Intel Pentium IV 2 GHz workstation with 1 GB RAM, running Microsoft Windows XP. The measurements were taken with the x86 assembler instruction RDTSC (Read TimeStamp Counter), which returns the number of CPU clock ticks since machine bootstrap; a sketch of this measurement scheme closes this subsection.

We used the ruleset running on the router connected to the same link on which we captured the network trace (the packets were captured immediately before the router classifier); this ruleset is formed of 349 rules, each rule working on these fields:
- source / destination IPv4 address
- Layer 4 protocol (TCP/UDP/ICMP/any)
- source / destination TCP/UDP port.

In order to evaluate the algorithms with rulesets of different sizes, we extrapolated some fictitious rulesets from the original one:
- 2 rulesets of 50 rules (rules 1-50 and 51-100 of the original ruleset)
- 2 rulesets of 100 rules (rules 1-100 and 101-200 of the original ruleset)
- 1 ruleset of 200 rules (rules 1-200 of the original ruleset).
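As a rough illustration of the measurement methodology (not the paper's C code), the following Python sketch computes an average per-packet search time; time.perf_counter_ns() stands in for the raw RDTSC tick counter used in the paper:

```python
import time

def mean_search_time(classify, ruleset, packets):
    """Average per-packet classification time in nanoseconds.

    classify(ruleset, packet) is any classification routine, e.g. the
    linear-search sketch shown earlier; with RDTSC one would count raw
    CPU clock ticks instead of nanoseconds.
    """
    start = time.perf_counter_ns()
    for packet in packets:
        classify(ruleset, packet)
    return (time.perf_counter_ns() - start) / len(packets)
```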
B. Search time test results

This test aims at measuring the average packet classification time for the various rulesets; the results are shown in Table 3. The results show that the mean search time grows linearly with the number of rules in the case of the linear algorithm; in the case of the Hierarchical Tries algorithm, the search time also seems to grow linearly, but with a much lower slope than the linear algorithm. The RFC algorithm, instead, shows a mean search time that is independent of the number of rules in the ruleset.

By comparing the results in Table 3 with the worst cases in Table 2, we can note that:
- the linear algorithm performs worse than the other two algorithms in our tests, compared to the theoretical results;
- the Hierarchical Tries algorithm seems to be loosely dependent on the number of rules $N$, while its worst case is independent of this parameter. This behavior could be due to the fact that the number of recursive visits of the tries grows with the number of rules $N$.

C. Memory consumption test results

We measured the amount of memory needed to store the raw data structure containing the ruleset for each algorithm. The results of this test are shown in Table 4.

D. Preprocessing time test results

The last test measures the amount of time needed to process the various rulesets and create the internal data structures used by each classification algorithm. The results of this test are shown in Table 5.

TABLE 3. AVERAGE SEARCH TIME (IN CLOCK TICKS) FOR THE GIVEN RULESETS

| Ruleset | Rules | Linear | Hierarchical Tries | RFC |
|---|---|---|---|---|
| Ruleset 1-50 | 50 | … | … | … |
| Ruleset 51-100 | 50 | … | … | … |
| Ruleset 1-100 | 100 | … | … | … |
| Ruleset 101-200 | 100 | … | … | … |
| Ruleset 1-200 | 200 | … | … | … |
| Ruleset 1-349 | 349 | … | … | … |

TABLE 4. MEMORY CONSUMPTION (IN BYTES) FOR THE GIVEN RULESETS

| Ruleset | Rules | Linear | Hierarchical Tries | RFC |
|---|---|---|---|---|
| Ruleset 1-50 | 50 | … | … | … |
| Ruleset 51-100 | 50 | … | … | … |
| Ruleset 1-100 | 100 | … | … | … |
| Ruleset 101-200 | 100 | … | … | … |
| Ruleset 1-200 | 200 | … | … | … |
| Ruleset 1-349 | 349 | … | … | … |

The outcome of the preprocessing test shows that the trend is roughly linear in the number of rules for the linear and the Hierarchical Tries algorithms; moreover, the latter is about 100 times slower than the former, but the overall time to process the original ruleset containing 349 rules seems acceptable (less than 10 ms on the test platform). The RFC algorithm shows instead a rather interesting behavior: the trend is roughly linear in the number of rules up to 200 rules, with a cost about three orders of magnitude higher than the Hierarchical Tries algorithm; when we compute the data structure for the entire ruleset of 349 rules, the preprocessing time explodes to about 20 minutes. This explosion is due to two main factors:

1. RFC is a heuristic algorithm, so each metric depends on the particular ruleset used for the test.
2. Some experiments with this algorithm have shown that this behavior is largely due to rules containing a large number of "any" values in their components.

V. CONCLUSIONS

A continuously growing number of network appliances deploy packet classifiers to implement quality of service, security, and traffic engineering functionality. As a consequence, in recent years several authors have proposed novel algorithms to achieve better results in terms of classification time and memory consumption.
Many works provide case studies of such algorithms applied to a large number of real-life rulesets and network traffic traces. However, a fair comparison with common criteria and test cases had not yet been provided. Our main contribution in this work is filling this gap by providing a homogeneous evaluation of three classification algorithms that have been implemented following the same criteria.

Our tests have shown that the Recursive Flow Classification algorithm outperforms, as expected, the other two algorithms in terms of search time. In fact, its heuristic is able to effectively exploit the characteristics of the real-life rulesets considered. However, it is known that this algorithm does not support dynamic updates, and our tests have shown that its preprocessing time is unpredictable. The Hierarchical Tries algorithm shows acceptable performance in terms of classification time, being less than one order of magnitude worse than RFC. In turn, it features low memory consumption, outperforming RFC by more than one order of magnitude. In practice, we have shown that the Hierarchical Tries algorithm is preferable over RFC when memory consumption and preprocessing time are more critical than classification time alone. Finally, our tests confirm that the linear algorithm, despite having the worst classification time with large rulesets, is the one that ensures the lowest memory consumption, the fastest preprocessing phase, and the most flexible support for dynamic updates.

VI. REFERENCES

http://www.ietf.org/html.charters/diffserv-charter.html
{"Source-Url": "https://iris.polito.it/retrieve/handle/11583/1494576/e384c42d-f05e-d4b2-e053-9f05fe0a1d67/05Contel-Classifiers.pdf", "len_cl100k_base": 5312, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18947, "total-output-tokens": 6644, "length": "2e12", "weborganizer": {"__label__adult": 0.0004630088806152344, "__label__art_design": 0.0002892017364501953, "__label__crime_law": 0.0007390975952148438, "__label__education_jobs": 0.0004897117614746094, "__label__entertainment": 0.00016760826110839844, "__label__fashion_beauty": 0.00021922588348388672, "__label__finance_business": 0.00036835670471191406, "__label__food_dining": 0.0004453659057617187, "__label__games": 0.0010385513305664062, "__label__hardware": 0.006122589111328125, "__label__health": 0.0009775161743164062, "__label__history": 0.0005321502685546875, "__label__home_hobbies": 0.00010508298873901369, "__label__industrial": 0.0008745193481445312, "__label__literature": 0.00031948089599609375, "__label__politics": 0.0005145072937011719, "__label__religion": 0.0005803108215332031, "__label__science_tech": 0.410888671875, "__label__social_life": 0.0001094341278076172, "__label__software": 0.0264129638671875, "__label__software_dev": 0.54638671875, "__label__sports_fitness": 0.0005211830139160156, "__label__transportation": 0.0011701583862304688, "__label__travel": 0.0002808570861816406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27770, 0.06082]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27770, 0.48927]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27770, 0.903]], "google_gemma-3-12b-it_contains_pii": [[0, 886, false], [886, 6163, null], [6163, 12255, null], [12255, 18801, null], [18801, 23134, null], [23134, 27770, null]], "google_gemma-3-12b-it_is_public_document": [[0, 886, true], [886, 6163, null], [6163, 12255, null], [12255, 18801, null], [18801, 23134, null], [23134, 27770, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27770, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27770, null]], "pdf_page_numbers": [[0, 886, 1], [886, 6163, 2], [6163, 12255, 3], [12255, 18801, 4], [18801, 23134, 5], [23134, 27770, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27770, 0.2375]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
5757a3cb372d378c2efd5fcf3c89467bb274a3fa
Programming Generic Graphical User Interfaces

Peter Achten, Marko v. Eekelen, Rinus Plasmeijer, and Arjen v. Weelden
Institute for Computing and Information Sciences, Radboud University Nijmegen

Keywords: Graphical User Interfaces, Functional Programming, Generic Programming
Submission Category: Full paper
Pages: 12
Corresponding Author: Peter Achten, Toernooiveld 1, 6525ED Nijmegen. Phone: (+31)(0)24-3652483; Fax: (+31)(0)3652525; E-mail: P.Achten@cs.ru.nl.

Abstract. The GEC Toolkit offers programmers a high-level, generic style of programming Graphical User Interfaces (GUIs). Programmers are not concerned with low-level widget plumbing. Instead, they use mathematical data models that reflect both the application logic and the visualisation. The data models and the logic are expressed as standard functional-style data types and functions over these data types. This significantly brings down the learning effort. In this paper we present an improved programming method for this toolkit and illustrate it by means of a complicated case study: that of a family tree editor. The new programming method brings GUI programming within the reach of every novice functional programmer.

1 Introduction

In this paper we present an improved programming method for the GEC Toolkit [4-7]. The GEC Toolkit is a high-level toolkit for the construction of Graphical User Interfaces (GUIs) in terms of mathematical data models and pure functions. Its main features are:

- Automation: for every conceivable data model, a graphical editor component is automatically derived that allows users to edit values of that type.
- Compositional: composition comes for free, because automation works for all (composite) types.
- Abstract: programmers do not need to know anything about conventional widget-based GUI APIs and their management. Instead, only data models are manipulated, with pure functions.

The GEC Toolkit is based on the pure and lazy functional programming language Clean [21, 22]. Functional programming languages such as Clean and Haskell [20] have a sound theoretical foundation: the λ-calculus. One of the main goals of the Clean project has been to demonstrate that the elegance and succinctness of functional programs do not hamper their efficient execution. Contributions of the Clean project in this respect are its strictness analysis, uniqueness type system, and high-quality code generator.

In the Clean project, there are about 13 years of research and experience with GUI programming, resulting in the Object I/O library [2,3], which is also available for Haskell [1]. We have constructed large GUI applications with Clean and Object I/O. Two examples are the Integrated Development Environment of Clean itself and the proof tool assistant Sparkle [12]. Although Object I/O offers a high level of abstraction, there is still a steep learning curve for programmers to become proficient. The GEC Toolkit attempts to tackle this problem by taking a radical point of view: the programmer should exclusively model his GUI instead of realizing it in a widget-based style. The model is expressed using standard functional data types, and the behaviour is expressed using functions over these domains. This is standard material in any functional programming course. Hence, the GEC Toolkit brings GUI programming within easy reach of every functional programmer.

In this paper we describe the programming method of GUI programming using the GEC Toolkit. Part of this material has been presented earlier [5-7]. This is covered in Section 2.
The contribution of this paper consists of two parts. (i) We present a programming method for the GEC Toolkit, realized by means of an improved abstraction mechanism. This is presented in Section 3. (ii) We illustrate the improved programming method by means of a complicated case study: a family tree editor, in Section 4. We discuss related work in Section 5 and conclude in Section 6.

2 The GEC Toolkit

The key technology on which the GEC Toolkit has been built is generic programming [16,15]. With this technique, the programmer defines a kind-indexed family of functions that have a uniform type scheme. Generic programming has been built into Generic Haskell [11] and Clean [8]. The main features of this style of generic programming are:

- Only a few function definitions suffice to specify an algorithm for any conceivable custom data type. These function definitions typically correspond with the inductive structural elements of types.
- Besides this minimal number of function definitions, the programmer is allowed to specialize the algorithm for specific types. This feature gives generic programming its flexibility, which we use extensively in this paper.

The GEC Toolkit uses generic programming to automatically create a Graphical Editor Component (GEC) for any conceivable data type t. A GEC is a GUI component that always has a value v :: t and that can be edited by the user. By editing, we mean any user manipulation of the presented value. This can be keyboard input for strings or numbers, but we also consider button presses to be value-editing actions. Editing is type-safe: the value of a GEC can only be changed in such a way that any new value is again of type t.

The generic (kind-indexed family of) function(s) gGEC that creates GECs is declared in Clean as follows¹:

```
generic gGEC t :: GECFunction t (PSt ps)
```

The type synonym (GECFunction t env)² denotes a function that takes an initial value of type t and an environment of type env. It creates a GEC_t in that environment. It returns the updated environment, but also the methods (of type (GECMethods t env)) that a programmer can invoke to obtain access to the GEC_t in the environment. We will not use the GEC methods in this paper.

```
:: GECFunction t env :== t env -> (GECMethods t env, env)
```

The environment parameter is instantiated with (PSt ps). This is an Object I/O type that represents the explicit GUI environment that is passed along all GUI callback functions. In pure functional languages, side effects are modelled by passing environments around, either explicitly, as in Clean, or implicitly, as state monads do in Haskell.

gGEC is a generic function, and hence it can create a GEC for any conceivable type. Figure 1 shows the GECs of two values of basic type (Int and String) and of two composite types ((Int, String) and [Int]³):

Fig. 1. Values v of type t and their corresponding GEC_t.

¹ generic f t :: (T t) introduces the generic function f that is generic in type argument t. (T t) is the type of f.
² :: T1 :== T2 introduces the type synonym T1 for type T2.
³ [T] is the type list of T.
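The structural recursion that gGEC performs over types can be mimicked, very roughly, over values in a dynamically typed language. The following Python sketch is only an analogy (text prompts instead of widgets, value structure instead of type structure) and is not part of the GEC Toolkit:

```python
def edit(value):
    """Derive an 'editor' from the structure of a value, the way gGEC
    derives a GEC from the structure of a type. Each branch is
    type-preserving, mirroring the type safety of GEC editing."""
    if isinstance(value, bool):          # checked before int: bool is an int subtype
        return input(f"bool [{value}]: ").strip().lower() in ("y", "yes", "true")
    if isinstance(value, int):
        return int(input(f"int [{value}]: ") or value)
    if isinstance(value, str):
        return input(f"str [{value}]: ") or value
    if isinstance(value, tuple):         # products: edit component-wise
        return tuple(edit(v) for v in value)
    if isinstance(value, list):          # lists: edit element-wise
        return [edit(v) for v in value]
    raise TypeError(f"no editor for {type(value).__name__}")

# edit((1, "hello", [2, 3])) walks the whole structure, analogous to a
# GEC for the type (Int, String, [Int]).
```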
GUIs typically consist of traditional elements such as buttons, edit fields, radio buttons, and check buttons. These have been provided in the GEC Toolkit using the specialization mechanism of generic programming. This means that for these GUI elements new data types⁴ have been introduced that model them. Figure 2 gives the types of some of them and also shows what they look like when applied to gGEC.

Another issue that needs to be addressed with GUIs is the layout of elements. The default layout strategy of the GEC Toolkit is to arrange data constructor arguments below each other, with the top element to the right of the data constructor itself. A number of specialized data types have been defined to influence the layout of elements.⁵ Let v1 :: t1 and v2 :: t2 be given. Then (v1 <*|> v2) :: (t1 <*|> t2) puts v2 below v1, with their left edges aligned; analogously, the combinators <|*|> and <|*> align the centers and the right edges. (v1 <**> v2) :: (t1 <**> t2) puts v2 to the right of v1, with their top edges aligned; again there are analogous combinators (such as <~*>) that align the centers and the bottom edges.

The GEC Toolkit is provided with an abstraction mechanism that allows the creation of GECs with the same data model type d but with different view model types v [6]. Such an abstraction is created by converting values of type d to v and vice versa. In many cases this conversion is a bijection of type (Bimap d v)⁶:

```
:: Bimap d v = { map_to :: d -> v, map_from :: v -> d }
```

The generic gGEC function is specialized for the abstract data type (AGEC d). A value of this type is created with the constructor function mkAGEC⁷, given a bijection of type (Bimap d v) and an initial value of type d. The generic function is specialized in such a way that it creates a GEC that is encapsulated within the (AGEC d) value and that acts as a GEC_d in the data domain of which it is part.

```
mkAGEC :: (Bimap d v) d -> AGEC d | gGEC{|*|} v
```

Given g :: (AGEC d), ^^g is the current value of type d, and (g ^= new) is a new value of type (AGEC d) with current value new :: d. These operations obey the simple law ^^(g ^= new) = new.

```
(^^) :: (AGEC d)   -> d
(^=) :: (AGEC d) d -> AGEC d
```

Abstraction is crucial to obtain easily customizable domain data models. As an example, consider the following editors of type (AGEC Int), which can be used, and freely exchanged, within the very same domain data model: intAGEC is an integer value editor; dynamicAGEC is an integer expression editor [7] in which only those Clean expressions can be edited that yield an Int type; counterAGEC is a spin-button editor.

⁴ :: T = C1 | ... | Cn introduces the type constructor T with data constructors Ci.
⁵ We use infix type constructors here for clarity, although Clean does not allow this.
⁶ In fact, we allow a more general conversion relation between domain and view, but that is outside the scope of this paper. Please consult [6] for the more general version.
⁷ In a type definition of a function, the used overloaded and generic functions are listed behind |.
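To make the Bimap contract concrete outside the Clean setting, here is a small Python analogue (illustrative only); the roundtrip check mirrors both the bijection requirement and the AGEC law ^^(g ^= new) = new:

```python
class Bimap:
    """A pair of conversions between a domain model d and a view model v.
    AGEC-style abstraction relies on map_from(map_to(d)) == d."""

    def __init__(self, map_to, map_from):
        self.map_to = map_to
        self.map_from = map_from

    def roundtrips(self, d):
        return self.map_from(self.map_to(d)) == d

# View an integer list as a (length, elements) pair, so the view can show
# redundant information (the count) without changing the domain value.
counted = Bimap(map_to=lambda xs: (len(xs), xs),
                map_from=lambda view: view[1])
assert counted.roundtrips([1, 2, 3])
```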
We have developed the following programming method to effectively construct GUI applications with the GEC Toolkit:

1. Develop the pure domain data model D without any abstraction.
2. Develop a view data model V that uses abstraction in the right places.
3. Create the (Bimap D V) which contains the transformations between D and V.
4. Create the abstract editor (AGEC D) using the (Bimap D V).

We illustrate the programming method by means of the following code fragment:

```
:: D     = ... [Int] ...
:: V     = ... (AGEC [Int]) ...
:: ListV = ListV (Maybe (Int <~*> ListV))

convertList :: [Int] -> AGEC [Int]
convertList = mkAGEC { map_to = toView, map_from = toDomain }
where
    toView :: [Int] -> ListV
    toView []       = ListV Nothing
    toView [x : xs] = ListV (Just (x <~*> toView xs))

    toDomain :: ListV -> [Int]
    toDomain (ListV Nothing)            = []
    toDomain (ListV (Just (x <~*> xs))) = [x : toDomain xs]
```

The domain data model D has an integer list component whose elements need to be rendered horizontally. Therefore, the view data model V uses abstraction over the integer list. The conversions between D and V need to transform [Int] values into (AGEC [Int]) values, and vice versa. This is defined by convertList, which implements the view of the abstract element as ListV; optional values are handled with the Maybe⁸ type. ListV must be a new type, because list is a recursive data type. This is also reflected in the recursive structure of the conversion functions toView and toDomain.

⁸ :: Maybe a = Just a | Nothing. This type is useful for handling optional values.

3 The Improved GEC Toolkit Programming Method

In the previous section we introduced the GEC Toolkit and its programming method. The programming method relies on the abstraction mechanism of the GEC Toolkit. We identify the following issues with this mechanism:

1. The upside of abstraction is that the programmer does not need to change her code for those (sub)types v that have been abstracted to (AGEC v) when switching between abstract components. The downside is that she does have to change her code for those (sub)types for which she decides only afterwards whether they should become abstract or concrete. This is a normal consequence of using abstraction.
2. Recursive data domain (sub)types can only be made abstract by introducing new types and recursive conversion functions.

It should be noted that these issues do not decrease the expressive power essentially, but only stylistically.

The improvement that we propose is the following. Instead of handling the complete transformation from D values to V values and vice versa in one go, we should identify those (sub)types D_i of D to which we want to apply abstraction, and replace them with (AGEC D_i). This leads to a family of functions f_i :: D_i -> (AGEC D_i). Now we can specialize each member of this family as follows:

```
gGEC{|D_i|} dv env = specialize f_i dv env
```

and we are done! The technical breakthrough behind this apparently simple procedure has been accomplished with the new and complex GEC Toolkit function

```
specialize :: (d -> AGEC d) -> GECFunction d (PSt ps)
```
Its task is to create the GEC that is encapsulated inside the (d -> AGEC d) function, in such a way that it can be addressed with the usual GEC methods. Its implementation is beyond the scope of this paper. Instead, we focus on the consequences for the programming method. The new programming method is as follows:

1. Develop the pure data domain model D without any abstraction.
2. Develop f_i :: D_i -> (AGEC D_i) for those (sub)types of D that need to be specialized.
3. Specialize each D_i as described above with the function specialize.

This improves the old method in the following ways: (i) It is modular: instead of one (Bimap D V), the programmer writes several conversions D_i -> (AGEC D_i). These functions are easier to understand and can be reused in arbitrarily many data domain models D. (ii) The view data model V has been eliminated. This implies that the programmer does not have to change her code when deciding whether (sub)types of the pure domain data model become abstract or not. (iii) The new way of handling abstraction merges the abstraction mechanism with the generic programming scheme. Because the generic programming scheme is inherently recursive, this eliminates the issue of programming recursive conversion functions. (iv) An early experiment with a large application suggests that the new method reduces the number of lines of code by 30%.

Before we move on to the case study, we illustrate the new programming method with the list example from the end of Section 2. The essential code fragment is:

```
:: ListV :== Maybe (Int <~*> [Int])

gGEC{|[Int]|} t pSt = specialize horlistAGEC t pSt
where
    horlistAGEC = mkAGEC { map_to = toView, map_from = toDomain }

    toView :: [Int] -> ListV
    toView []       = Nothing
    toView [x : xs] = Just (x <~*> xs)

    toDomain :: ListV -> [Int]
    toDomain Nothing            = []
    toDomain (Just (x <~*> xs)) = [x : xs]
```

The important differences to observe are: (i) ListV is not a new type anymore, but a type synonym; we have eliminated the need for a new type. (ii) The conversion functions toView and toDomain are not recursive. (iii) Already this very small example shows that the specification becomes shorter and clearer.

4 Case Study: a Family Tree Editor

In this section we demonstrate how to program a GUI using the GEC Toolkit. The case study that we consider is that of a family tree editor. This case study is interesting for the following reasons:

- It has dynamic behaviour: when edited, (sub) family trees may expand or shrink. This causes recalculation of the layout of the remaining (sub) family trees.
- Because of this dynamic behaviour, the program cannot be created with a visual editor; instead, it must be programmed.
- It has logical behaviour: in this case study we want to impose the restrictions that marriage occurs only between two persons of opposite gender, and that only married couples have children.
- Family trees are usually rendered from top to bottom, which contrasts with the default layout strategy of the GEC Toolkit. This is a good test case for how well customization of layout works.

We follow the steps of the programming method of Section 3.

Step 1. Develop the Pure Data Domain Model. In the first step we develop the pure data domain model D of the family tree editor. In this case, D is the recursive tree-like data type Family. Its nodes contain information about a person (gender and name) and his or her civil status (married or single).
Its subtrees are the person's offspring. Because a person might not be married, the spouse and children are encoded with a Maybe type. The corresponding data types should not be surprising for people familiar with functional programming:

```haskell
:: Family      = Family Person CivilStatus (Maybe (Person, Kids))
:: Person      = Person Gender String
:: Gender      = Male | Female
:: CivilStatus = Married | Single
:: Kids        = Kids [Family]
```

Although this type definition is rather compact, its automatically derived GEC is not. The background window in Fig. 3 gives the screenshot of a small family constructed with the editor, which consists of parents Peter and Mirjam and their boys Tijnem and Arjen. It should be clear that this editor is uninformative even to an informed programmer. It also does not implement the logical behaviour requirements. In contrast, the editor in the foreground window is much more compact, uses a more appealing layout scheme, displays redundant information such as the number of children, and implements the behaviour requirements.

Step 2. Design the Abstract Types. The next step is to decide what (sub)types to specialize. If we compare the two GUIs in Fig. 3 we conclude that Person, Kids, and Family have to be specialized. A Person has to be displayed with the gender (Male) placed directly below the name (Peter). Expressed as a function: toView (Person gender name) = name <|*> gender. This puts the gender information below the name, right-aligned. The inverse function is trivial: toDomain (name <|*> gender) = Person gender name. The full specialization is defined by:

```haskell
personAGEC :: Person -> AGEC Person
personAGEC = mkAGEC { map_to = toView, map_from = toDomain }
```

The next type to specialize is Kids. Because Kids are defined with a list, the default rendering uses the default list rendering (see also Fig. 1), which is inadequate for our purposes. Instead, we want to display the children next to each other. We use the library function hor2listAGEC :: a [a] -> AGEC [a]; (hor2listAGEC a [a1..an]) creates an interactive horizontal list with initial elements [a1..an] (n >= 0). New list elements get the default value a. Above this list, we want to display the number of children.
This is expressed as:

```haskell
:: KidsView :== Display String <|*|> AGEC [Family]

toView :: Kids -> KidsView
toView (Kids ks) = nrOfKids (length ks) <|*|> hor2listAGEC default ks
where
    nrOfKids n = Display (toString n +++ " Child" +++ if (n == 1) "" "ren")
    default    = Family (Person Male "") Single Nothing
```

Converting edited values back to the domain model type is straightforward:

```haskell
toDomain :: KidsView -> Kids
toDomain (_ <|*|> list) = Kids (^^ list)
```

Putting everything together proceeds as for Person:

```haskell
kidsAGEC :: Kids -> AGEC Kids
kidsAGEC = mkAGEC { map_to = toView, map_from = toDomain }
```

The Family specialization requires more attention because it needs to implement both a pleasant visualization and the logical behaviour requirements. The visualization is as follows: the partners in a couple are placed next to each other (<**>); below them and to the left (<*|>) the civil status is shown; and below that and centered (<|*|>) the children are shown. We use the Maybe type in the view model to display nothing at all in case of Nothing values, and (gGEC x) in case of (Just x) values. Therefore, the view data domain has type:

```haskell
:: FamilyView :== Person <**> Maybe Person <*|> CivilStatus <|*|> Maybe Kids
```

Mapping data domain model values to view domain model values, and vice versa, is done with toView and toDomain. These functions implement the visualization and the logical behaviour. Their definitions are:

```haskell
toView :: Family -> FamilyView
toView (Family p1 Single _)             = p1 <**> Nothing         <*|> Single <|*|> Nothing
toView (Family p1 cs (Just (p2, kids))) = p1 <**> Just p2         <*|> cs     <|*|> Just kids
toView (Family p1 cs Nothing)           = p1 <**> Just (other p1) <*|> cs     <|*|> Just (Kids [])
where
    other :: Person -> Person
    other (Person Female _) = Person Male   ""
    other (Person Male   _) = Person Female ""

toDomain :: FamilyView -> Family
toDomain (p1 <**> Nothing <*|> cs <|*|> _)         = Family p1 cs Nothing
toDomain (p1 <**> Just p2 <*|> cs <|*|> Just kids) = Family p1 cs (Just (p2, kids))
```

The logical behaviour requirement that singles have no children is imposed by the first alternative of toView and toDomain. The requirement that marriage is between persons of opposite gender is imposed by the last alternative of toView, using the local function other :: Person -> Person. Again, the specialization is assembled analogously to Person and Kids:

```haskell
familyAGEC :: Family -> AGEC Family
familyAGEC = mkAGEC { map_to = toView, map_from = toDomain }
```

Step 3. Specialize Abstract Types. As said earlier in Section 3, this is a trivial step: following the scheme gGEC{|D_i|} ... dv env = specialize f_i dv env, the three specializations simply install personAGEC, kidsAGEC, and familyAGEC for Person, Kids, and Family, respectively.

This concludes the case study. It demonstrates the following points. (i) It shows that the types of the data model are not complex.
They belong to any introductory course in functional programming. (ii) A default visualization is always present, but it might not be adequate. However, it can be used for initial testing and verification of the data model. (iii) Improving the visualization of the data model amounts to identifying the (sub)types D_i for which specialization functions (D_i -> AGEC D_i) need to be developed. These are bijections between D_i and V_i, and they can be defined with pure functions on pure data domains.

5 Related Work

The GEC Toolkit is a refined version of the well-known model-view paradigm [18], introduced by Trygve Reenskaug (then named the model-view-controller paradigm) in the language Smalltalk. In the GEC Toolkit both models and views are defined by means of data models. The generic programming technology provides automatic and specialized visualization of all data models. Other model-view approaches based on functional programming use a similar value-based approach [10], or an event-based version [17]. In both cases, the programmer needs to explicitly handle view registration and manipulation.

The Vital system [14] is an interactive graphical environment for direct manipulation of Haskell-like scripts. Shared goals are: direct manipulation of functional expressions, manipulation of custom types, views that depend on the data type (data type styles), guarded data types, and the ability to work with infinite data structures. Differences are that our system is completely implemented in Clean, while the Vital system has been implemented in Java. This implies that our system can handle, by construction, all Clean values. Obviously, they are well-typed. In addition, the purpose of a GEC is to edit values of type t, while the purpose of a Vital session is to edit Haskell scripts.

Taking a different perspective on the type-directed nature of our approach, one can argue that it is also possible to obtain editors by starting from a grammar specification. Projects in this flavour are for instance Proxima [23], which relies on XML and its DTD (Document Type Definition) language, and the Asf+Sdf Meta-Environment [9], which uses an Sdf syntax specification and an Asf semantics specification. The major difference with such an approach is that these systems need both a grammar and some kind of interpreter. In our system higher-order elements are immediately available as functional values that can be applied and passed to other components.

Because a GEC is a t-stateful object, it makes sense to have a look at object-oriented approaches. The power of abstraction and composition in our functional framework is similar to mixins [13] in object-oriented languages. One can imagine an OO GUI library based on compositional and abstract mixins in order to obtain a similar toolkit. Still, such a system lacks higher-order data structures.

6 Conclusions and Future Work

We have presented a programming method for the GEC Toolkit, and illustrated it by means of the family tree editor case study. Programming GUIs with the GEC Toolkit requires knowledge of functional data structures, such as algebraic data types, and of the functions that manipulate them. This is material that is covered in any introductory course in functional programming. This enables novice programmers to program highly dynamic GUI applications.

We are currently working on a Web-enabled back-end for the GEC Toolkit. This expands the application domain of GEC programming from the desktop to the world wide web.
We are investigating whether the high level of abstraction facilitates reasoning about interactive applications, perhaps using proof tools such as Sparkle.

Acknowledgements

The interactive family tree case study was suggested by Marie-José van Diem.

References
{"Source-Url": "http://repository.ubn.ru.nl/bitstream/handle/2066/32880/32880.pdf?sequence=1", "len_cl100k_base": 6909, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 29680, "total-output-tokens": 9529, "length": "2e12", "weborganizer": {"__label__adult": 0.0003578662872314453, "__label__art_design": 0.0003285408020019531, "__label__crime_law": 0.0002428293228149414, "__label__education_jobs": 0.000652313232421875, "__label__entertainment": 5.346536636352539e-05, "__label__fashion_beauty": 0.00013506412506103516, "__label__finance_business": 0.0001308917999267578, "__label__food_dining": 0.0003345012664794922, "__label__games": 0.0003712177276611328, "__label__hardware": 0.0005383491516113281, "__label__health": 0.0003571510314941406, "__label__history": 0.00017535686492919922, "__label__home_hobbies": 7.778406143188477e-05, "__label__industrial": 0.0002655982971191406, "__label__literature": 0.00020182132720947263, "__label__politics": 0.00019490718841552737, "__label__religion": 0.0004096031188964844, "__label__science_tech": 0.0037994384765625, "__label__social_life": 8.350610733032227e-05, "__label__software": 0.0027866363525390625, "__label__software_dev": 0.98779296875, "__label__sports_fitness": 0.0002675056457519531, "__label__transportation": 0.0004410743713378906, "__label__travel": 0.0001842975616455078}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32906, 0.01217]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32906, 0.50059]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32906, 0.83347]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 2352, false], [2352, 5410, null], [5410, 7931, null], [7931, 10898, null], [10898, 13527, null], [13527, 16442, null], [16442, 18937, null], [18937, 20698, null], [20698, 23568, null], [23568, 26600, null], [26600, 29422, null], [29422, 32906, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 2352, true], [2352, 5410, null], [5410, 7931, null], [7931, 10898, null], [10898, 13527, null], [13527, 16442, null], [16442, 18937, null], [18937, 20698, null], [20698, 23568, null], [23568, 26600, null], [26600, 29422, null], [29422, 32906, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32906, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32906, null]], "pdf_page_numbers": [[0, 0, 1], [0, 2352, 2], [2352, 5410, 3], [5410, 7931, 4], [7931, 10898, 5], [10898, 13527, 6], [13527, 16442, 7], [16442, 18937, 8], [18937, 20698, 9], [20698, 23568, 10], [23568, 26600, 11], [26600, 29422, 12], [29422, 32906, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32906, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
113b667cfce57a876833db80df9f0bd6a6acd17e
Easy-to-Use, Web-Based Graphical User Interface for Controlling Entities in Constructive Simulations

Per-Idar Evensen, Kristian Selvaag, Dan Helge Bentsen, Helene Rodal Holhjem, Håvard Stien
Norwegian Defence Research Establishment (FFI)
Instituttveien 20
NO-2027 Kjeller
NORWAY
per-idar.evensen@ffi.no, kristian.selvaag@ffi.no, dan-helge.bentsen@ffi.no, helene-rodal.holhjem@ffi.no, havard.stien@ffi.no

ABSTRACT

At the Norwegian Defence Research Establishment (FFI) we investigate how to increase combat effectiveness in land force operations. As part of this work we need to conduct detailed, entity-level simulations of battalion to brigade level operations, to assess and compare the performance of different land force structures and operational concepts. For our use, traditional constructive simulation systems often do not have the required level of resolution, are too complex and cumbersome to use, or are not flexible enough with respect to representation of new technologies. We are therefore developing webSAF, an easy-to-use, web-based graphical user interface (GUI) for controlling semi-automated entities in constructive simulations. So far we have developed functionality for controlling indirect fire and manoeuvre entities simulated in Virtual Battlespace (VBS) and air defence entities simulated in VR-Forces. Since we conduct simulations for experimentation and analysis purposes (and not procedure training), the system has been designed to only require a minimum amount of input from the operators. Furthermore, our simulations need to be conducted with a minimum number of operators on each side. In this paper we describe the overall design and implementation of the GUI system, as well as the experiences from the initial experiments with the system.

1.0 INTRODUCTION

At the Norwegian Defence Research Establishment (FFI) we investigate how to increase combat effectiveness in land force operations. As part of this work we need to conduct detailed, entity-level simulations of battalion to brigade level operations, to assess and compare the performance of different land force structures and operational concepts. For our use, traditional constructive simulation systems often do not have the required level of resolution, are too complex and cumbersome to use, or are not flexible enough with respect to representation of new technologies (e.g. new sensor systems, weapon systems, and protection systems). We are therefore developing webSAF, an easy-to-use, web-based graphical user interface (GUI) system for controlling semi-automated entities in constructive simulations [1]. So far we have developed functionality for controlling indirect fire and manoeuvre entities simulated in Virtual Battlespace (VBS) and air defence entities simulated in VR-Forces. In the future we plan to extend the system with functionality for controlling combat service support entities simulated in VBS, and air entities simulated in VR-Forces.

Since we conduct simulations for experimentation and analysis purposes, and not command and staff training, the system has been designed to only require a minimum amount of input from the operators. It is a goal that military officers should be able to control the entities with minimal instruction. In addition, the simulations should be conducted with a minimum number of operators on each side.

First, in this paper, we give a brief description of the background for this work. Secondly, we describe the intended use of the simulation system, including a list of requirements.
Thirdly, we give an overall description of the simulation system we are composing and its components. After this, we describe the design and implementation of the web-based GUI system in more detail. Finally, we summarize the initial experiences with the system, and outline our plans for further work.

2.0 BACKGROUND

The first time constructive simulation with semi-automated forces (SAF) was used to support analysis of land force structures at FFI was in 2010. In the project "Future Land Forces" the performance of five fundamentally different land force structures was evaluated through a series of simulation experiments [2][3]. The goal was to rank these structures based on their relative performance. In addition, the experiments revealed several strengths and weaknesses of the evaluated structures. The lightweight simulation tool mōsbē from BreakAway was used in the experiments. The main reasons for this choice were that mōsbē supports simulation of brigade level operations and has a user interface that makes it easy to control large groups of entities. The experiments were conducted as simulation-supported wargames, where military officers planned and controlled the operations, and the simulation tool kept track of the movement of units and calculated the results of duels and indirect fire attacks.

The development of mōsbē was discontinued in 2008, but the tool was in use at FFI until 2014. After this, GESI (GEfechtsSImulation) from CAE (which is the command and staff training system at the Norwegian Army Land Warfare Centre) has been used a few times in a similar manner. Most recently, simulations in GESI have been used to support the special review of the Norwegian land forces, which has been taking place in 2017.

Both mōsbē and GESI have several significant weaknesses which can produce questionable simulation results. For example, they do not support representation of micro-terrain features, and this systematically favours long-range, direct fire weapon systems. In addition, the human behaviour models are very simple, and this entails that the operators have to spend a lot of time micromanaging the units. Furthermore, they do not have an application programming interface (API) for developing additional functionality or plug-ins. When used for experimentation and analysis purposes, it is also a disadvantage that GESI requires a large number of operators (since it is a system for training command and staff procedures). This, of course, limits the convenience and accessibility of GESI simulations for experimentation purposes. Consequently, there is a need to establish a new capability for conducting more detailed constructive simulations of battalion level operations at FFI.

Based on our experience with simulations in mōsbē and GESI, we have identified two main factors that have the potential to improve the fidelity of our constructive simulations: (1) increased terrain resolution, and (2) better tactical artificial intelligence (AI) that can exploit this terrain [4]. We expect that these two factors will result in more realistic detection and engagement distances.

In recent years we have seen an increasing use of web technologies for modelling and simulation (M&S). The new version of the HyperText Markup Language (HTML), HTML5, which was finalized in October 2014, provides new abilities for creating interactive web interfaces for simulations and games [4]. The biggest advantage of using web technology for M&S is accessibility.
Following this trend, we decided to develop our own easy-to-use, web-based GUI system for controlling constructive entities simulated in VBS and VR-Forces (or any other simulation tool with a suitable API). This way we are able to tailor the GUI system to our specific needs. The system has been named webSAF.

The next generation of distributed simulation environments is envisaged to rely heavily on open standards and service-based architectures [5]. M&S as a service (MSaaS) is an architectural and organizational approach that promotes abstraction, loose coupling, reusability, composability and discovery of M&S services [6][7]. In this paradigm, the vision is that instead of composing a simulation environment as a federation of different individual simulation systems, the users will be able to compose a simulation environment based on services. Web-based GUI systems which are independent of the simulation systems will of course fit perfectly into the MSaaS paradigm.

3.0 SIMULATION OF LAND FORCE OPERATIONS FOR EXPERIMENTATION AND ANALYSIS

We conduct simulations of land force operations for experimentation and analysis purposes, and one of the main research questions we are investigating is how to increase combat effectiveness. As part of this work we need to assess and compare the performance of different land force structures, which will vary with regard to: composition of material and equipment, tactical organization, and operational concept. Our focus is on simulation of the actual combat phases with engagements and skirmishes. However, these phases of combat are also the most complex and therefore the most challenging phases to simulate. We have not found a single simulation tool that is satisfactory for our use, and it is not possible for us to develop our own simulation system from scratch. Our overall solution is therefore to tailor available commercial off-the-shelf (COTS) simulation tools to suit our needs.

3.1 Simulation-Supported Wargame

Our simulation experiments are conducted as what can be described as simulation-supported, two-sided (blue and red) wargames. Military officers participate as players/operators on both sides. A typical experiment consists of a planning phase, a wargaming session, and an after action review (AAR) session. We normally use tactical vignettes derived from national defence scenarios. After the experiments there is an analysis phase.

3.2 Entity-Level versus Aggregate-Level Models

With today's computing capabilities it should be feasible to simulate operations up to brigade level size using entity-level models. Entity-level models have higher resolution and thus the potential to achieve higher fidelity than aggregate-level models. It is also easier to see what is going on in an entity-level simulation, and this makes them more accessible for face validation [4]. Nevertheless, it is also a known issue that current entity-level models tend to produce attrition levels that are higher than those observed historically [8][9]. "Possible phenomena present in actual combat and accounted for in [the parameters of aggregate-level attrition models (such as the Lanchester models)] but not [in the] entity-level combat models that could explain this include target duplication, shooter non-participation, suppression effects, self-preservation, and suboptimal use of weapons and targeting systems" [8]. In other words, current constructive entity-level combat models lack good representations of the human aspects of combat and of combat friction, with the result that simulated operations tend to run more smoothly than they would in the real world.
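For reference, the aggregate-level Lanchester models named in the quotation describe attrition with coupled differential equations. The classical square law for aimed fire is shown below; this is a standard formulation supplied for illustration, not an equation taken from the paper:

$$\frac{dB}{dt} = -\rho R, \qquad \frac{dR}{dt} = -\beta B,$$

where $B$ and $R$ are the blue and red force strengths, and $\beta$ and $\rho$ are attrition-rate coefficients. Such coefficients summarize exactly the combat phenomena listed in the quotation, which is one reason they are hard to obtain for new systems and concepts.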
For our use, however, it would have been difficult to calibrate aggregate-level models to represent new combat systems and new concepts due to the lack of data from real operations. Furthermore, our need for higher resolution, and also the possibility of combining virtual and constructive simulations (e.g. having virtual air entities operating together with constructive ground entities), supports that we need a simulation system based on entity-level models.

3.3 Modelling Human Behaviour

Modelling realistic human behaviour and cognition, including decision-making and creativity, is the hardest and most complex challenge in combat simulation [10]. Human behaviour modelling is challenging because "[h]uman behaviour is not generally yet thought to obey observable laws" [11]. Consequently, the current status for human behaviour simulation is that it can be used "to understand, [but] not necessarily predict, the aggregate behaviour of an inherently complex system for which we have no better model" [12]. When using human behaviour models "it is often possible to perform sensitivity analysis and identify broad trends as opposed to exact predictions" [12]. For example, a constructive simulation may show that increasing the number of main battle tanks (MBTs) has a positive effect on the outcome of a scenario, but it cannot be used to pinpoint the exact number of MBTs required to win a battle with a certain probability [12].

3.4 Summary of Simulation Capability Requirements

The most important requirements for our new simulation capability can be summarized as follows:

- It must support entity-level simulations of battalion level land force operations, and use simulation models with high resolution. Typically, we need to simulate operations that include between one and four manoeuvre battalions (each with 50–60 combat vehicles and about 200 soldiers), one or two artillery battalions, and one air defence battalion on each side.
- It must represent entities from the following capabilities: manoeuvre (MBTs, infantry fighting vehicles (IFVs), unmanned ground vehicles (UGVs), infantry, etc.), indirect fire (artillery, missile launchers, close air support, etc.), air defence (missile launchers, radars, etc.), aviation (fixed-wing aircraft, helicopters, unmanned aerial vehicles (UAVs), etc.), combat engineering, and intelligence, surveillance, target acquisition, and reconnaissance (ISTAR) (sensors and facilities).
- It must support high terrain resolution (10 meters between the elevation points, or better) and representation of micro-terrain features.
- It must have a GUI that is easy to use (military officers should be able to control the entities with minimal instruction).
- It must have an API for developing additional functionality (e.g. customized simulation models).
- It should have a tactical AI where the operators are able to issue high-level orders at the company level and more detailed orders at the platoon level for vehicles and the squad level for infantry.
- It should have a tactical AI where the entities are able to intelligently take advantage of the terrain.
4.0 SIMULATION SYSTEM

Based on the requirements above, we are composing a simulation system where the ground-to-ground combat entities are simulated in VBS and the air and air defence entities are simulated in VR-Forces. The VBS entities will use behaviour models developed in VBS Control. All the constructive entities are controlled from the web-based GUI system. Figure 1 shows an overview of the components in the simulation system. The simulation tools are briefly described below, and webSAF is described in more detail in the next section.

4.1 Simulation Tools

4.1.1 Virtual Battlespace (VBS)

VBS is an interactive, three-dimensional synthetic environment for use in military training and experimentation. It is developed by BISim and is based on game technology from the Armed Assault (ARMA) series. VBS is delivered with a comprehensive content library funded by different nations over the years, and in addition it has its own scripting language for creating new functionality. VBS is used by many military organizations worldwide (including the Norwegian Armed Forces), and has become an industry standard in game-based military simulation.

VBS Fusion is a C++-based API for VBS, developed by SimCentric Technologies. It provides a comprehensive object-oriented C++ library for developing plug-ins for VBS. The plug-ins are compiled as dynamic link libraries (DLLs), which can be loaded by the VBS engine.

4.1.2 VBS Control

VBS Control is a new framework for AI in VBS. The behaviour models used by VBS Control are based on behaviour trees (BTs), and users can create customized behaviour models through the VBS Control Editor. The BTs can be visually debugged in real time.

BTs are a relatively new and increasingly popular approach for developing behaviour models [13]. The approach has become especially popular for creating behaviours for non-player characters (NPCs) in computer games, robots, and autonomous vehicles. The first high-profile computer game which used BTs was Halo 2 from Bungie Software [14], which was released in 2004.

BTs are represented as directed trees with a hierarchy of control flow nodes and task nodes that control the behaviour of an agent. The control flow nodes contain some decision logic and have at least one child node. The task nodes are leaf nodes (nodes without children) and contain either conditional tasks, which test some property in the simulated environment (or the real world in the case of robots and autonomous vehicles), or action tasks, which alter the state of the simulation (or the real world) in some way [4]. What makes BTs so powerful is their composability and modularity. Task nodes and control flow nodes are composed into sub-trees which represent more complex actions, and these actions can be composed into higher-level behaviours [15]. Task nodes and action sub-trees can be reused, and different sub-trees can be developed independently of each other. BT editors enable users (e.g. military officers) to create modular behaviour models without needing programming skills. For our system we will typically need to develop BTs for combat drills corresponding to each manoeuvre order that can be issued from webSAF; a small sketch of the underlying node structure is given at the end of Section 4.

4.1.3 VR-Forces

VR-Forces is a framework for computer-generated forces (CGF) developed by VT MAK. It includes simulation models for hundreds of battlefield units and systems in all domains (land, naval, air, and space), and can be used in both aggregate- and entity-level mode. The VR-Forces framework is customizable, and provides several C++-based APIs for different development tasks.
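Returning to the behaviour trees of Section 4.1.2: the control-flow/task-node split can be made concrete with a minimal skeleton. The sketch below, in TypeScript, is an illustration only; VBS Control's actual node types and APIs are not shown in this paper, and every name here is hypothetical.

```typescript
// Minimal behaviour-tree skeleton: control flow nodes (sequence, selector)
// compose task nodes (conditions and actions) into reusable sub-trees.
type Status = "success" | "failure" | "running";

interface BTNode { tick(): Status; }

// Conditional task: tests some property of the simulated environment.
const condition = (test: () => boolean): BTNode => ({
  tick: () => (test() ? "success" : "failure"),
});

// Action task: alters the state of the simulation in some way.
const action = (act: () => Status): BTNode => ({ tick: act });

// Control flow: run children in order until one does not succeed.
const sequence = (...children: BTNode[]): BTNode => ({
  tick: () => {
    for (const child of children) {
      const s = child.tick();
      if (s !== "success") return s;
    }
    return "success";
  },
});

// Control flow: try children in order until one does not fail.
const selector = (...children: BTNode[]): BTNode => ({
  tick: () => {
    for (const child of children) {
      const s = child.tick();
      if (s !== "failure") return s;
    }
    return "failure";
  },
});

// Hypothetical combat-drill sub-tree: take cover on contact, else advance.
const enemyInSight = (): boolean => false; // stub sensor query
const takeCover = (): Status => "success"; // stub action
const advance = (): Status => "running";   // stub action

const reactToContact = selector(
  sequence(condition(enemyInSight), action(takeCover)),
  action(advance),
);

console.log(reactToContact.tick()); // "running": no contact, keep advancing
```

The point of the sketch is the composability noted above: reactToContact is itself a node, so it can be reused as a child inside a larger drill.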
5.0 WEB-BASED GUI SYSTEM (webSAF)

webSAF consists of a web server, a number of web clients, and a map server. The web clients connect to the server through WebSockets, and send and receive data in the form of JavaScript Object Notation (JSON) packets; an illustrative order packet is sketched at the end of Section 5.2. Map tiles used by the GUI system are streamed from the map server. The ground-to-ground combat entities are simulated in VBS, and the web server communicates with VBS through remote procedure calls (RPCs) using the Apache Thrift framework [16]. The air defence sensors and weapons are simulated in VR-Forces, and the web server communicates with VR-Forces using JSON packets. The communication with VR-Forces goes through a WebLVC Server [17], which wraps the JSON packets inside High Level Architecture (HLA) interactions. The HLA interactions are unwrapped by plug-ins in VR-Forces. The typical data transferred between the web server and the simulation systems are entity status data and orders.

5.1 Components

5.1.1 Web Server

The web server is the management hub of the system. It is the connection link between the web clients and the simulation, and validates all user interactions.

5.1.2 Web Clients

Each operator connects to the web server from a web browser and selects a side (blue or red) and a role (currently joint fires, manoeuvre, and air defence).

5.1.3 Map Server

The map server is a simple HyperText Transfer Protocol (HTTP) file server, serving map tile images in Portable Network Graphics (PNG) format. Tiles are organized in a file structure according to z-, x-, and y-indexes, where the z-index indicates zoom level, and the x- and y-indexes indicate longitude and latitude, respectively.

5.2 Operator Roles

So far the GUI system supports three operator roles: manoeuvre, joint fires, and air defence. Functionality for more roles (e.g. aviation, naval, and ISTAR) will be implemented in the future.

5.2.1 Manoeuvre

The manoeuvre operator controls combat vehicles and infantry. Orders are currently issued at the platoon level for vehicles and at the squad level for infantry. (In the future we plan to implement functionality for issuing higher-level orders at the company level.) In addition, the manoeuvre operator can request indirect fire support from the joint fires operator. The goal is that each manoeuvre operator should be able to control an entire manoeuvre battalion, including organic indirect fire support entities like mortars.

5.2.2 Joint Fires

The joint fires operator receives fire requests from the manoeuvre operators, and prioritizes and forwards fire missions (possibly with adjustments) to the indirect fire entities. Currently, we only have support for land-based indirect fire entities like tube artillery, rocket artillery, and missile launchers, but in the future we plan to also implement functionality for fire support from air and sea entities.

5.2.3 Air Defence

The air defence operator receives detections from the air defence sensors in VR-Forces. VR-Forces also communicates the tracks in a prioritized list, which is presented to the air defence operator. For each track, VR-Forces also sends a list of prioritized launchers which can engage that track. The air defence operator selects which track to engage, which launcher to use, and when to fire.
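The order packet promised in Section 5.0 might look roughly like the sketch below. The paper does not specify webSAF's wire format, so the packet layout and all field names are assumptions made for illustration:

```typescript
// Hypothetical JSON order packet sent from a web client over a WebSocket.
interface MoveOrder {
  kind: "order";
  role: "manoeuvre" | "jointFires" | "airDefence";
  side: "blue" | "red";
  unitId: string;                            // platoon (vehicles) or squad (infantry)
  order: "move" | "attack" | "holdPosition";
  destination: { lat: number; lon: number };
}

const order: MoveOrder = {
  kind: "order",
  role: "manoeuvre",
  side: "blue",
  unitId: "1st-platoon-A-coy",
  order: "move",
  destination: { lat: 60.0, lon: 11.0 },
};

// The web server validates every interaction before forwarding it to the
// owning simulation: via Thrift RPC to VBS, or via WebLVC/HLA to VR-Forces.
const socket = new WebSocket("ws://websaf-server:8080");
socket.addEventListener("open", () => socket.send(JSON.stringify(order)));
```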
5.3 GUI Functionality

The main component of the GUI is the map area. In addition, the GUI shows the order of battle (OOB) (to the left) and notifications (to the right). The OOB and notifications can be easily hidden or shown using the buttons in the upper left and right corners respectively (or using the underlined hotkeys). The top bar shows the operator's role and the current simulation time. Figure 2 shows an example of the web-based GUI for a blue joint fires operator. The GUI functionality has been developed in close collaboration with military subject matter experts (SMEs).

5.3.1 Map Area

Map navigation is similar to most so-called "sliding maps" often found online, like OpenStreetMap [18] and Google Maps [19]. The map can be panned by dragging the map with the mouse. When the mouse button is released, the map continues to slide before slowing to a halt. The sliding action enables the user to pan great distances with minimal hand movement. The map view can be zoomed in or out by scrolling the mouse wheel. The bottom bar shows a map scale bar for the current zoom level and the mouse cursor coordinates in the Military Grid Reference System (MGRS).

Overlaid on the map are units, spot reports, tracks, and other visual information. Map icons for friendly units are aggregated based on the OOB and zoom level. At the most zoomed-in level, each icon represents a platoon for vehicles and a squad for infantry. At the most zoomed-out level, each icon represents a company (or equivalent).

5.3.2 Order of Battle (OOB)

The OOB has a collapsible tree view, similar to the file explorer in many operating systems. The tree represents the command hierarchy of the operator's side (blue or red). To the right of the unit name is a MIL-STD-2525C symbol, indicating the type and size of the unit. Double-clicking the unit name zooms and pans the map to bring that unit to the centre. Hovering over the unit name with the cursor highlights that unit on the map. Conversely, hovering over a unit icon in the map highlights that unit in the OOB.

5.3.3 Notifications

Notifications are received when an event of interest occurs, for example, when enemy entities are spotted, or friendly entities are lost. When a notification is received, it appears at the top of the list. The list is kept chronologically sorted, with the newest notification on top. The exclamation mark in the notifications button has a badge that indicates the total number of active notifications. Each notification has a label, for example "Enemy spotted" or "Fire order underway". Clicking the notification label does different things depending on the type of notification. For example, clicking an "Enemy spotted" notification centres the map on where the enemy was spotted. The time since the notification was received is indicated below the label.

5.3.4 Fog of War

An operator cannot see entities on the opposing side by default. Rather, when a unit under the control of the current operator spots an enemy entity, it is reported in the GUI with MIL-STD-2525C symbols to indicate type. In addition, the entity's approximate speed and heading is shown as a labelled arrow protruding from the map icon's base. If multiple enemies are spotted close to each other, the map icons will stack, as illustrated in Figure 3. The number beside each stacked icon indicates the count of that type. The first time that an enemy entity is spotted in an area, a notification is given. Clicking this notification automatically pans the map to the location where the enemy entity was spotted. As long as an enemy entity can be seen by friendly troops, the spot information (type, position, speed, and heading) is continuously updated in the map view. Receiving updates with too high a frequency, though, might give the operator a situational awareness well above what is realistic. In addition, if a lot of enemy entities can be seen, too frequent updates might cause network congestion. Therefore, the value of the spot update frequency cannot be set until after running through several test scenarios.

Figure 3: Stacked spot report icons.
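One plausible way to make the spot update frequency a tunable parameter is to throttle outgoing updates per entity on the server side. The following sketch is an illustration under that assumption, not webSAF's actual implementation:

```typescript
// Forward at most one spot update per entity every `intervalMs` milliseconds;
// updates arriving inside the interval are simply dropped.
function makeSpotThrottle(
  send: (entityId: string, spot: object) => void,
  intervalMs: number,
) {
  const lastSent = new Map<string, number>();
  return (entityId: string, spot: object): void => {
    const now = Date.now();
    if ((lastSent.get(entityId) ?? 0) + intervalMs <= now) {
      lastSent.set(entityId, now);
      send(entityId, spot);
    }
  };
}

// E.g. cap spot reports at one per entity per second; as noted above, the
// right interval has to be found empirically through test scenarios.
const pushSpot = makeSpotThrottle(
  (id, spot) => console.log(`update for ${id}`, spot),
  1000,
);
pushSpot("enemy-MBT-12", { lat: 60.0, lon: 11.0, speedKmh: 35 });
```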
5.3.5 Issuing Orders

Units can be selected for the purpose of issuing orders. Once they are selected, their icons become highlighted in the map and orders can be issued by bringing up the context menu (right-clicking the map). Different options appear on the context menu depending on the role of the operator and whether any units have been selected. A joint fires operator will, for example, have the option of moving the selected artillery unit to the clicked location, or designating a new remotely delivered mine field.

5.4 GUI Design Philosophy

The GUI design philosophy is based on balancing the following aspects, roughly prioritized in descending order:

1. Time-efficient user interaction. The user should be able to do a lot, fast.
2. Familiar user interaction. The system should be easy to learn.
3. Ease of implementation. The project needs to be completed within time and budget constraints.
4. Ease of maintenance and extension. The project is anticipated to expand in the future. The software architecture therefore needs to be amenable to changes and extensions.
5. Aesthetics. Visual style should not hinder adoption by users.

One of the main drivers behind the project was to save operator time during constructive simulation, in order to reduce the number of operators required. The GUI was therefore designed to help the operator reach his or her goals with the least amount of manual effort. Operator efficiency takes precedence over other design aspects.

Familiarity was the driving aspect for other design decisions. The "sliding maps" way of panning and zooming in and out in the map view, for example, is largely unchanged compared with widely used online map services. The right-click context menu is also a tried and true way of minimizing UI clutter, while still keeping useful interactions just a click away. The context menu is complemented by a few static menus, mainly the OOB and the notification menus. The user has the option of hiding them, and when not hidden, they are kept to the sides so as not to clutter the user's main focus: the map.

Apart from the thin bars at the top and bottom of the screen, the map fills the entire GUI area. Menus and available interactions are hidden in optional overlays instead of being ever present surrounding the map. The disadvantage is that, for a new user, it is not immediately obvious what interactions are available. On the other hand, it can be overwhelming for a new user to see all the available options simultaneously. A user who has become familiar with the GUI, however, will prefer the situational awareness given by a larger map.

Another aspect that might discourage new users is poor aesthetics. Though aesthetics is rarely prioritized in internal software projects, slow adoption of new software and routines is a persistent problem across organizations [20]. Thus some consideration was given to the visual design of the GUI, like the medium-dark colour scheme, button highlighting, and flat menus with sharp corners and faint shadows to indicate depth ordering. Perceived performance is closely linked to aesthetics; users get fatigued by slow user interfaces. Performance tuning has not taken significant development time so far, but it was considered during software architecture design and in selecting external software libraries.
5.5 GUI Implementation

The GUI was implemented in the TypeScript programming language [21]. TypeScript provides code type safety, which helps catch errors early in development. It compiles to JavaScript that runs in modern web browsers. To ease the development of GUI components, the web framework React was employed. React is maintained by Facebook, and provides an easy way to create scalable and responsive single-page applications [22]. React also has an open source ecosystem of ready-made GUI components with permissive licenses. Some of these components were used, and others were developed in-house. The component that contains the map is rendered using OpenLayers [23], an open source web mapping library that provides similar functionality to other online maps. To manage communication, a simple event system sends updates (in JSON format) between GUI components and the web server via a WebSockets client. Figure 4 shows the architecture of the web client.

Figure 4: Software architecture of the web client.

6.0 INITIAL EXPERIENCES WITH THE SYSTEM

We have so far tested webSAF in small experiments with up to three operators on each side, and are able to simulate operations with a battalion of manoeuvre entities, a battalion of indirect fire entities, and a battalion of air defence entities on each side. However, we have not yet developed behaviour models (for VBS Control) for the majority of the combat drills for manoeuvre platoons. The initial tests with the web-based GUI system have been successful, and we have so far not observed any performance issues with the web server or GUI. The system has not yet been stress tested, but for simulations with multiple manoeuvre battalions we envisage that we need about eight to ten operators on each side.

6.1 Overall Feedback from Officers

Since the GUI system is being developed in close collaboration with military SMEs and officers, suggested improvements are implemented continuously. However, the overall feedback we have received from military officers (typical users of command and staff training systems and C2 systems) is that webSAF appears to be better than other GUI systems they have seen before, especially with regard to accessibility, ease of use, and visual feedback. Senior military officers have also expressed that they want their command and staff training system, and also their C2 systems, to have GUIs with similar functionality.

6.2 Advantages of Developing a Web-Based GUI System

There are several advantages to developing our own web-based GUI system for controlling entities in constructive simulations:

- It requires minimal hardware for the operator clients.
- No simulation software (and thus no licenses) needs to be installed on the operator clients (only a web browser is needed).
- It can be tailored to a specific use (e.g. two-sided wargaming or command and staff training).
- It is in principle independent of the simulation tools in use (a plugin is needed for each tool), and can be used to control entities in a federation of different simulation tools; a sketch of such a plugin interface follows after this list.
- There are a lot of tools and libraries available (many of them open source) for developing web-based GUIs and applications.
- In principle, the web-based GUI system allows military participants to join a simulation experiment from any location, even on a classified network.
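The simulation-tool independence mentioned above can be pictured as a thin per-tool adapter layer behind the web server. The paper does not show webSAF's plugin API, so the interface below is a hypothetical sketch of the idea:

```typescript
// Hypothetical per-tool adapter: the GUI layer sees only this interface.
interface EntityStatus {
  id: string;
  lat: number;
  lon: number;
  headingDeg: number;
}

interface SimulatorAdapter {
  name: string;                                  // e.g. "VBS" or "VR-Forces"
  sendOrder(order: object): Promise<void>;       // e.g. Thrift RPC or WebLVC/JSON
  onEntityStatus(cb: (status: EntityStatus) => void): void;
}

// The server routes each order to the adapter that owns the addressed
// entities, so supporting a new simulation tool means one new adapter.
class OrderRouter {
  constructor(private adapters: SimulatorAdapter[]) {}

  dispatch(adapterName: string, order: object): Promise<void> {
    const adapter = this.adapters.find((a) => a.name === adapterName);
    if (!adapter) throw new Error(`no adapter registered for ${adapterName}`);
    return adapter.sendOrder(order);
  }
}
```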
We expect to see more web-based GUIs for simulation systems in the near future, and we also expect more web-based GUIs for C2 systems. For example, a demonstrator for a system for simulation support during the planning of military operations, with a web-based GUI, has been developed at FFI [24]. In principle, the same web-based GUI system could be used for wargaming, command and staff training, and C2, for example by defining different sets of operator roles.

7.0 FURTHER WORK

Much of the work ahead will be focused on developing behaviour models of combat drills for manoeuvre platoons [13]. It is worth mentioning that the composability and modularity of BT-based behaviour models open up opportunities for collaboration on development, and sharing of behaviour models, between nations using VBS. In addition, we need to include additional capabilities like combat service support and ISTAR in the simulation system. The GUI system must also be further developed with new roles and orders to control entities from the additional capabilities. Before we can start doing useful experiments, the simulation system requires calibration, validation and extensive testing. We expect to be able to conduct simulation experiments with multiple manoeuvre battalions on each side by mid-2019. Moreover, it will also be interesting to compare the results from these simulations with the results from our previous simulations, and investigate whether the new, more detailed simulations are more useful.

So far the development of webSAF has been focused on creating a working solution. In the future we will look at the possibility of making the GUI system more interoperable by using the Coalition Battle Management Language (C-BML) [25] standard for sending orders to the simulated units. We will also look at the possibility of employing the WebLVC protocol [17] for transferring entity status data from the whole simulation system (and not just VR-Forces).

8.0 SUMMARY AND CONCLUSION

This paper has presented the design and implementation of webSAF, a web-based GUI system for controlling entities in constructive simulations (semi-automated entities in VBS and VR-Forces). The GUI functionality is being developed in close collaboration with military SMEs and officers, and is designed to be easy to use, in order to reduce the number of operators required. The GUI system will be used to operate simulation-supported, two-sided wargames, and so far it has been tested in small experiments with an operator controlling a manoeuvre battalion, an operator controlling a battalion of indirect fire entities, and an operator controlling a battalion of air defence entities on each side.

Developing a web-based GUI system for controlling entities in constructive simulations has several advantages. It requires only minimal hardware, and no simulation software, for the operator clients. In addition, it can be tailored to a specific use (e.g. two-sided wargaming or command and staff training), and is in principle simulation system agnostic. Military officers have expressed that webSAF appears to be better than other GUI systems they have seen before, and that they want their command and staff training system, and their C2 systems, to have GUIs with similar functionality. We believe the web-based GUI approach is the way ahead to make constructive simulations and C2 systems more accessible and easier to use.

9.0 REFERENCES
{"Source-Url": "https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-MSG-159/MP-MSG-159-07.pdf", "len_cl100k_base": 7043, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 31097, "total-output-tokens": 9214, "length": "2e12", "weborganizer": {"__label__adult": 0.0011444091796875, "__label__art_design": 0.0009794235229492188, "__label__crime_law": 0.006801605224609375, "__label__education_jobs": 0.004848480224609375, "__label__entertainment": 0.0005331039428710938, "__label__fashion_beauty": 0.0004868507385253906, "__label__finance_business": 0.0007929801940917969, "__label__food_dining": 0.0009393692016601562, "__label__games": 0.0255889892578125, "__label__hardware": 0.0031585693359375, "__label__health": 0.001018524169921875, "__label__history": 0.0029048919677734375, "__label__home_hobbies": 0.00015878677368164062, "__label__industrial": 0.0032138824462890625, "__label__literature": 0.0005373954772949219, "__label__politics": 0.0032291412353515625, "__label__religion": 0.0006537437438964844, "__label__science_tech": 0.33642578125, "__label__social_life": 0.0003502368927001953, "__label__software": 0.041778564453125, "__label__software_dev": 0.556640625, "__label__sports_fitness": 0.0032367706298828125, "__label__transportation": 0.00406646728515625, "__label__travel": 0.0005908012390136719}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39608, 0.04269]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39608, 0.31897]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39608, 0.92107]], "google_gemma-3-12b-it_contains_pii": [[0, 3063, false], [3063, 7310, null], [7310, 11156, null], [11156, 14361, null], [14361, 15874, null], [15874, 19153, null], [19153, 22267, null], [22267, 24263, null], [24263, 26359, null], [26359, 29250, null], [29250, 32626, null], [32626, 35948, null], [35948, 38730, null], [38730, 39529, null], [39529, 39608, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3063, true], [3063, 7310, null], [7310, 11156, null], [11156, 14361, null], [14361, 15874, null], [15874, 19153, null], [19153, 22267, null], [22267, 24263, null], [24263, 26359, null], [26359, 29250, null], [29250, 32626, null], [32626, 35948, null], [35948, 38730, null], [38730, 39529, null], [39529, 39608, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39608, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39608, null]], "pdf_page_numbers": [[0, 3063, 1], [3063, 7310, 2], [7310, 11156, 3], [11156, 14361, 4], [14361, 15874, 5], [15874, 19153, 6], [19153, 22267, 7], [22267, 24263, 8], [24263, 26359, 9], [26359, 29250, 10], [29250, 
32626, 11], [32626, 35948, 12], [35948, 38730, 13], [38730, 39529, 14], [39529, 39608, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39608, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
93549042988b8c5afe413877fc9457919fd88e53
Applying Software Engineering Techniques to the Development of Robotic Systems

Claudia Pons¹,², Gabriela Arévalo¹,², Gonzalo Zabala¹, Ricardo Morán¹
¹ CAETI - UAI, Buenos Aires, Argentina
² CONICET, Avda. Rivadavia 1917 (1033), Buenos Aires, Argentina
{gabrielag.arevalo, claudia.pons, gonzalo.zabala}@uai.edu.ar, richi.moran@gmail.com

Abstract. Nowadays most robotic systems tend to be complex to maintain and reuse because existing frameworks are based mainly on code-driven approaches. This means the software development process is reduced to the implementation of systems using specific programming languages. As they evolve, the systems grow in size and in complexity. Even when these approaches address the needs of robotics-focused markets, currently used methodologies and toolsets fail to cope with the needs of such a complex software development process. The general objective of our project is the definition of a methodological framework, supported by a set of tools, to deal with the requirements of the robotic software development process. A major challenge is to make the step from code-driven to model-driven development of robotic software systems. Separating robotics knowledge from short-cycled implementation technologies is essential to foster reuse and maintenance. In this paper we report our initial results.

Keywords: robotic software system, software development process, software engineering.

1 Introduction

Robotic systems (RSs) play an increasing role in everyday life. The need for robotic systems in industrial and educational settings increases and becomes more demanding. While robotic systems grow to be more and more complex, applying engineering principles to their software development process is a major challenge these days. Traditional approaches, based mainly on coding the applications without using modelling techniques, are used in the development process of these software systems. Even though the applications are running and being used in different robotic systems, we identify several problems. Among them, it is worth mentioning that there is no clear documentation of design decisions taken during the coding phase, making the evolution and the maintenance of the systems difficult. When using specific programming languages, such as Smalltalk in Etoys³ or C in RobotC⁴, we lose the possibility of generalizing concepts that could be extracted, reused and applied in different systems, avoiding coding them from scratch when they are needed. Thus, we observe that currently used methodologies and toolsets fail to address the needs of such a complex software development process.

³ http://www.etoys.com/
⁴ http://www.robotc.net/

It is widely accepted that new approaches should be established to meet the needs of the development process of current complex RSs. Component-based development (CBD) [20], Service Oriented Architecture (SOA) [8] [9], as well as Model Driven software Engineering (MDE) [19] [15] and Domain-Specific Modeling (DSM) [13] are the main modelling and composition-based technologies in the RSs domain. In our project, we will investigate the current use of those modern software engineering techniques to improve the development of robotic software systems and their actual automation level.
Considering that existing systems are already coded, a major challenge is to make the step from code-driven to model-driven development of robotic software systems, in order to extract the general and specific concepts of existing applications written in different specific programming languages. The general objective of our research and development project is the definition of a methodological framework (composed of models and code), supported by a set of tools, able to deal with the requirements of the robotic software development process while taking the existing implemented approaches into account. Robotic platforms must possess a highly dynamic adaptive capacity, keeping pace with the rate of development of such technologies and the specific features of each hardware platform.

2 Specific Objectives and Working Hypotheses

In this project we are working on the following hypotheses:

- It is mandatory to work towards applying engineering principles to cope with the complexity of existing implemented robotic software systems, because current systems are mostly hand-crafted, single-unit systems.
- Interfaces and behaviour of robotic systems should be defined at a higher level of abstraction so that they can be reused on different platforms. Separating robotics knowledge from short-cycled implementation technologies is essential to foster reuse and maintenance.
- Applying existing software engineering modelling methodologies, such as MDE, SOA and CBD, to build robotic software systems will save a great amount of time and effort while favouring reusability and maintenance of such systems.

Within this context, the specific objectives of our project are:

– Summarizing the existing state of the art concerning the application of software engineering modelling methodologies, such as SOA, MDE and CBD, to the robotic systems development field;
– Building a methodological approach on top of the applications of existing techniques, providing an advance in the field;
– Building tool support for the robotic software development process. Examples of these tools are: a domain-specific modeling language equipped with graphical editors, code generation facilities, integration with web services, and component definition editors;
– Applying the results to build real robotic systems used in industry and education.

3 Problem Relevance: Existing Approaches

Although the complexity of robotic software is high because it is mainly based on specific programming languages, reuse is still restricted to the level of libraries. At the lowest level, different libraries have been implemented for robotic systems to perform tasks, such as mathematical computations for kinematics, dynamics and machine vision [10]. Instead of composing systems out of building blocks with proved services, the overall software integration process for another robotic system is often still a reimplementation of the glue logic to bring together the various libraries. Often, the overall integration is completely driven by middleware systems and their functionalities. Middlewares are often used to hide complexity regarding inter-component communication, such as OpenRTM-aist [4]. Obviously, this approach is expensive and does not take advantage of a mature process to enhance overall robustness. We have faced this problem in our own practice.
We have been programming educational robots for more than 10 years [3] [2], and in recent years we have observed that the emergence of robotic kits aimed at non-expert users has given rise to a significant number of educational projects using robots. Those projects apply robots at different education levels, from kindergarten through higher education, especially in the areas of physics and technology. In this context, one of the problems we have found is that the hardware of the robotic kits is constantly changing; in addition, its use is not uniform across different regions and even education levels. Therefore, the technical interfaces of these robots should hide these differences so that teachers are not required to change their educational material every time they are used. An example of such interfaces is "Physical Etoys" [2], a project in which we participated and which proposes a standard teaching platform for programming robots, regardless of whether they are based on Arduino, Lego, or other technologies.

In this context, it is widely accepted that new approaches should be established to meet the needs of the development process of today's complex RSs. Component-Based Development (CBD) [20], Service-Oriented Architecture (SOA) [8] [9], Model-Driven software Engineering (MDE) [15] and Domain-Specific Modeling (DSM) [13] are the most relevant technologies in the RSs domain.

Firstly, the component-based development paradigm [20] states that application development should be achieved by linking independent parts, the components. Strict component interfaces based on predefined interaction patterns separate the functionality into distinct parts and thus partition the overall complexity. This results in loosely coupled components that interact via services with contracts. Components, as architectural units, allow one to specify very precisely, using the concept of port, both the services provided and the services required by a given component, and to define a composition theory based on the notion of connector. Component technology offers high reusability and ease of use, but little flexibility regarding the implementation platform: most existing components are tied to C/C++ and Linux (e.g. Microsoft Robotics Developer Studio [1], EasyLab [7]), although some achieve more independence thanks to the use of a middleware (e.g. the Smart Software Component model [16], Orocos [10]).

Secondly, we need to define interfaces and behaviour at a higher level of abstraction so that they can be used in systems on different platforms. This is what prompted the idea of abstract components, which are independent of the implementation platform but can be translated into an executable software or hardware component. Thus, the migration from code-driven designs to model-driven development is mandatory for robotic components to overcome the current problems. A model-based description is a suitable means to express contracts at component interfaces, to apply tools that verify the overall behaviour of composed systems, and to automatically derive the executable software. Instead of building tool support for each framework from scratch, one should express the needed models in standardized modeling languages such as UML or a DSL, separating components from the underlying computer hardware.
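Although the paper expresses such abstract components in UML, the idea can be conveyed with a minimal Java sketch. This is our own hedged illustration, anticipating the 4-wheel-robot example used later in the paper; all interface and class names are assumptions, not an API from the paper. Interfaces play the role of ports, and a component coded only against them can be bound to any concrete platform (Arduino, Lego, a simulator) through glue code:

```java
// Hedged sketch of an "abstract component"; every name here is illustrative.
interface RangeFinder {                  // required service (a port)
    int distanceCm();                    // current distance reading in cm
}

interface DriveTrain {                   // required service (another port)
    void setPower(int left, int right);  // motor power in percent
}

// The component depends only on the ports above; platform-specific
// implementations of RangeFinder and DriveTrain are supplied as glue code.
final class ObstacleAvoider {
    private final RangeFinder range;
    private final DriveTrain drive;

    ObstacleAvoider(RangeFinder range, DriveTrain drive) {
        this.range = range;
        this.drive = drive;
    }

    // Provided service: one step of the avoid-obstacle behaviour.
    public void step() {
        if (range.distanceCm() < 20) {
            drive.setPower(50, -50);     // obstacle ahead: turn left
        } else {
            drive.setPower(75, 75);      // clear: keep moving straight
        }
    }
}
```

Retargeting such a component to a new robotic kit then reduces to implementing the two small interfaces, which is exactly the separation of robotics knowledge from short-cycled implementation technologies argued for above.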
In the context of software engineering, Model-Driven Development (MDD) [19] and the Domain-Specific Modeling approach (DSM) [13] have emerged as a paradigm shift from code-centric to model-based software development. Models are considered first-class constructs in software development, and developers' knowledge is encapsulated by means of model transformations. The essential characteristic of MDD and DSM is that the primary focus and work products of software development are models. Their major advantage is that models can be expressed at different levels of abstraction and hence are less bound to any underlying supporting technology.

Finally, Service-Oriented Architecture (SOA) [8][9] is a flexible set of design principles used during the phases of systems development and integration in computing. A system based on a SOA packages functionality as a suite of interoperable services that can be used within multiple, separate systems from several business domains. SOA also generally provides a way for consumers of services, such as web-based applications, to be aware of available SOA-based services. SOA defines how to integrate widely disparate applications for a Web-based environment and uses multiple implementation platforms. Rather than defining an API, SOA defines the interface in terms of protocols and functionality. Service-orientation requires loose coupling of services with operating systems and other technologies that underlie applications.

So far, there is no proposal taking advantage of the combined application of CBD, SOA and MDE, neither for robotic software system development in general, nor for educational robotic system development in particular.

4 First Results: Modeling and Automatic Code Derivation

The MDD approach represents a paradigm where models of the system, at different levels of abstraction, are used to guide the entire development process. Models are implementation-independent and are automatically transformed into executable code. The MDD process can be divided into three phases: the first phase builds a platform-independent model (PIM), which is a high-level, technology-independent model; then, this model is transformed into one or more platform-specific models (PSMs), which are lower level and describe the system in accordance with a given deployment technology; finally, the source code is generated from each PSM.

As said in Section 1, most systems are coded without documentation or design models. In this section we show how an MDD process can be obtained by automatically deriving models from the existing code of an already implemented robotic system, following a reverse-engineering approach. To illustrate our approach, we use a small example of a 4-wheel robot, which is composed of a distance sensor and two motors A and B. The robot moves straight ahead constantly while there are no obstacles in its way. Whenever the robot finds an obstacle, it turns left to avoid it and keeps moving. To find the obstacle, the distance sensor detects whether the robot has a wall at a distance of less than 20 cm. If so, the robot changes the power of the motors to make them turn left. If there is no wall, the motors keep the same values. Depending on the platform, there are different ways to implement this robot behaviour; we will show Physical Etoys\(^5\), RobotC\(^6\) and Pharo\(^7\).

Physical Etoys is a visual programming tool that connects the virtual world of computers with the real world in which we live.
With Physical Etoys you can program real-world objects (such as robots) to perform interesting tasks, or you can sense the world and use that information to control virtual objects (such as drawings on the screen). The user grabs tiles representing instructions and assembles them into a script. Figure 1 shows the visual representation of our example using Physical Etoys. If you do not use the predefined tiles to build the script, you can code the robot and its behaviour explicitly using Smalltalk\(^8\) (the embedded programming language), as shown in Figure 2.

RobotC is an integrated development environment targeted towards students that is used to program and control LEGO NXT, VEX and RCX robots using a programming language based on C. It aims to allow code to be ported from one robotics platform to another with little or no change. RobotC has no visual programming environment; all robot behaviour must be defined by coding in C, as shown in Figure 3.

Fig. 3. Implementation of the Robot with NXC

If you use a robot framework embedded in an existing programming language, in our case Pharo (a free open-source Smalltalk environment), you can also code the robot behaviour. Figure 4 shows how we code the example using Pharo:

```smalltalk
"The distance sensor's initialization is elided in the original listing."
| nxt motorA motorC distanceSensor |
nxt := LegoNxt new connectOnPort: 'COM4'.
motorA := NxtMotor new plugOn: nxt portA.
motorC := NxtMotor new plugOn: nxt portC.
[ distanceSensor rawValue < 20
    ifTrue: [ motorA power: 50. motorC power: -50 ]
    ifFalse: [ motorA power: 75. motorC power: 75 ] ] repeat.
```

Fig. 4. Implementation of the Robot with Pharo

Depending on the abstraction level of the programming language, we sometimes need to deal with specific implementation details. For example, in Pharo we explicitly code how to connect to the port and plug in the motors before specifying the desired behaviour; these connections are implicit in other platforms. If we need to represent our example on another platform, we must provide some code transformation from one platform to the other, or even build the application from scratch. But this process is expensive.

Our proposal is to build a PIM that allows us to abstract the domain concepts and their functionalities using MDD and CBSD. From the generated models we can then derive the code in any specific robotic language. Thus, in our example, we can identify the components Robot, DistanceSensor and Motor, with the functionality described previously. We can represent this robot with Component and Behaviour models, expressed as their respective UML diagrams. Figure 6 shows a UML Component Diagram that identifies the structural components of the example and the required/provided interfaces of their connectors, and Figure 5 shows an Activity Diagram modelling the behaviour of the robot example. Even though these models are useful enough to understand the existing implementation and to show the transformation from PSM (code) to PIM (Component and Behaviour models), we could also have an intermediate PSM of objects (given that we are working with object-oriented code) represented by a Class Diagram inferred from the code; due to the space limitations of the paper, we will not present the corresponding Class Diagram of our example.

\(^5\) http://tecnodacta.com.ar/gira/projects/physical-etoys/
\(^6\) http://www.robotc.net
\(^7\) http://www.pharo-project.org
\(^8\) www.cincomsmalltalk.com
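As a textual stand-in for that omitted Class Diagram, the following minimal Java sketch (our own illustration; the class and member names are inferred from the running example, not taken from the paper) shows the kind of intermediate object-oriented PSM that reverse engineering would recover from listings such as the Pharo one above:

```java
// Hedged sketch of the intermediate object model; all names are illustrative.
class Motor {
    private int power;                  // current power setting (percent)
    void power(int p) { this.power = p; }
}

class DistanceSensor {
    private int rawValue;               // last distance reading in cm
    int rawValue() { return rawValue; }
}

class Robot {                           // aggregates the identified parts
    DistanceSensor distanceSensor;      // one distance sensor
    Motor motorA, motorB;               // two motors, A and B
}
```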
Thus, with an abstraction of the concepts represented mainly in the Component and Behaviour models, we can semi-automatically generate the example in another robot programming language, given that we can provide glue code to fill in the platform-specific parts (as we have shown in Pharo).

Fig. 5. Activity Model of the Behaviour of our Example
Fig. 6. Component Model of our Example

All the previous examples are built on already implemented applications where the developer is able to access and, if needed, modify the code. However, the robot can also be connected to external devices through predefined interfaces defined in those external components, whose implementation the developer cannot access from the environment he/she is working in. In our example, Figure 8 (Pharo code) shows how we code and connect our example robot to an external component named Kinect Server, which indicates whether the robot should move forwards or backwards depending on the movements of the user's arms. In this specific case, we are not able to see the code of the component that models those arm movements. However, in Pharo we can design the interfaces that connect to the external component; here the code is simply `kinect := KinectServer new connect`. In Physical Etoys we can design the interfaces that connect to the external component using a visual representation and code (Figures 9 and 10); in our case, blue points represent the connections to the external device that deals with the arm movements. By designing the interfaces we mean that we do not implement the functionality of the external components; rather, we implement the glue code needed to connect both components. Even though the internal and external components have a similar structure in the models, the way they connect to the robot is different: in the first case the developer implements their interfaces, while in the second case the interfaces are already defined and the developer connects to them by implementing the corresponding glue code (in Physical Etoys, this is represented by the blue points).

It is therefore worth distinguishing two models: the Component Model shows the internal components, and a Service Model shows the external components. Figure 7 shows the Component and Service models together. In our specific case, the Service Model is reduced to a single component; on more complex platforms, there can be several services, each modelled with its respective glue code connecting it to the implemented robots.

Summarizing, based on existing implementations we propose to infer the structure and behaviour of the robots into class, activity and component/service models. These models are PIMs from which new implementations of the robots can then be generated in another specific programming language, keeping the abstract concepts in the models and the platform-specific features in the PSMs (such as code).

Fig. 9. Implementation of the Robot with Etoys in a visual way
Fig. 10. Implementation of the Robot with Etoys using code

5 Conclusions and Future Work

In our project we are focused on capturing the fundamental concepts of the robotic software development process, together with their relationships and properties. This modeling approach includes concepts to represent services and components as primary elements of the robotic system, at a higher level of abstraction than the code itself.
So, the CBSD and SOA paradigms provide a starting point for an MDE approach in robotics where the differences between the various software platforms and middleware systems can be completely hidden from the user thanks to the definition of an intermediate abstraction level. The original contribution of this project consists in the development of a methodological framework, supported by different tools, for the construction of robotic software systems using mainly MDD. There are only preliminary proposals on applying model-driven development to robotics; see for example the works described in [17], [12], [7], [18] and [5]. None of these works takes advantage of the combination of the model-driven paradigm with service-oriented and component-based approaches, as we propose in this project. It is also worth mentioning Microsoft Robotics Studio (MSRS) [1], a service-oriented development and simulation platform for creating robotics applications; MSRS is strongly based on the concept of SOA.

References
{"Source-Url": "http://imgbiblio.vaneduc.edu.ar/fulltext/files/TC104342.pdf", "len_cl100k_base": 4195, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 21726, "total-output-tokens": 5600, "length": "2e12", "weborganizer": {"__label__adult": 0.0004010200500488281, "__label__art_design": 0.0002799034118652344, "__label__crime_law": 0.0003981590270996094, "__label__education_jobs": 0.0006666183471679688, "__label__entertainment": 4.9233436584472656e-05, "__label__fashion_beauty": 0.0001628398895263672, "__label__finance_business": 0.0001825094223022461, "__label__food_dining": 0.0003659725189208984, "__label__games": 0.0005235671997070312, "__label__hardware": 0.0013513565063476562, "__label__health": 0.0004978179931640625, "__label__history": 0.00021076202392578125, "__label__home_hobbies": 0.00011843442916870116, "__label__industrial": 0.0006012916564941406, "__label__literature": 0.0001842975616455078, "__label__politics": 0.00025081634521484375, "__label__religion": 0.0004222393035888672, "__label__science_tech": 0.01274871826171875, "__label__social_life": 8.124113082885742e-05, "__label__software": 0.0032291412353515625, "__label__software_dev": 0.9755859375, "__label__sports_fitness": 0.0004107952117919922, "__label__transportation": 0.000934600830078125, "__label__travel": 0.0002052783966064453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24741, 0.02675]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24741, 0.79598]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24741, 0.89501]], "google_gemma-3-12b-it_contains_pii": [[0, 2454, false], [2454, 5124, null], [5124, 8272, null], [8272, 11768, null], [11768, 13420, null], [13420, 16638, null], [16638, 17947, null], [17947, 20044, null], [20044, 21659, null], [21659, 24741, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2454, true], [2454, 5124, null], [5124, 8272, null], [8272, 11768, null], [11768, 13420, null], [13420, 16638, null], [16638, 17947, null], [17947, 20044, null], [20044, 21659, null], [21659, 24741, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24741, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24741, null]], "pdf_page_numbers": [[0, 2454, 1], [2454, 5124, 2], [5124, 8272, 3], [8272, 11768, 4], [11768, 13420, 5], [13420, 16638, 6], [16638, 17947, 7], [17947, 20044, 8], [20044, 21659, 9], [21659, 24741, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24741, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
273436df25d17136bb4b6a6572e97d20257d0b44
Rserve: A Fast Way to Provide R Functionality to Applications

Simon Urbanek

Abstract. Rserve is a TCP/IP server which allows other programs to use facilities of R from various languages without the need to initialize R or link to the R library. Every connection has a separate workspace and working directory. Client-side implementations are available for popular languages such as C/C++ and Java. Rserve supports remote connections, authentication and file transfer. This paper describes the Rserve concept, compares it with other techniques and illustrates its use on several practical examples.

1 Introduction

The R statistical environment provides a powerful suite of tools for statistical analysis in many scientific fields. Its application is not limited to statistical research; its modular design allows the use of R and its packages in other projects. Often it is feasible to hide the programming aspects of R from the user and integrate the computing capabilities of R into customized software designed for a specific target group. Possible examples are web-based servlets that allow the user to analyze his/her data using a fixed process, or data visualization software which uses R facilities to generate statistical models.

R provides two native interfaces for communication with other applications: a simple standard input/output model, or R as a shared library. As we will describe later, these are unsatisfactory in many situations, either because of speed concerns or when the host application is not written in the C language. In this paper we propose another method of using R facilities from other applications, called Rserve. It is not limited to specific programming languages and even allows separation of the client and R environments. The main concerns while developing the system were speed and ease of use.

In section 2 we describe the basic design and functionality of Rserve, while a more detailed description of the implementation is given in section 3. In section 4 we compare Rserve to other communication methods, including the Omegahat (Temple Lang, 2000) approach, and in section 5 we illustrate the use of Rserve on several basic examples. A real application of Rserve is described in section 6. Concluding remarks and ideas for the future are mentioned in section 7.

2 Basic design and features

The main goal of Rserve is to provide an interface which can be used by applications to perform computations in R. Our experience with other communication methods has shown that there are three main points to consider when designing a new system: separation, flexibility and speed.

It is important to separate the R system from the application itself. One reason is to avoid any dependence on the programming language of the application, since the native direct interface to R (Chambers, 1998) is usable from the C language only (R Development Core Team, 2003). Another aspect comes from the fact that tight integration with R is more error-prone, because the application must take internals of R into account. On the other hand, application developers want the interface to be very flexible and to make use of most R facilities. Finally, speed is a crucial element, because the goal is to provide the user with the desired results quickly, without the need to start an R session from scratch.

A client/server concept allows us to meet all three key requirements. The computation is done by the Rserve core, which is a server answering requests from clients such as applications.
The communication between Rserve and the client is done via network sockets, usually over the TCP/IP protocol, but other variations are also possible. This allows the use of a central Rserve from remote computers, the use of several Rserves by a remote client to distribute computation, but also local communication on a single machine. One Rserve can serve multiple clients simultaneously\(^1\).

Every connection to Rserve obtains its own data space and working directory. This means that objects created by one connection don't affect other connections in any way. Additionally, each connection can produce local files, such as images created by R's bitmap graphics device, without interfering with other connections. Every application can open multiple connections to process parallel tasks.

The data transfer between the application and Rserve is performed in binary form to gain speed and minimize the amount of transferred data. Intermediate objects are stored in Rserve, therefore only objects of interest need to be transferred to the client. For practical examples, see section 5.

Besides communication with the R core, Rserve has an integrated authentication and file transfer protocol, which makes Rserve suitable for use across separate machines. User authentication is provided to add a level of security for remote use. File transfer allows copying of files needed for the computation or produced by R from the client to the server and vice versa.

Currently Rserve supports two main groups of commands for communication with R: creation of objects in R and evaluation of R code. Most basic objects, such as numbers, strings or vectors, can be constructed via direct object creation. The contents of the objects are sent in binary form from the client to the server. This provides an efficient way of transporting data necessary for evaluation. All objects are always passed by value, to keep client and server data spaces separate. This way both the client and the server are free to dispose of the data at any time, preventing crashes which are inherent in other communication methods where the systems physically share the same data.

The second main command group is the evaluation of R code. As opposed to object creation, such code is sent in clear text to Rserve and is treated as if it had been typed on the console in R. The resulting object of the evaluation can be sent back in binary form to the client if requested. Most R types are supported, including scalar numbers, strings, vectors, lists (hence classes, data frames and so on), lexical objects etc. This allows Rserve to pass entire models back to the client. The client may decide not to receive any objects, which is useful when setting up intermediate objects in R which are not directly relevant to the client.

Rserve provides two basic error handling facilities. The three possible results of an evaluation are successful evaluation, a run-time error in the code, and a parser error. The status is always returned to the client application to allow a corresponding action. Since Rserve is just a layer between the application and R, it is still possible to influence run-time error handling in R itself, e.g. with the error option or the try command.

\(^1\) An exception to this rule is the Windows operating system. The reasons and alternative solutions are described in the next section.
A typical use of Rserve facilities is to load all necessary data into R, perform computations according to user input, such as the construction of models, and send results back to the application for display. All data and objects are persistent until the connection is closed. This allows an application to open a connection early, e.g. when the user first specifies the dataset, pass all necessary data to the server and respond to user input by ad-hoc computation of the desired models or estimates. Since the results are not in textual form, no tedious parsing of the results is necessary. The interface to Rserve is modular and documented, allowing access to Rserve from any application or programming language which supports sockets, including current scripting and programming languages. We have implemented a client for Rserve in pure Java, which interfaces to most facilities of Rserve and maps all objects available in Rserve onto native Java objects or classes. The use of the Java client is illustrated in the examples section.

3 Implementation details

In the previous section we presented the basic goals, design and features of Rserve. In this section we describe the technical implementation details. The information provided here is aimed at developers who want to understand the technical details of the implementation or to write a new Rserve client. Others are free to skip this section; the use of Rserve is illustrated in the following sections.

Rserve uses a client/server design to separate R from the client, to allow distribution of tasks over multiple computers, or to provide a central computation node. The communication can be performed over any reliable bi-directional stream; the current implementation uses stream sockets. This allows communication over TCP/IP or unix sockets. The default TCP port is 6311 but can be modified upon Rserve startup.

Rserve uses its own message-oriented data transfer protocol over the stream, which is described in the Rserve online documentation (Urbanek, 2003). It defines the encoding of all basic data types, such as integers, doubles and strings, but also of R's SEXPs - simple expressions. All data types in R are internally represented as SEXPs. Any more complex objects, such as arrays, vectors, lists or closures, are transported as SEXPs. Rserve takes care of the encoding and decoding of the expressions and uses its own storage format. Because of the client/server nature of Rserve all objects are passed by value (an exception to this rule are symbols, which are passed by name unless explicitly evaluated). The same format is used for the encoding of data types in both directions. We decided to define our own data transfer encoding in order to be able to mimic R's SEXPs more closely. The definition is platform-independent as it defines the format of all primitive types as well. The current version of Rserve supports encoding of most SEXP types, namely all primitive types, logicals, vectors, arrays, lists, LANGSXP, CLOSXP and symbols (as names). Decoding is implemented only for integers, doubles, strings and the corresponding arrays. Other types are likely to follow in the future.

The protocol is message-oriented. Rserve waits for a packet containing the command (e.g. CMD_eval) and all associated parameters (e.g. the string "R.version.string"). Rserve performs the specified action (in this case evaluating "R.version.string") and sends a response packet.
The response packet indicates whether the command was successful or not and also contains the requested data (in our case a string containing the R version). In case of an error, the source of the error is indicated (I/O error, parser error or run-time evaluation error in R). The commands supported by the current Rserve can be divided into the following categories:

1. user authentication
2. evaluation of R expressions
3. assignment of values to R symbols
4. file transfer
5. administration (server shutdown)

User authentication allows restricting the use of Rserve to certain users. Rserve currently supports two authentication methods: plain text or unix crypt passwords. The Rserve user information is stored in a separate file on the server, which is not connected to the system user database. The goal is to prevent unauthorized access to Rserves which operate in remote mode. The transport itself is not encrypted; it is possible to use other tools, such as ssh, to enhance security. For a more detailed discussion concerning R, Rserve and security, please consult the online documentation (Urbanek, 2003).

Evaluation of R expressions is the key functionality of Rserve. It enables the client to execute code in R and to retrieve the result. The result is transported to the client as an encoded SEXP. Unless additional packages are used, any printed output is ignored; only the resulting value is returned. The error handling behavior depends on settings in R, such as the error option. From the application's point of view the result is the same as if the error had occurred in a regular R session using the terminal - the default being to unwind the stack and to return control to the application along with the corresponding error code.

Assignment of values to R symbols provides a fast way of transporting data to Rserve. Without this feature the only way of passing values would be to send the corresponding command string, such as "a <- c(1.5,2.4,6.7,3.5,1.2)". This approach is rather clumsy, since the application usually stores the values in binary form and would have to construct the string. Further problems are caused by special characters and by the limited string representation of numbers. Therefore Rserve provides a way of transporting values in encoded form to Rserve and assigning them to R symbols. Rserve could be extended by adding decoding support for more SEXP types; this would allow invocation of R commands in the style of the .Call function. This functionality should be available in the next versions of Rserve.

Since Rserve can be used in remote mode, where the server and the client run on different machines, it may be convenient to transfer files, such as data sets or images generated by R, from the server to the client or vice versa. Rserve provides a simple file transfer facility for this purpose. The supported operations comprise opening, creating, reading, writing and deleting files.

Finally, the last group provides the shutdown command for server administration. Although the server responds to the usual shutdown signals, such as TERM and KILL, the shutdown command provides the most graceful termination: new connections are not accepted, but all current connections are kept open until closed by the client, then the server terminates.

So far we have illustrated what happens while a client is connected to Rserve. One of our main goals was to eliminate the R initialization delay and to provide a separate data space and working directory for each connection.
Therefore we need to describe what happens when a new connection is accepted. Rserve is linked to the R shared library and initializes R during its startup, so we have an initialized R waiting for commands. Rserve uses fork to create a new, initialized process as soon as a new incoming connection arrives. On most current operating systems fork has little overhead, since the code segment is shared and data pages are copied on write only. This allows very fast spawning of R processes for the clients. At the same time this method guarantees that each connection receives a clean, separate data space, unpolluted by previous connections and independent of all other R instances.

Before answering queries from the client, Rserve creates a new working directory for the connection. The root of the working directories is configurable (the default is /tmp/Rserv) and each subdirectory is of the form connX, where X is a decimal number unique to the connection. Empty working directories are removed once the connection has been closed. The reason for retaining non-empty directories is that a local application (e.g. a web server) may want to access the generated data (e.g. bitmap images previously generated) even after the connection was closed, and is then responsible for their removal. This feature may become configurable at some later point if necessary.

Rserve initializes an incoming connection by sending an identification string of 32 bytes which describes the basic capabilities of the server. Therefore listening Rserves are easily identifiable, as they send 32 bytes of which the first four are "Rsrv".

Rserve was primarily designed for unix operating systems, because those are mainly used for network servers. Rserve can also be used on computers running the Windows operating system, but several restrictions apply. The main difference is the inability to use the fork command to spawn new instances of Rserve quickly; Windows provides no such facility. Although Rserve supports threads, R does not, and therefore there is no alternative but to use separate Rserve instances for separate connections. Therefore the Windows version of Rserve supports only one connection at a time. Any subsequent connections to the same Rserve share the same working directory and data space; in this case Rserve doesn't change the working directory. Depending on the client, there are several approaches to using Rserve in a Windows environment. Applications which use one Rserve instance at a time are free to launch their own instance, use it and shut it down upon completion. This is sufficient for most applications. More sophisticated applications can initialize a pool of parallel Rserve instances and distribute computation among them, launching new instances when necessary. In Windows this can easily be done by an external application; therefore we decided not to incorporate this functionality directly in Rserve.
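The 32-byte identification string makes it cheap to detect a listening Rserve. The following minimal Java sketch (our own illustration, not part of the Rserve distribution) probes a host on the default port and checks the "Rsrv" signature:

```java
import java.io.InputStream;
import java.net.Socket;

// Minimal probe sketch: on connect, Rserve sends a 32-byte identification
// string whose first four bytes are "Rsrv", so reading those bytes is
// enough to recognize a listening Rserve.
public class RservProbe {
    public static boolean isRserve(String host, int port) {
        try (Socket s = new Socket(host, port)) {
            InputStream in = s.getInputStream();
            byte[] id = new byte[32];
            int off = 0;
            while (off < id.length) {            // read the full ID string
                int n = in.read(id, off, id.length - off);
                if (n < 0) return false;         // stream closed early
                off += n;
            }
            return id[0] == 'R' && id[1] == 's'
                && id[2] == 'r' && id[3] == 'v';
        } catch (Exception e) {
            return false;                        // no server or I/O problem
        }
    }

    public static void main(String[] args) {
        // 6311 is Rserve's default TCP port, as described above
        System.out.println(isRserve("127.0.0.1", 6311));
    }
}
```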
4 Comparison with other methods

Rserve aims to complement the variety of available methods for communication with R, not to replace them. The native API for communication with R is defined at the level of the C or FORTRAN languages, excluding other languages unless some kind of bridge, specific to each language, is used. Rserve provides an interface which is defined independently of any programming or scripting language.

A console-based interface, where commands for R are stored in a file and the results are written into another file (also known as batch mode), is currently used by several applications, such as VASMM (Masaya Iizuka and Tanaka, 2002). It is very slow, because a new instance of R has to be started for each request. Results are usually stored in textual form, which is not suitable for interprocess communication and requires a parser at the application's end. This may not be a problem for some scripting languages such as perl, but it is a problem for other languages such as Java. Rserve still provides a way of capturing textual output if necessary, although the preferred method is binary transfer. In turn, Rserve has very little overhead for each new connection, because it doesn't need to initialize a new instance of R.

Since our main applications of Rserve involve the Java client, we also compared Rserve to the SJava interface from the Omegahat project (Temple Lang, 2000). SJava is conceptually far more flexible than Rserve, because it allows calling both R from Java and Java from R. Rserve implies that Java is the controlling application and, unlike SJava, it has no concept of callbacks. As Rserve provides computational facilities for the client and every action is initiated on the Java side, callbacks are in fact undesirable, since R is not thread-safe.

Let us compare Rserve with the R-from-Java part of SJava, because the two are based on very similar philosophies. Rserve can be used remotely, because objects are copied when necessary. This approach allows distributed computing on multiple machines and CPUs. SJava works only locally, since it embeds R into Java via JNI. Because there is no synchronization between Java and R, and given that R is not multi-thread safe, it is fatal to make more than one concurrent call from Java into R; the application developer is responsible for proper synchronization when using SJava. Rserve performs this synchronization by design and also allows the use of multiple concurrent connections. SJava allows passing of object references, which can lead to serious problems and crashes if utmost care is not taken.

Both SJava and Rserve support conversion of basic object types between Java and R. Rserve provides a much wider variety of objects passed to Java by encapsulating all native SEXPs (simple expressions) of R in a Java class. In SJava conversion of complex types is supported, but the developer must implement his own class converters.

Probably one of the main disadvantages of SJava is that it does not run out of the box. The code is dependent on the hardware and operating system used, as well as on the Java implementation. In general it is very hard to set up, and the solution cannot be deployed with the application. Rserve comes as a regular R source package for unix platforms and as a binary executable for Windows. The client side of Rserve needs no special setup and is platform-independent, since it is written in pure Java, currently requiring only the JDK 1.1 specification. The Rserve client classes can easily be deployed with any Java program and no third-party software is necessary.

Finally, R has its own set of functions for socket communication, therefore it should be possible to build a pure R program mimicking the same functionality as Rserve. Although this is true, such an R program would only be able to serve one connection at a time and would lack separate workspaces.
The use of the serialization format of R instead of the Rserve protocol was also suggested, but the serialization is known only to R and therefore the application would have to implement the full serialization protocol. Only limited documentation was at our disposal when the decision was made; therefore we decided to use our own binary protocol.

5 Using Rserve

Rserve itself is provided as a regular R package and can be installed as such. It is not started via the library command, but by starting the Rserve executable (Windows) or typing R CMD Rserve on the command line (all other platforms). By default Rserve runs in local mode with no enforced authentication. Once Rserve is running, any application can use its services.

All of our applications using Rserve are Java programs which use R for computation, therefore we will show examples using the Java client for Rserve. The principles are identical when using other Rserve clients, therefore using Java as the starting point poses no limitation. Before plunging into real examples, let us consider the minimal "hello world" example:

```java
Rconnection c = new Rconnection();
REXP x = c.eval("R.version.string");
System.out.println(x.asString());
```

The code has the same effect as typing `R.version.string` in R. In the first line a connection to the local Rserve is established. Then the R command is evaluated and the result stored in a special object of the class REXP. This class encapsulates any objects received from or sent to Rserve. If the type of the returned object is known in advance, accessor methods can be called to obtain the Java object corresponding to the R value, in our case a regular String. Finally this string is printed on the standard output.

The following code fragment illustrates the use of slightly more complex native Java types:

```java
double[] d = (double[]) c.eval("rnorm(100)").getContent();
```

This single line in Java provides an array of 100 doubles representing random numbers from the standard normal distribution. The numeric vector in R is automatically converted into the Java type double[]. In cases where no native Java type exists, the Rserve Java client defines its own classes such as RList or RBool\(^2\). This approach makes the use of Rserve very easy.

As a first, more practical example we want to calculate a lowess smoother through a given set of points. The Java application lets the user specify the data, allowing interactive changes of the points; it displays a regular scatterplot and needs the coordinates of the smoother to be obtained from R. One way of obtaining such a result would be to construct a long command string of the form lowess(c(0.2,0.4,...), c(2.5,4.8,...)) and use the eval method to obtain the result. This is somewhat clumsy, because the points usually already exist in a double array in the Java application and the command string would have to be constructed from these. An alternative involves constructing objects in R directly. The following code shows the full lowess example:

```java
double[] dataX, dataY;
...
Rconnection c = new Rconnection();
c.assign("x", dataX);
c.assign("y", dataY);
RList l = c.eval("lowess(x,y)").asList();
double[] lx = (double[]) l.at("x").getContent();
double[] ly = (double[]) l.at("y").getContent();
```

First the Java application defines the arrays for the data points dataX and dataY. The application is responsible for filling these arrays with the desired content. Then we assign the contents of these arrays to the R variables x and y.
The assign command transfers the contents in binary form to Rserve and assigns them to the specified symbol. This is far more efficient than constructing a string representation of the content. Once the variables are set in R, we are ready to use the lowess function. It returns a list consisting of two vectors x and y, which contain the smoother points. The RList object provides the method at for extraction of named entries of a list. Since lists may contain entries of different types, the object returned by the at method is of the class REXP, whose content can in our case be cast into double[]. The result can now be used by the Java application.

More complex computations can be performed even without transmission of the resulting objects. This is useful when defining functions or constructing complex models. Model objects are usually large, because they contain the original data points, residuals and other meta data. Although they can be transferred to the client, it is more efficient to retain such objects in R and extract only the relevant information. This can be done by using the voidEval method, which does not transfer the result of the evaluation back to the client:

```java
c.assign("y", ...);
...
c.voidEval("m <- lm(y ~ a + b + c)");
double[] coeff = (double[]) c.eval("coefficients(m)").getContent();
```

In the above example a linear model is fitted, but its content is not passed back to the client. It is stored in an object in R for later use. Finally, the coefficients are extracted from the model and passed back to the Java application.

So far we have used Rserve in local mode only. Extension to remote Rserve connections is possible without code changes, except for additional parameters to the Rconnection constructor, specifying the remote computer running Rserve. For details about the use of remote authentication, error handling and file transfer, consult the documentation supplied with Rserve and the Java client. The use is again straightforward, since native Java facilities, such as input/output streams, are used.

\(^2\) Java's boolean type has no support for NA missing values, therefore it cannot be used to directly represent the logical type in R.

6 Example

In the following we describe a real-life application of Rserve. The example features Klimt (Urbanek and Unwin, 2001), a software package for the visualization and analysis of trees and forests. Klimt is written entirely in Java and provides numerous interactive facilities for the visualization of tree models and the analysis of associated data. Klimt can be used as a stand-alone application, but it requires R for the construction of tree models. Therefore it needs a way of communicating with R to perform the necessary computations.

There are four tasks for which Klimt connects to an Rserve: initialization, construction of a tree from the open data set, construction of tree branches when the user interactively modifies a tree, and construction of derived variables. When initializing R by opening the Rserve connection, Klimt checks the version of R and loads the necessary libraries for tree construction - tree or rpart, depending on the user's choice. Before the first tree is generated or the first variable is used, Klimt stores the entire data set in R by assigning each variable of the data set to an R object of the same name. For tree construction Klimt simply evaluates an R expression of the form "tree("+formula+","+parameters+")$frame". The resulting object contains a data frame which entirely describes the tree.
This information is converted by Klimt into an internal representation of a tree. The formula is generated from the items the user selects from a list. Optional parameters can be specified by the user. It is recommended to wrap the evaluated expression in the try function: the resulting object is then either the requested tree, or a string containing the error message if the command was not successful. In Klimt the actual code looks like this:

```java
REXP r = c.eval("try(tree(" + formula + "," + parameters + ")$frame)");
if (r.getType() == REXP.XT_STRING) {
    String error = r.asString();
    ...
} else {
    SNode root = convertTree(r.asList());
    ...
}
```

Here SNode is the internal recursive representation of a tree in Klimt. A similar approach is used for interactive tree splitting. The user interactively specifies the split, resulting in two nodes. Two subsets corresponding to the interactively created nodes are used; one tree is grown for each node and attached to its parent node. The main advantage is that the connection is held open, so the data set doesn't need to be re-transmitted to Rserve.

Finally, derived variables can be created by evaluating an expression supplied by the user and storing the result in the requested variable:

```java
REXP r = c.eval("try(" + varName + " <- " + expr + ")");
```

If the expression supplied by the user is correct, the result must be an array of the same length as the data set in Klimt. Since the variables are stored directly in R, expressions of the form \( v_1/v_2 + v_3 \) deliver the expected result if the data set contains the variables \( v_1, v_2 \) and \( v_3 \).

7 Conclusions

Rserve complements the family of interfaces between applications and R by providing a fast, language-independent and remote-capable way of using all facilities of R from other programs. Due to the clean separation between R (server) and the application (client), internal data manipulation on one side cannot affect the other. Using network sockets for the communication ensures platform and software independence of the client and the server; at the same time, restriction to local use is also possible, requiring no physical network. For concurrent connections Rserve offers both data and file space separation between connections. Each new connection is accepted almost immediately, without the need to initialize the R engine. The integrated file transfer protocol allows the use of remotely created files, such as plot bitmaps created by R. User authentication provides a level of security, especially in remote mode. This concept is suitable for distributed computing and load balancing. The supplied Java client makes it easy to embed R facilities in Java programs. Evaluation and transfer of most types from R to the application is provided, including complex objects such as models.
All basic types are automatically converted to corresponding Java classes. Rserve is very versatile, since it poses no limit on the facilities of R that can be used. Although Rserve allows the execution of all R commands, the user should avoid any commands involving the GUI or console input, since Rserve has no console and there is no guarantee that it has any GUI at all. An exception are applications that provide their own copy of Rserve and have control over the way Rserve is started.

Typical uses of Rserve include interactive applications using R for model construction (see the Klimt project) or web servlets performing online computations. As of now, only basic types, such as numbers, strings and vectors thereof, can be assigned directly to R objects. The framework allows the transfer of arbitrarily complex types supported by the REXP class, but the Rserve side is not fully implemented yet. Only the transfer of evaluated objects supports all common expression types.

Rserve currently provides two client implementations: for the Java and C languages. The Rserve protocol is well defined and allows the implementation of further clients in other programming or scripting languages when needed. Rserve was tested on the Linux, Mac OS X and Windows operating systems. The Windows version is the only restricted one, because there is no way of spawning new instances of R quickly in Windows. If all a Windows application needs is non-concurrent Rserve connections, it can provide its own copy of the Rserve binary, which will automatically find the last installed R and use it for computations, preventing clashes with other applications.

The Rserve project is released under the GPL software license, which means that it can be modified or enhanced if necessary. The current Rserve is already used by several projects and is being enhanced as needs arise. For details and recent development, please visit the Rserve project page: http://stats.math.uni-augsburg.de/Rserve

References

Affiliation

Simon Urbanek
Department of computer oriented statistics and data analysis
University of Augsburg
Universitätsstr. 14
86135 Augsburg
Germany
E-mail: simon.urbanek@math.uni-augsburg.de
{"Source-Url": "https://www.r-project.org/conferences/DSC-2003/Proceedings/Urbanek.pdf", "len_cl100k_base": 6674, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 30821, "total-output-tokens": 7485, "length": "2e12", "weborganizer": {"__label__adult": 0.0002639293670654297, "__label__art_design": 0.00028634071350097656, "__label__crime_law": 0.0003070831298828125, "__label__education_jobs": 0.0009012222290039062, "__label__entertainment": 8.296966552734375e-05, "__label__fashion_beauty": 0.00012093782424926758, "__label__finance_business": 0.0002651214599609375, "__label__food_dining": 0.00032973289489746094, "__label__games": 0.0004730224609375, "__label__hardware": 0.0008320808410644531, "__label__health": 0.00042629241943359375, "__label__history": 0.0002340078353881836, "__label__home_hobbies": 7.30752944946289e-05, "__label__industrial": 0.0004203319549560547, "__label__literature": 0.00022590160369873047, "__label__politics": 0.00021898746490478516, "__label__religion": 0.0004191398620605469, "__label__science_tech": 0.053802490234375, "__label__social_life": 0.000125885009765625, "__label__software": 0.025909423828125, "__label__software_dev": 0.91357421875, "__label__sports_fitness": 0.0002512931823730469, "__label__transportation": 0.0003654956817626953, "__label__travel": 0.00018465518951416016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34256, 0.01068]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34256, 0.61114]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34256, 0.91914]], "google_gemma-3-12b-it_contains_pii": [[0, 1916, false], [1916, 5464, null], [5464, 9053, null], [9053, 12325, null], [12325, 16148, null], [16148, 19735, null], [19735, 22855, null], [22855, 25914, null], [25914, 29549, null], [29549, 32906, null], [32906, 34256, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1916, true], [1916, 5464, null], [5464, 9053, null], [9053, 12325, null], [12325, 16148, null], [16148, 19735, null], [19735, 22855, null], [22855, 25914, null], [25914, 29549, null], [29549, 32906, null], [32906, 34256, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34256, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34256, null]], "pdf_page_numbers": [[0, 1916, 1], [1916, 5464, 2], [5464, 9053, 3], [9053, 12325, 4], [12325, 16148, 5], [16148, 19735, 6], [19735, 22855, 7], [22855, 25914, 8], [25914, 29549, 9], [29549, 32906, 10], [32906, 34256, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34256, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
9d1adcbba193a22228b94d3b131d00141040bd45
Automating DNSSEC Delegation Trust Maintenance

Abstract

This document describes a method to allow DNS Operators to more easily update DNSSEC Key Signing Keys using the DNS as a communication channel. The technique described is aimed at delegations in which it is currently hard to move information from the Child to the Parent.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7344.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
   1.1. Terminology
   1.2. Requirements Notation
2. Background
   2.1. DNS Delegations
   2.2. Relationship between Parent and Child DNS Operators
        2.2.1. Solution Space
        2.2.2. DNSSEC Key Change Process
3. CDS (Child DS) and CDNSKEY (Child DNSKEY) Record Definitions
   3.1. CDS Resource Record Format
   3.2. CDNSKEY Resource Record Format
4. Automating DS Maintenance with CDS/CDNSKEY Records
   4.1. CDS and CDNSKEY Processing Rules
5. CDS/CDNSKEY Publication
6. Parent-Side CDS/CDNSKEY Consumption
   6.1. Detecting a Changed CDS/CDNSKEY
        6.1.1. CDS/CDNSKEY Polling
        6.1.2. Polling Triggers
   6.2. Using the New CDS/CDNSKEY Records
7. IANA Considerations
8. Privacy Considerations
9. Security Considerations
10. Acknowledgements
11. References
    11.1. Normative References
    11.2. Informative References
Appendix A. RRR Background
Appendix B. CDS Key Rollover Example
1. Introduction

The first time a DNS Operator signs a zone, they need to communicate the keying material to their Parent through some out-of-band method to complete the chain of trust. Depending on the desires of the Parent, the Child might send their DNSKEY record, a DS record, or both. Each time the Child changes the key that is represented in the Parent, the updated and/or deleted key information has to be communicated to the Parent and published in the Parent’s zone. How this information is sent to the Parent depends on the relationship the Child has with the Parent. In many cases this is a manual process -- and not an easy one. For each key change, there may be up to two interactions with the Parent. Any manual process is susceptible to mistakes and/or errors. In addition, due to the annoyance factor of the process, Operators may avoid changing keys or skip needed steps to publish the new DS at the Parent.

DNSSEC provides data integrity to information published in DNS; thus, DNS publication can be used to automate maintenance of delegation information. This document describes a method to automate publication of subsequent DS records after the initial one has been published. Readers are expected to be familiar with DNSSEC, including [RFC4033], [RFC4034], [RFC4035], [RFC5011], and [RFC6781].

This document outlines a technique in which the Parent periodically (or upon request) polls its signed Children and automatically publishes new DS records. To a large extent, the procedures this document follows are as described in [RFC6781], Section 4.1.2. This technique is designed to be friendly both to fully automated tools and humans. Fully automated tools can perform all the actions needed without human intervention and thus can monitor when it is safe to move to the next step.

The solution described in this document only allows transferring information about DNSSEC keys (DS and DNSKEY) from the Child to the Parental Agent. It lists exactly what the Parent should publish and allows for publication of standby keys. A different protocol, [CPSYNC-DNS], can be used to maintain other important delegation information, such as NS and glue records. These two protocols have been kept as separate solutions because the problems are fundamentally different and a combined solution is overly complex.

This document describes a method for automating maintenance of the delegation trust information and proposes a polled/periodic trigger for simplicity. Some users may prefer a different trigger, for example, a button on a web page, a REST interface, or a DNS NOTIFY. These alternative triggers are not discussed in this document.

This proposal does not include all operations needed for the maintenance of DNSSEC key material, specifically the initial introduction or complete removal of all keys. Because of this, alternate communications mechanisms must always exist, potentially introducing more complexity.

1.1. Terminology

The terminology we use is defined in this section. The highlighted roles are as follows:

- **Child**: The entity on record that has the delegation of the domain from the Parent.
- **Parent**: The domain in which the Child is registered.
- **Child DNS Operator**: The entity that maintains and publishes the zone information for the Child DNS.
- **Parental Agent**: The entity that the Child has a relationship with to change its delegation information.
- **Provisioning System**: A system that the Operator of the master DNS server operates to maintain the information published in the DNS. This includes the systems that sign the DNS data.
- **CDS/CDNSKEY**: This notation refers to CDS and/or CDNSKEY, i.e., one or both.

1.2. Requirements Notation

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

2. Background

2.1. DNS Delegations

DNS operation consists of delegations of authority. For each delegation, there are (most of the time) two parties: the Parent and the Child. The Parent publishes information about the delegations to the Child; for the name servers, it publishes an NS [RFC1035] Resource Record Set (RRset) that lists a hint for name servers that are authoritative for the Child. The Child also publishes an NS RRset, and this set is the authoritative list of name servers to the Child zone.

The second RRset the Parent sometimes publishes is the DS [RFC4034] set. The DS RRset provides information about the DNSKEY(s) that the Child has told the Parent it will use to sign its DNSKEY RRset. In DNSSEC, a trust relationship between zones is provided by the following chain: Parent DNSKEY --> DS --> Child DNSKEY.

A prior proposal [AUTO-CPSYNC] suggested that the Child send an "update" to the Parent via a mechanism similar to DNS UPDATE. The main issue became: how does the Child find the actual Parental Agent/server to send the update to? While that could have been solved via technical means, it failed to reach consensus. There is also a similar proposal in [PARENT-ZONES].

As the DS record can only be present at the Parent [RFC4034], some other method is needed to automate which DNSKEYs are picked to be represented in the Parent zone’s DS records. One possibility is to use flags in the DNSKEY record. If the Secure Entry Point (SEP) bit is set, this indicates that the DNSKEY is intended for use as a secure entry point. This DNSKEY signs the DNSKEY RRset, and the Parental Agent can calculate DS records based on that. But this fails to meet some operating needs: the Child has no influence on what DS digest algorithms are used, and DS records can only be published for keys that are in the DNSKEY RRset; thus, this technique would not be compatible with Double-DS rollover [RFC6781].

2.2. Relationship between Parent and Child DNS Operators

In practical application, there are many different relationships between the Parent and Child DNS Operators. The type of relationship affects how the Child DNS Operator communicates with the Parent. This section will highlight some of the different situations but is by no means a complete list.

Different communication paths:

- Direct/API: The Child can change the delegation information via automated/scripted means. The Extensible Provisioning Protocol (EPP) [RFC5730], used by many Top-Level Domains (TLDs), is an example of this. Other examples are web-based programmatic interfaces that Registrars make available to their Resellers.
- User Interface: The Child uses a web site set up by the Parental Agent for updating delegation information.
- Indirect: The communication has to be transmitted via an out-of-band mechanism between two parties, such as by email or telephone. This is common when the Child DNS Operator is neither the Child itself nor the Registrar for the domain, but a third party.
- Multi-step Combinations: The information flows through an intermediary. It is possible, but unlikely, that all the steps are automated via APIs and there are no humans involved.

A domain name holder (Child) may operate its own DNS servers or outsource the operation. While we use the word “Parent” as singular, a Parent can consist of a single entity or a composite of many discrete parts that have rules and roles. We refer to the entity that the Child corresponds with as the Parent.

An organization (such as an enterprise) may delegate parts of its namespace to be operated by a group that is not the same as that which operates the organization’s DNS servers. In some of these cases, the flow of information is handled either in an ad hoc manner or via some corporate mechanism; this can range from email to a fully automated operation.

2.2.1. Solution Space

This document is aimed at the cases in which there is a separation between the Child and Parent. A further complication is when the Child DNS Operator is not the Child. There are two common cases of this:

a) The Parental Agent (e.g., Registrar) handles the DNS operation.
b) A third party takes care of the DNS operation.

If the Parental Agent is the DNS Operator, life is much easier; the Parental Agent can inject any delegation changes directly into the Parent’s provisioning system. The techniques described below are not needed in the case when the Parental Agent is the DNS Operator. In the case of a third-party DNS Operator, the Child either needs to relay changes in DNS delegation or give the Child DNS Operator access to its delegation/registration account.

Some Parents want the Child to express their DNSKEYs in the form of DS records, while others want to receive the DNSKEY records and calculate the DS records themselves. There is no consensus on which method is better; both have good reasons to exist. This solution is DS vs. DNSKEY agnostic and allows operation with either.

2.2.2. DNSSEC Key Change Process

After a Child DNS Operator first signs the zone, there is a need to interact with the Parent, for example, via a delegation account interface to upload or paste in the zone’s DS information. This action of logging in through the delegation account user interface authenticates that the user is authorized to change delegation information for the Child published in the Parent zone. In the case where the Child DNS Operator does not have access to the registration account, the Child needs to perform the action.

At a later date, the Child DNS Operator may want to publish a new DS record in the Parent, either because they are changing keys or because they want to publish a standby key. This involves performing the same process as before. Furthermore, when this is a manual process with cut and paste, operational mistakes will happen -- or worse, the update action will not be performed at all.

The Child DNS Operator may also introduce new keys and can do so when old keys exist and can be used. The Child may also remove old keys, but this document does not support removing all keys. This is to avoid making signed zones unsigned. The Child may not enroll the initial key or introduce a new key when there are no old keys that can be used (without some additional out-of-band validation of the keys) because there is no way to validate the information.

3. CDS (Child DS) and CDNSKEY (Child DNSKEY) Record Definitions

This document specifies two new DNS resource records, CDS and CDNSKEY.
These records are used to convey, from one zone to its Parent, the desired contents of the zone’s DS resource record set residing in the Parent zone. The CDS and CDNSKEY resource records are published in the Child zone and give the Child control of what is published for it in the parental zone. The Child can publish these manually, or they can be automatically maintained by DNS provisioning tools. The CDS/CDNSKEY RRset expresses what the Child would like the DS RRset to look like after the change; it is a "replace" operation, and it is up to the software that consumes the records to translate that into the appropriate add/delete operations in the provisioning systems (and in the case of CDNSKEY, to generate the DS from the DNSKEY). If neither CDS nor CDNSKEY RRset is present in the Child, this means that no change is needed.

3.1. CDS Resource Record Format

The wire and presentation format of the Child DS (CDS) resource record is identical to the DS record [RFC4034]. IANA has allocated RR code 59 for the CDS resource record via Expert Review [DNS-TRANSPORT]. The CDS RR uses the same registries as DS for its fields. No special processing is performed by authoritative servers or by resolvers, when serving or resolving. For all practical purposes, CDS is a regular RR type.

3.2. CDNSKEY Resource Record Format

The wire and presentation format of the CDNSKEY ("Child DNSKEY") resource record is identical to the DNSKEY record. IANA has allocated RR code 60 for the CDNSKEY resource record via Expert Review. The CDNSKEY RR uses the same registries as DNSKEY for its fields. No special processing is performed by authoritative servers or by resolvers, when serving or resolving. For all practical purposes, CDNSKEY is a regular RR type.

4. Automating DS Maintenance with CDS/CDNSKEY Records

CDS/CDNSKEY resource records are intended to be "consumed" by delegation trust maintainers. The use of CDS/CDNSKEY is OPTIONAL. If the Child publishes either the CDS or the CDNSKEY resource record, it SHOULD publish both. If the Child knows which the Parent consumes, it MAY choose to only publish that record type (for example, some Children wish the Parent to publish a DS, but they wish to keep the DNSKEY "hidden" until needed). If the Child publishes both, the two RRsets MUST match in content.

4.1. CDS and CDNSKEY Processing Rules

If there is neither CDS nor CDNSKEY RRset in the Child, this signals that no change should be made to the current DS set. This means that, once the Child and Parent are in sync, the Child DNS Operator MAY remove all CDS and CDNSKEY resource records from the zone. The Child DNS Operator may choose to do this to decrease the size of the zone or to decrease the workload for the Parent (if the Parent receives no CDS/CDNSKEY records, it can go back to sleep). If the Parent does receive a CDS or CDNSKEY RRset, it needs to check them against what is currently published (see Section 5).

The following acceptance rules apply to the CDS and CDNSKEY resource records:

- Location: MUST be at the Child zone apex.
- Signer: MUST be signed with a key that is represented in both the current DNSKEY and DS RRsets, unless the Parent uses the CDS or CDNSKEY RRset for initial enrollment; in that case, the Parent validates the CDS/CDNSKEY through some other means (see Section 6.1 and the Security Considerations).
- Continuity: MUST NOT break the current delegation if applied to the DS RRset.

If any of these conditions fail, the CDS or CDNSKEY resource record MUST be ignored, and this error SHOULD be logged.
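To make the checks concrete, the rules above can be sketched as a small validation routine. The following Python fragment is our illustration, not part of the specification: records are reduced to owner names and key tags, the `signer_tags`/`dnskey_tags`/`ds_tags` inputs are assumed to come from a real DNSSEC library, and the Continuity check is deliberately simplified.

```python
# Illustrative sketch of the Section 4.1 acceptance rules; not normative.
# A real implementation would derive the inputs from a DNSSEC library.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChildRecord:
    name: str      # owner name of the CDS/CDNSKEY record
    key_tag: int   # key tag of the key the record describes

def accept_cds(child_apex, cds_records, signer_tags, dnskey_tags, ds_tags,
               log=print):
    """Return True if a candidate CDS/CDNSKEY RRset passes the rules.

    signer_tags  -- key tags with valid RRSIGs over the candidate RRset
    dnskey_tags  -- key tags in the Child's current DNSKEY RRset
    ds_tags      -- key tags in the Parent's current DS RRset
    """
    # Location: MUST be at the Child zone apex.
    if any(r.name != child_apex for r in cds_records):
        log("CDS/CDNSKEY not at zone apex; ignoring")
        return False
    # Signer: MUST be signed with a key present in both the current
    # DNSKEY and DS RRsets (initial enrollment is not modeled here).
    if not set(signer_tags) & set(dnskey_tags) & set(ds_tags):
        log("not signed by a currently trusted key; ignoring")
        return False
    # Continuity (simplified): the proposed DS set must still point at
    # at least one key present in the Child's DNSKEY RRset.
    if not {r.key_tag for r in cds_records} & set(dnskey_tags):
        log("change would break the current delegation; ignoring")
        return False
    return True
```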
5. CDS/CDNSKEY Publication

The Child DNS Operator publishes CDS/CDNSKEY RRset(s). In order to be valid, the CDS/CDNSKEY RRset(s) MUST be compliant with the rules in Section 4.1. When the Parent DS is in sync with the CDS/CDNSKEY RRset(s), the Child DNS Operator MAY delete the CDS/CDNSKEY RRset(s); the Child can determine if this is the case by querying for DS records in the Parent.

6. Parent-Side CDS/CDNSKEY Consumption

The CDS/CDNSKEY RRset(s) SHOULD be used by the Parental Agent to update the DS RRset in the Parent zone. The Parental Agent for this uses a tool that understands the CDS/CDNSKEY signing rules in Section 4.1, so it might not be able to use a standard validator. The Parent MUST choose to use either CDNSKEY or CDS resource records as its default updating mechanism. The Parent MAY only accept either CDNSKEY or CDS, but it MAY also accept both so it can use the other in the absence of the default updating mechanism; it MUST NOT expect there to be both.

6.1. Detecting a Changed CDS/CDNSKEY

How the Parental Agent gets the CDS/CDNSKEY RRset may differ. Below are two examples of how this can take place.

Polling: The Parental Agent operates a tool that periodically checks each of the Children that has a DS record to see if there is a CDS or CDNSKEY RRset.

Pushing: The delegation user interface has a button (Fetch DS) that, when pushed, performs the CDS/CDNSKEY processing. If the Parent zone does not contain DS for this delegation, then the "push" SHOULD be ignored. If the Parental Agent displays the contents of the CDS/CDNSKEY to the user and gets confirmation that this represents their key, the Parental Agent MAY use this for initial enrollment (when the Parent zone does not contain the DS for this delegation).

In either case, the Parental Agent MAY apply additional rules that defer the acceptance of a CDS/CDNSKEY change. These rules may include a condition that the CDS/CDNSKEY remains in place and valid for some time period before it is accepted. It may be appropriate in the "Pushing" case to assume that the Child is ready and thus accept changes without delay.

6.1.1. CDS/CDNSKEY Polling

This is the only defined use of CDS/CDNSKEY resource records in this document. There are limits to the scalability of polling techniques; thus, some other mechanism is likely to be specified later that addresses CDS/CDNSKEY resource record usage in the situation where polling runs into scaling issues. Having said that, polling will work in many important cases such as enterprises, universities, and smaller TLDs.

In many regulatory environments, the Registry is prohibited from talking to the Registrant. In most of these cases, the Registrant has a business relationship with the Registrar, so the Registrar can offer this as a service.

If the CDS/CDNSKEY RRset(s) do not exist, the Parental Agent MUST take no action. Specifically, it MUST NOT delete or alter the existing DS RRset.

6.1.2. Polling Triggers

It is assumed that other mechanisms will be implemented to trigger the Parent to look for an updated CDS/CDNSKEY RRset. As the CDS/CDNSKEY resource records are validated with DNSSEC, these mechanisms can be unauthenticated. As an example, a Child could telephone its Parent and request that it process the new CDS or CDNSKEY resource records, or an unauthenticated POST could be made to a web server (with rate-limiting). Other documents can specify the trigger conditions.
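For concreteness, one polling pass in "Parent calculates DS" mode might look like the sketch below. This is our illustration rather than part of the specification; it uses the dnspython library, the child list is a placeholder, and the DNSSEC validation and Section 4.1 acceptance checks discussed above are omitted.

```python
# Sketch of a Parental Agent polling pass (illustrative only).
# Requires dnspython; DNSSEC validation, acceptance checks, and
# error handling beyond "no answer" are intentionally omitted.
import dns.resolver
import dns.dnssec

def poll_child(child_zone):
    """Fetch the Child's CDNSKEY RRset and derive candidate DS records.

    Returns None when the Child publishes no CDS/CDNSKEY, which signals
    that the existing DS RRset MUST be left untouched."""
    try:
        answer = dns.resolver.resolve(child_zone, "CDNSKEY")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    # CDNSKEY shares the DNSKEY wire format, so recent dnspython
    # versions can hash it directly into a DS record.
    return [dns.dnssec.make_ds(child_zone, key, "SHA256")
            for key in answer]

# Placeholder list of signed Children that already have a DS published.
for child in ["example.org.", "example.net."]:
    candidate_ds = poll_child(child)
    if candidate_ds is not None:
        print(child, "proposes DS RRset:")
        for ds in candidate_ds:
            print("  ", ds)
```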
6.2. Using the New CDS/CDNSKEY Records

Regardless of how the Parental Agent detected changes to a CDS/CDNSKEY RRset, the Parental Agent SHOULD use a DNSSEC validator to obtain a validated CDS/CDNSKEY RRset from the Child zone. A NOT RECOMMENDED exception to this is if the Parent performs some additional validation on the data to confirm that it is the "correct" key.

The Parental Agent MUST ensure that previous versions of the CDS/CDNSKEY RRset do not overwrite more recent versions. This MAY be accomplished by checking that the signature inception in the Resource Record Signature (RRSIG) for the CDS/CDNSKEY RRset is later and/or that the serial number on the Child’s Start of Authority (SOA) is greater. This may require the Parental Agent to maintain some state information.

The Parental Agent MAY take extra security measures. For example, to mitigate the possibility that a Child’s Key Signing Key (KSK) has been compromised, the Parental Agent may inform (by email or other methods) the Child DNS Operator of the change. However, the precise out-of-band measures that a Parent zone takes are outside the scope of this document.

Once the Parental Agent has obtained a valid CDS/CDNSKEY RRset, it MUST check the publication rules from Section 4.1. In particular, the Parental Agent MUST check the Continuity rule and do its best not to invalidate the Child zone. Once checked, if the information in the CDS/CDNSKEY and DS differ, it may apply the changes to the Parent zone. If the Parent consumes CDNSKEY, the Parent should calculate the DS before doing this comparison.

6.2.1. Parent Calculates DS

There are cases where the Parent wants to calculate the DS record due to policy reasons. In this case, the Child publishes CDNSKEY records, and the Parent calculates the DS records on behalf of the Children. When a Parent operates in "calculate DS" mode, it can operate in one of two sub-modes:

full: The Parent only publishes DS records it calculates from DNSKEY records.

augment: The Parent will make sure there are DS records for the digest algorithm(s) it requires.

In the case where the Parent fetches the CDNSKEY RRset and calculates the DS, the resulting DS can differ from the CDS published by the Child. It is expected that the differences are only due to the different set of digest algorithms used.

7. IANA Considerations

IANA has assigned RR Type code 59 for the CDS resource record. This was done for a draft version whose content was later incorporated into this document [DNS-TRANSPORT]. This document is the reference for the CDS RRtype.

IANA has assigned an RR Type for the CDNSKEY as described below:

Type: CDNSKEY
Value: 60
Meaning: DNSKEY(s) the Child wants reflected in DS
Reference: This document

8. Privacy Considerations

All of the information handled or transmitted by this protocol is public information published in the DNS.

9. Security Considerations

This work is for the normal case; when things go wrong there is only so much that automation can fix. If the Child breaks DNSSEC validation by removing all the DNSKEYs that are represented in the DS set, its only repair actions are to contact the Parent or restore the DNSKEYs in the DS set.

In the event of a compromise of the server or system generating signatures for a zone, an attacker might be able to generate and publish new CDS/CDNSKEY resource records. The modified CDS/CDNSKEY records will be picked up by this technique and may allow the attacker to extend the effective time of his attack.
If there is a delay in accepting changes to DS, as in [RFC5011], then the attacker needs to hope his activity is not detected before the DS in the Parent is changed. If this type of change takes place, the Child needs to contact the Parent (possibly via a Registrar web interface) and remove any compromised DS keys.

A compromise of the account with the Parent (e.g., Registrar) will not be mitigated by this technique, as the "new Registrant" can delete or modify the DS records at will.

While it may be tempting, the techniques specified in this document SHOULD NOT be used for initial enrollment of keys since there is no way to ensure that the initial key is the correct one. If it is used, strict rules for inclusion of keys -- such as hold-down times, challenge data inclusion, or similar -- MUST be used along with some kind of challenge mechanism. A Child cannot use this mechanism to go from signed to unsigned (publishing an empty CDS/CDNSKEY RRset means no change should be made in the Parent).

The CDS RR type should allow for enhanced security by simplifying the process. Since key change is automated, updating a DS RRset by other means may be regarded as unusual and subject to extra security checks.

As this introduces a new mechanism to update information in the Parent, it MUST be clear who is fetching the records and creating the appropriate records in the Parent zone. Specifically, some operations may use mechanisms other than what is described here. For example, a Registrar may assume that it is maintaining the DNSSEC key information in the Registry and may have this cached. If the Registry is fetching the CDS/CDNSKEY RRset, then the Registry and Registrar may have different views of the DNSSEC key material; the result of such a situation is unclear. Therefore, this mechanism SHOULD NOT be used to bypass intermediaries that might cache information and, because of that, get the wrong state.

If there is a failure in applying changes in the Child zone to all DNS servers listed in either the Parent or Child NS set, it is possible that the Parental Agent may get confused, either because it gets different answers on different checks or because CDS RR validation fails. In the worst case, the Parental Agent performs an action reversing a prior action after the Child signing system decides to take the next step in the key change process, resulting in a broken delegation.

DNS is a loosely coherent distributed database with local caching; therefore, it is important to allow old information to expire from caches before deleting DS or DNSKEY records. Similarly, it is important to allow new records to propagate through the DNS before use (see [RFC6781]).

It is common practice for users to outsource their DNS hosting to a third-party DNS provider. In order for that provider to be able to maintain the DNSSEC information, some users give the provider their Registrar login credentials (which obviously has negative security implications). Deploying the solution described in this document allows third-party DNS providers to maintain the DNSSEC information without Registrants giving their Registrar credentials, thereby improving security.

By automating the maintenance of the DNSSEC key information (and removing humans from the process), we expect to decrease the number of DNSSEC related outages, which should increase DNSSEC deployment.
10. Acknowledgements

We would like to thank a large number of folk, including Mark Andrews, Joe Abley, Jaap Akkerhuis, Roy Arends, Doug Barton, Brian Dickson, Paul Ebersman, Tony Finch, Jim Galvin, Paul Hoffman, Samir Hussain, Tatuya Jinmei, Olaf Kolkman, Stephan Lagerholm, Cricket Liu, Matt Larson, Marco Sanz, Antoin Verschuren, Suzanne Woolf, Paul Wouters, John Dickinson, Timothe Litt, and Edward Lewis. Special thanks to Wes Hardaker for contributing significant text and creating the complementary (CSYNC) solution, and to Patrik Faltstrom, Paul Hoffman, Matthijs Mekking, Mukund Sivaraman, and Jeremy C. Reed for text and in-depth review. Brian Carpenter provided a good Gen-ART review. There were a number of other folk with whom we discussed this document; apologies for not remembering everyone.

11. References

11.1. Normative References

[RFC1035] Mockapetris, P., "Domain names - implementation and specification", STD 13, RFC 1035, November 1987.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. Rose, "DNS Security Introduction and Requirements", RFC 4033, March 2005.

[RFC4034] Arends, R., Austein, R., Larson, M., Massey, D., and S. Rose, "Resource Records for the DNS Security Extensions", RFC 4034, March 2005.

[RFC4035] Arends, R., Austein, R., Larson, M., Massey, D., and S. Rose, "Protocol Modifications for the DNS Security Extensions", RFC 4035, March 2005.

[RFC5011] StJohns, M., "Automated Updates of DNS Security (DNSSEC) Trust Anchors", STD 74, RFC 5011, September 2007.

[RFC6781] Kolkman, O., Mekking, W., and R. Gieben, "DNSSEC Operational Practices, Version 2", RFC 6781, December 2012.

11.2. Informative References

[AUTO-CPSYNC] Mekking, W., "Automated (DNSSEC) Child Parent Synchronization using DNS UPDATE", Work in Progress, December 2010.

[CPSYNC-DNS] Hardaker, W., "Child To Parent Synchronization in DNS", Work in Progress, July 2014.

[DNS-TRANSPORT] Barwood, G., "DNS Transport", Work in Progress, June 2011.

[PARENT-ZONES] Andrews, M., "Updating Parent Zones", Work in Progress, November 2013.

[RFC5730] Hollenbeck, S., "Extensible Provisioning Protocol (EPP)", STD 69, RFC 5730, August 2009.

[RFC5910] Gould, J. and S. Hollenbeck, "Domain Name System (DNS) Security Extensions Mapping for the Extensible Provisioning Protocol (EPP)", RFC 5910, May 2010.

Appendix A. RRR Background

RRR is our shorthand for the Registry/Registrar/Registrant model of Parent-Child relationships. In the RRR world, the different parties are frequently from different organizations. In the single enterprise world, there are also organizational, geographical, and cultural separations that affect how information flows from a Child to the Parent. Due to the complexity of the different roles and interconnections, automation of delegation information has not yet occurred. There have been proposals to automate this, in order to improve the reliability of the DNS. These proposals have not gained enough traction to become standards.

For example, in many of the TLD cases, there is the RRR model (Registry/Registrar/Registrant). The Registry operates DNS for the TLD, and the Registrars accept registrations and place information into the Registry’s database. The Registrant only communicates with the Registrar; frequently, the Registry is not allowed to communicate with the Registrant. In that case, as far as the Registrant is concerned, the Registrar is the same entity as the Parent. In many RRR cases, the Registrar and Registry communicate via EPP [RFC5730] and use the EPP DNSSEC extension [RFC5910]. In a number of Country Code TLDs (ccTLDs), there are other mechanisms in use as well as EPP, but in general, there seems to be a movement towards EPP usage when DNSSEC is enabled in the TLD.

Appendix B. CDS Key Rollover Example

This section shows an example of how CDS is used when performing a KSK rollover. This example will demonstrate the Double-DS rollover method from Section 4.1.2 of [RFC6781]. Other rollovers using CDNSKEY and double KSK are left as an exercise to the reader. The table below does not reflect the Zone Signing Keys (ZSKs), as they do not matter during KSK rollovers.
The wait steps highlight which RRset needs to expire from caches before progressing to the next step.

<table>
<thead>
<tr> <th>Step</th> <th>State</th> <th>Parent DS</th> <th>Child DNSKEY</th> <th>DNSKEY and CDS signer</th> <th>Child CDS</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>Add CDS</td> <td>A</td> <td>A</td> <td>A</td> <td>AB</td> </tr>
<tr> <td>2</td> <td>Wait for DS change</td> <td>A</td> <td>A</td> <td>A</td> <td>AB</td> </tr>
<tr> <td>3</td> <td>Updated DS</td> <td>AB</td> <td>A</td> <td>A</td> <td>AB</td> </tr>
<tr> <td>4</td> <td>Wait &gt; DS TTL</td> <td>AB</td> <td>A</td> <td>A</td> <td>AB</td> </tr>
<tr> <td>5</td> <td>Actual Rollover</td> <td>AB</td> <td>B</td> <td>B</td> <td>AB</td> </tr>
<tr> <td>6</td> <td>Child Cleanup</td> <td>AB</td> <td>B</td> <td>B</td> <td>AB</td> </tr>
<tr> <td></td> <td>Parent cleans</td> <td>B</td> <td>B</td> <td>B</td> <td>B</td> </tr>
<tr> <td></td> <td>Optional CDS delete</td> <td>B</td> <td>B</td> <td>B</td> <td></td> </tr>
</tbody>
</table>

Table 1: States

Authors’ Addresses

Warren Kumari
Google
1600 Amphitheatre Parkway
Mountain View, CA 94043
US
EMail: warren@kumari.net

Olafur Gudmundsson
OGUD Consulting
3821 Village Park Dr.
Chevy Chase, MD 20815
US
EMail: ogud@ogud.com

George Barwood
33 Sandpiper Close
Gloucester GL2 4LZ
United Kingdom
EMail: george.barwood@blueyonder.co.uk
{"Source-Url": "https://tools.ietf.org/pdf/rfc7344.pdf", "len_cl100k_base": 7500, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 35217, "total-output-tokens": 8505, "length": "2e12", "weborganizer": {"__label__adult": 0.0003573894500732422, "__label__art_design": 0.0004279613494873047, "__label__crime_law": 0.0013628005981445312, "__label__education_jobs": 0.0013132095336914062, "__label__entertainment": 0.00015056133270263672, "__label__fashion_beauty": 0.0002009868621826172, "__label__finance_business": 0.0018873214721679688, "__label__food_dining": 0.0003066062927246094, "__label__games": 0.0007777214050292969, "__label__hardware": 0.0024929046630859375, "__label__health": 0.0004515647888183594, "__label__history": 0.0005865097045898438, "__label__home_hobbies": 0.00011724233627319336, "__label__industrial": 0.0006594657897949219, "__label__literature": 0.0004565715789794922, "__label__politics": 0.0007939338684082031, "__label__religion": 0.0004649162292480469, "__label__science_tech": 0.2342529296875, "__label__social_life": 0.00015938282012939453, "__label__software": 0.2027587890625, "__label__software_dev": 0.548828125, "__label__sports_fitness": 0.0002856254577636719, "__label__transportation": 0.0006375312805175781, "__label__travel": 0.00029540061950683594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33581, 0.03764]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33581, 0.41329]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33581, 0.8957]], "google_gemma-3-12b-it_contains_pii": [[0, 1637, false], [1637, 3660, null], [3660, 5991, null], [5991, 7597, null], [7597, 9793, null], [9793, 11800, null], [11800, 14257, null], [14257, 16421, null], [16421, 18359, null], [18359, 20603, null], [20603, 22684, null], [22684, 23977, null], [23977, 26356, null], [26356, 28793, null], [28793, 30244, null], [30244, 30506, null], [30506, 32441, null], [32441, 33581, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1637, true], [1637, 3660, null], [3660, 5991, null], [5991, 7597, null], [7597, 9793, null], [9793, 11800, null], [11800, 14257, null], [14257, 16421, null], [16421, 18359, null], [18359, 20603, null], [20603, 22684, null], [22684, 23977, null], [23977, 26356, null], [26356, 28793, null], [28793, 30244, null], [30244, 30506, null], [30506, 32441, null], [32441, 33581, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33581, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33581, null]], "pdf_page_numbers": [[0, 1637, 1], [1637, 3660, 2], [3660, 5991, 3], [5991, 7597, 4], [7597, 9793, 5], [9793, 11800, 6], [11800, 14257, 7], 
[14257, 16421, 8], [16421, 18359, 9], [18359, 20603, 10], [20603, 22684, 11], [22684, 23977, 12], [23977, 26356, 13], [26356, 28793, 14], [28793, 30244, 15], [30244, 30506, 16], [30506, 32441, 17], [32441, 33581, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33581, 0.04348]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
68fe7bf09dd7f028204b485a8912ed045a565b08
SPECIFICATIONS FOR COMPUTER-AIDED AND ON-LINE GROUP CONFERENCING

Jacques Vallee, et al
Institute for the Future

Prepared for: Advanced Research Projects Agency
20 May 1974

DISTRIBUTED BY: NTIS National Technical Information Service, U.S. DEPARTMENT OF COMMERCE, 5285 Port Royal Road, Springfield Va. 22151

This report proposes a set of features for a future computer conferencing system that would employ the computer and network characteristics currently available, or at the planning stage, on the ARPA network. It addresses the problem of on-line group conferencing from the point of view of user language structure, differentiating between conference participants and activity organizers. The system outlined here is assumed to be planned for implementation in assembly language, its program size not exceeding the size of the existing FORUM system.

Key Words: Communications media, Computer conferencing, Computer hardware, Computer software, Expert judgment, FORUM, Network conferencing, Office automation, Policy formulation, Remote conferencing, Teleconferencing, Voice conferencing.

SPECIFICATIONS FOR COMPUTER-AIDED AND ON-LINE GROUP CONFERENCING

Special Report for: Contract No. DAHC 15 72 C 0165, ARPA Policy-Formulation Interrogation Network
Sponsored by: Advanced Research Projects Agency
Principal investigator: Jacques Vallee (415) 854-6322
Contractor: Institute for the Future, 2740 Sand Hill Road, Menlo Park, California 94025
Effective date of contract: 6 March 1972
Contract expiration date: 30 June 1974 (Amended Under Modification No. P00003)
Amount of contract: $340,000.00 (Amended Under Modification No. P00003)
ARPA order number: 2005
Program code number: A 74880

The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Advanced Research Projects Agency of the U.S. Government.

A. INTRODUCTION

The program we specify in this document constitutes a means of organizing, storing, distributing, and retrieving textual and numerical inputs (referred to in this document as entries) such that a user may, for example, communicate with others both in real time and asynchronously (store and forward), extract information from others in questionnaire fashion, or create small data bases for private use.
It identifies each user by first and last names and a personal password, which is set or changed by the user himself.

The operational unit will be known as an activity. The activity consists of an agenda, a specification of the participants who may use the activity, and subdivisions within the activity known as parts. Each part is a distinct "message area" in which entries are made and viewed by the users designated in the user specification list. The parts of an activity allow entries to be made by straightforward typing or by the submission of a text file prepared and stored elsewhere in the host computer.

The organizer of an activity is that user who creates an activity, designates how many parts it shall have, designates the topics which are to be discussed or actions to be taken in the parts, and designates who shall be allowed to attend the activity. Any user may create an activity, of which he immediately becomes the organizer.

The participant of an activity is a user specified by the organizer of that activity who may view the entries in the various parts as well as enter new entries into these message areas.

The editor of an activity is a user designated by the organizer who has access to all of the commands and functions of the participant role as well as the ability to edit entries that have been submitted in a part—rearrange, copy, or delete entries. The only distinction between the roles of editor and organizer is that the organizer has the ability to revise the parts of an activity, revise the list of users, and change the roles designated to users.

The observer role allows a user to participate in an activity only in a passive manner. The observer may view the entries made but may not make any new entries.

In addition to the activities to which a user has access, there is a private message mode in which the user may send a message to any other user of the program. A personal "file" of private messages sent and received is kept. Only the user has access to these private messages and he may delete, rearrange, or move messages from his private file. Thus, a user need not enter an activity to use the system as a "mailbox". It should be noted that although the private message mode is provided as separate from activities, a user may also send and receive private messages while in an activity. It should also be noted that a user may leave an activity at any time to either join another activity or go into the private message mode. Similarly, the user may leave the private message mode at any time and join any activity in which he is a valid participant.

B. USE OF THE PROGRAM

The program is designed to be operated from terminals of various kinds. They include hard-copy terminals that print on paper and display screen terminals (CRTs—or cathode ray tube terminals). The program will ask the user for the type of terminal he is operating.

After starting the program, you will first be asked to type your last name, followed by a carriage return [CR]. If you have not used it before, the program will ask you to type your first name (followed by a carriage return [CR]), and you will then be asked to select and type a personal password. Passwords must be alphabetic or numeric characters. If you have registered previously, you will be asked to give your last name and password. The system will then ask you to enter information about the type of terminal equipment you are using.
Having given your name, password, and terminal type, the program will then print the list of activities which you may attend. You may elect to join an activity at this point by typing the number associated with the chosen activity followed by a carriage return [CR]. It is important to note that the numbers associated with the various activities are relative to your own list of valid activities. Thus, the same activity may be referenced using different numbers for different users. If you do not select an activity (i.e., you do not type a number, but rather only a carriage return [CR]), you will be placed in the private message mode.

After selecting an activity (or when placed in private message mode), you will receive all of the private messages that you have not yet seen. If you have selected an activity, you will then be placed in that activity. If you have selected private message mode, you will remain in that mode.

C. USER AIDS

Getting Help

If you are not sure what to do at any time during an activity, strike the question mark key [?] as your first character of input.

To Leave

To end your participation, simply hang up your telephone. If you want to remain on the network in other work, use the "QUIT" command. You will then be placed into the TENEX executive.

Moving from One Activity to Another

To leave one discussion part and go to another in the same activity, you may either:

1. send a private message to the system and use the commands "GO (to part) N" (where N is a number), "NEXT (part)", or "PREVIOUS (part)"; or
2. type a CONTROL-F and, after receiving a prompt [-], use the same commands listed above.

Editing Entries

Simple editing of your entries (or commands) may be done before submitting or finishing the text by using the following:

To delete the character you typed: Strike the back arrow key [←] or type a CONTROL-A. If you strike the back arrow or CONTROL-A several times, the corresponding number of characters will be erased. Blank spaces between words are considered characters.

To erase an entire entry: Strike the DEL (delete) or RUBOUT key twice prior to submitting an entry with the two final carriage returns [CR]; or type a CONTROL-X.

For users acquainted with TENEX conventions, the control character operations for program control and editing are accepted. For more powerful editing of entries, the user may use TECO text-editor commands such that the current entry may be altered. The procedure for accessing TECO commands is as follows.

1. Before completing an entry with the final two carriage returns [CR], strike the ESC (escape) or ALTMODE key. The program will respond with: Text editor:
2. At this point, the user must specify the editor commands he wishes to use by typing: TECO[CR]
3. The cursor is automatically placed at the last character of the entry being corrected.
4. The standard TECO commands with the exception of the file storing and submission are available.
5. To leave the text editor and resume typing the entry normally, one must respond to the TECO command prompt [*] as follows: *;M;S ($ indicates the ESC or ALTMODE key)

Stopping Output

You may stop output to your terminal by striking the DEL (delete) or RUBOUT key twice. The program will stop printing the current block of text and continue with the next operation.

D. JOINING AN ACTIVITY

Having selected an activity, the title of the activity and information on its structure and participants will be printed.
Since all of the parts in a conference are discussions, the following sections concern themselves with the typical operations in this process. Other processes (e.g., eliciting a number or a probability estimate) are self-explanatory and require no background other than that obtained by typing a question mark [?] when you are prompted for special information and are not sure what to do.

Making an Entry in a Discussion

While in a discussion you may make an entry at any time—simply start typing. To end an entry, strike the carriage return [CR] twice.

Making an Anonymous Entry in a Discussion

Begin your message by striking the exclamation point key [!]. (Note that ! must be the first character typed.) Type your message as you would for a standard entry, ending it with two carriage returns [CR].

Sending a Private Message

Begin your message by typing a left parenthesis [(]. (Note that ( must be the first character typed.) The system will automatically print the word "to". You should then enter the name of the recipient of your message, followed by one carriage return [CR]. The program will then prompt you for your message with a hyphen [-]. You may then begin typing the message. End your message with two carriage returns [CR].

Answering Special Questions

During the course of a discussion, a user may "ask" a question which will be printed to your terminal like a standard entry and will then prompt you for a specific type of information. If it is not clear to you what is required by the question, type a question mark [?], followed by a carriage return [CR].

Commands

Two means of accessing commands are available in the system.

1. While participating in a discussion activity, you may send a private message to the program itself rather than to another user. This allows you to access commands without leaving the discussion. Once the command action is taken by the program, you are returned to the ongoing discussion, having never really left the activity.
2. At any point in the program, you may go to command mode by typing a CONTROL-F. This command mode removes you from any conference part you are participating in and prints a prompt to your terminal. You may then type in the command text. After the command action is completed, you are either returned to the command mode (indicated by another prompt), or to a part at which the command given will explicitly place you.

E. COMMANDS AVAILABLE TO A PARTICIPANT

Information Commands

DESCRIPT Explains the use of the other commands available to a user. For example, "DESCRIPT REVIEW" will give a description of the "REVIEW" command and instructions on its use. To get a description of all commands available one may use "DESCRIPT ALL".

INFORMATION (on) Will give summarized information on any of the following.

1. "ACTIVITY #", by giving the title of the activity specified and the contents or description of the parts making up the activity. Where the title of a part is preceded by an asterisk [*], the user will find new entries which he has not seen.
2. "PART #", by giving a description of the part designated. This command assumes that the part specified is contained in the user's current activity.
3. "ALL ACTIVITIES", by listing all of the activities which you may attend and placing an asterisk [*] before each activity which contains information that you have not seen. Note that to locate the part in which new entries have been made, you should use the "INFORMATION (on) ACTIVITY #" command.

STATUS Allows the user to do any of the following.
Get the "STATUS (of) ACTIVITY #", which gives the title, contents, and the names of participants currently active in the activity. 2. Get the "STATUS (of) PARTICIPANTS", which gives the list of users eligible for a specified activity (e.g., "STATUS (of PARTICIPANTS (in) ACTIVITY #") and whether they are currently logged in, the entire list of users and their status. If you wish only to find out who is currently using the program, insert "NOW" before specifying an activity. 3. Get the status of "ALL ACTIVITIES", which gives a list of all the activities you may attend and a list of the current users of those activities. SET Allows the user to personally change his password (e.g., "SET PASSWORD (to be) -") or to change the first name listed in the master participant directory (e.g., "SET FIRSTNAME (to be) -"). Commands for Moving with the System Structures CONTINUE Returns you to the part in which you were participating prior to entering the command mode or private message mode. GO (to part) # Puts you into the part of an activity which you specify. This command assumes that you are already in the activity and simply wish to go to another section. JOIN (activity) Allows you to move from one activity or from the private message mode to another activity. If you would like a list of activities available to you, follow the "JOIN (activity)" request with a question mark (?). NEXT (part) Puts you into the next sequential part on the agenda. PREVIOUS (part) Returns you to the part preceding your current one. QUIT Will cause the program to stop, but you will remain logged into the host computer. You will be placed in the TENEX executive. To leave the program and simultaneously log out of the computer, simply hang up your telephone. Do not use the "QUIT" command and then hang up the telephone. **START (activity)** Puts you at the beginning of the activity you are currently using and lets you begin your participation as if you had issued a "JOIN (activity) #" command. **LEA'.** Will cause you to "leave" your current activity (i.e., the program will know that you are not currently active in any activity) and place you into the private message mode. When asking a question of the users of a discussion one may suppress recording of the author (thus giving anonymity to the authors) by using "ASK (the following question) SECRET IN PART #". **DELETE (entries)** Will delete from the user's personal file of private messages any entries which he designates (e.g., "DELETE (entries) PRIVATE"). Once deleted, these entries are not recoverable. If the user has organizer or editor status, he may delete entries in parts of the activity in which he has those roles. **COPY (entries)** Allows the user to make a copy of the entries designated in another part or to copy entries to be placed at another location within the same part. The options for this command are: 1. "COPY (entries) [ENTRY #] TO [ENTRY #]" 2. "COPY (entries) [ENTRY #] TO [CR]" In the latter case, the entries are copied to the end of the part. If the user specifies an entry number already used, the user may insert the entries by subdividing entry numbers. Thus, if the user wishes to copy entries 10 through 25 (16 entries in all) to reside between entry 45 and 46, he may type "COPY (entries) 10-25 TO 45.01[CR]". The sixteen entries will be numbered 45.01 through 45.16. If any of the entry numbers in this range are filled, the program will not insert the entries and will inform you why. 
You may then further subdivide by inserting the entries to 45.001; thus the entries will be numbered 45.001 through 45.016.

MOVE (entries) Operates exactly as does the "COPY" command, except that the entries moved are deleted from their original location after being placed in the new locations.

REVIEW (entries) Retrieves and displays the entries you specify. You may use any of the following options, alone or in combination.

1. "BY" and a list of participant names (or the word "ALL"). For example: REVIEW (entries) BY SMITH[CR] or REVIEW (entries) BY SMITH AND LEE[CR]
2. "IN" and a list of entry numbers (or the word "ALL"). For example: REVIEW (entries) IN 2,5-9[CR]
3. "LAST N" entries (to see only the previous entry, simply type "LAST"). For example: REVIEW (entries) LAST 3[CR]
4. "BEFORE", "ON", or "AFTER" a date. For example: REVIEW (entries) BEFORE 17-APR-73[CR] or REVIEW (entries) ON 4/1/73[CR]
5. "RE" and a text string in quotation marks ["]. For example: REVIEW (entries) RE "ENERGY"[CR] The program will retrieve all entries in the current discussion activity containing that text string.

If you do not wish to review the complete heading (author's name, date, and time stamp) and text of the entries you have specified, you may use any of the following restrictions, alone or in combination.

1. "BY FIRST LINE" or "BY FIRST N LINES"
2. "NO HEADING"
3. "NO TEXT"

F. COMMANDS AVAILABLE TO AN ORGANIZER

Any participant may create an activity, of which he automatically becomes the organizer, with all the capabilities of that role. It should be noted that an organizer of one activity does not necessarily have the role of organizer in another unless he has created it or has been assigned that role by an organizer.

CREATE (activity) Is the command used to set up the structure and designate the participants of an activity. This command will cause the following series of questions to be asked of the organizer:

1. Title
Please enter a descriptive title for this activity. (The title may be longer than one line, but only the first line will be displayed on the list of available conferences. End the title with two carriage returns [CR].)

2. Privacy
Do you wish the activity to be private? (Answer yes or no followed by a carriage return [CR]. If you answer no, any user will be allowed to join the activity, even if his name is not specified on the participant list. If you answer yes, only those users specified by the organizer will be allowed to participate.)

3. Participants
Please enter the names of the other participants.
# 1 (org): Last name, First name
# Name :
(The organizer's name is placed in the list automatically. Thereafter, the organizer will be prompted for each participant's name. If no more participants are to be added, simply strike the carriage return [CR].)

4. Parts
Please type in the subdivisions for the activity, beginning each one with the word "PART". Type "END" when you are finished.
- PART 1 [CR]
Please enter descriptive title for Part 1.
- Title
- Title [CR]
- [CR]
Note: only the first line of the title will be displayed with the contents.
- PART 2 [CR]
Please...
- END

This will create an activity with two parts, both of which are open discussions. While creating an activity, one may insert more structure into the parts with the following creation commands:

1. "INSERT (entry)", will place an entry attributed to the organizer at the first available entry number.
2. "ASK (the following question)", will insert a "question entry" at the first available entry.
"FEEDBACK (results from entry) [ENTRY #]", will display the aggregate results from a question entry to all the participants. 4. "PROCEED (to part) [PART #]", will cause the part under which this command is used to be effectively "closed". That is, after entries are made or questions asked by the organizer, he may use this command to prevent participants from making any other entries in the part. This command is most useful in structuring questionnaire-type activities or parts. **ASSIGN** Will allow the organizer to assign a role to a user. The default setting during the creation sequence is "participant", but after setup is complete, the organizer may assign the role of observer, editor, participant, or even organizer to any user of the activity. It should be noted that there is no restriction as to the number of users who may have editor roles. **REVISE** Will allow an organizer to change any section of the creation sequence after the activity is set up.
{"Source-Url": "http://www.dtic.mil/dtic/tr/fulltext/u2/779064.pdf", "len_cl100k_base": 5110, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 30519, "total-output-tokens": 5659, "length": "2e12", "weborganizer": {"__label__adult": 0.0004138946533203125, "__label__art_design": 0.0008521080017089844, "__label__crime_law": 0.0015363693237304688, "__label__education_jobs": 0.028533935546875, "__label__entertainment": 0.0002187490463256836, "__label__fashion_beauty": 0.0001806020736694336, "__label__finance_business": 0.003849029541015625, "__label__food_dining": 0.000423431396484375, "__label__games": 0.000965118408203125, "__label__hardware": 0.0236663818359375, "__label__health": 0.0005040168762207031, "__label__history": 0.0006341934204101562, "__label__home_hobbies": 0.0002808570861816406, "__label__industrial": 0.00334930419921875, "__label__literature": 0.0004773139953613281, "__label__politics": 0.0006499290466308594, "__label__religion": 0.00043892860412597656, "__label__science_tech": 0.18798828125, "__label__social_life": 0.00021207332611083984, "__label__software": 0.248046875, "__label__software_dev": 0.4951171875, "__label__sports_fitness": 0.00030875205993652344, "__label__transportation": 0.0010728836059570312, "__label__travel": 0.000354766845703125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22008, 0.02318]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22008, 0.33408]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22008, 0.92173]], "google_gemma-3-12b-it_contains_pii": [[0, 310, false], [310, 859, null], [859, 2125, null], [2125, 2980, null], [2980, 4994, null], [4994, 7137, null], [7137, 8629, null], [8629, 10181, null], [10181, 11912, null], [11912, 13826, null], [13826, 15712, null], [15712, 17692, null], [17692, 18828, null], [18828, 20300, null], [20300, 21518, null], [21518, 22008, null]], "google_gemma-3-12b-it_is_public_document": [[0, 310, true], [310, 859, null], [859, 2125, null], [2125, 2980, null], [2980, 4994, null], [4994, 7137, null], [7137, 8629, null], [8629, 10181, null], [10181, 11912, null], [11912, 13826, null], [13826, 15712, null], [15712, 17692, null], [17692, 18828, null], [18828, 20300, null], [20300, 21518, null], [21518, 22008, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22008, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22008, null]], "pdf_page_numbers": [[0, 310, 1], [310, 859, 2], [859, 2125, 3], [2125, 2980, 4], [2980, 4994, 5], [4994, 7137, 6], [7137, 8629, 7], [8629, 10181, 8], [10181, 11912, 9], [11912, 13826, 10], [13826, 15712, 11], [15712, 17692, 12], [17692, 18828, 13], 
[18828, 20300, 14], [20300, 21518, 15], [21518, 22008, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22008, 0.06796]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
190a2c23a202925afad55bfc0803afe765e4ab53
Breakfast of Champions: Towards Zero-Copy Serialization with NIC Scatter-Gather

Deepti Raghavan, Philip Levis, Matei Zaharia, Irene Zhang (Stanford University and Microsoft Research)

**Abstract.** Microsecond I/O will make data serialization a major bottleneck for datacenter applications. Serialization is fundamentally about data movement: serialization libraries coalesce and flatten in-memory data structures into a single transmittable buffer. CPU-based serialization approaches will hit a performance limit due to data movement overheads and be unable to keep up with modern networks. We observe that widely deployed NICs possess scatter-gather capabilities that can be re-purposed to accelerate serialization's core task of coalescing and flattening in-memory data structures. It is possible to build a completely zero-copy, zero-allocation serialization library with commodity NICs. Doing so introduces many research challenges, including using the hardware capabilities efficiently for a wide variety of non-uniform data structures, making application memory available for zero-copy I/O, and ensuring memory safety.

**CCS Concepts.** • Networks → Programming interfaces.

**Keywords.** data serialization, kernel bypass networking, datacenters

### 1 Introduction

The microsecond era is here [5]. As Figure 1 shows, datacenter applications today can achieve microsecond packet round-trip times, reaching single-digit RTTs with kernel bypass. At these latencies, everyday systems services, like data serialization, become unaffordable bottlenecks. Data serialization [2, 3, 36–38] is important in datacenter applications. Many distributed applications [1, 33, 41], RPC libraries [13], and microservice deployments [10] rely on serialization as a communication primitive, but serialization already carries a significant performance penalty. Google reported that Protobuf [37] accounted for 5% of its datacenter cycles [18] in 2015, and we expect the problem to have worsened since. Concretely, we find that Protobuf takes 1.0 µs to serialize and deserialize a simple data structure with a single 1024-byte string. Figure 1 overlays this overhead: Protobuf serialization for this data structure adds a staggering 43% overhead to eRPC [17]. Each extra microsecond of serialization overhead significantly affects the throughput a server can achieve and the number of cores necessary to saturate the network.

The main problem is that general-purpose CPUs cannot perform serialization's core task efficiently enough. Serialization must move data, because there is a fundamental tension between the application's optimal in-memory layout and the network's optimal on-the-wire layout for a data structure. Data structures often contain pointers (e.g., trees and graphs), so applications can easily modify data structures without having to re-allocate all the memory contiguously. Serialization coalesces these scattered pointers into a contiguous buffer for transmission. Performing this data movement in software will limit throughput in modern networks, because it requires copying each field at least once and providing a buffer to store the final result. Without high-performance serialization libraries, applications are forced to hand-roll their own serialization or integrate custom hardware accelerators. Redis [31] improves CPU-based serialization by restricting its functionality, but cannot avoid the overhead required to move memory: the most complicated object Redis can serialize is a list.
On the other hand, deploying and integrating custom hardware accelerators that perform serialization [15, 28] can be difficult in today's datacenters, as it requires extra coordination between network administrators, offload developers, and application developers [22]. Our key observation is that while CPUs coalesce scattered memory regions inefficiently, widely deployed NICs already perform a similar function: scatter-gather. Scatter-gather was designed for high-performance computing, where applications frequently move large, statically-sized chunks of memory between servers. Kernel bypass exposes this NIC capability to the serialization library, but it is not obvious how to use it directly for serialization. Thus, this paper asks: How can we leverage NIC scatter-gather capabilities to build serialization libraries that keep up with modern networks? The remainder of the paper describes why existing software serialization is inefficient (§2), presents a simple use of NIC scatter-gather for serialization (§3), and discusses open research questions around building general-purpose serialization libraries with scatter-gather (§4) and related work (§5).

### 2 The Limits of Software Serialization

This section shows that CPU-based serialization cannot keep up with the peak packet processing throughput of kernel bypass I/O (§2.1), because it cannot avoid certain data movement overheads (§2.2).

#### 2.1 Software Serialization Hits a Performance Limit

To demonstrate the overhead of serialization, we benchmark three software serialization libraries [36–38] on DPDK and find that they achieve only up to 52% of DPDK's peak single-core throughput. We consider only compilation-based serialization [2, 3, 36, 37], because dynamic type inference at runtime [20, 23] (e.g., Java serialization of arbitrary Java classes) adds unaffordable overheads. We use a data structure with a single 1024-byte string field. Although the data structure is so simple that serialization is theoretically unnecessary, it captures the minimal overhead of serialization today. The experiment runs on 11 20-core dual-socket Xeon Silver 4114 2.2 GHz servers, connected by Mellanox ConnectX-5 100 Gbps NICs and an Arista 7060CX 100 Gbps switch with a minimum of 450 ns of switching latency. We use concurrent, closed-loop clients to send a serialized message to the server, which deserializes, then re-serializes the same payload and returns it to the client. We use a minimal UDP networking stack for DPDK based on LWIP [7].

We show the results in Figure 2. The "No Serialization" line removes serialization and gives the raw networking stack performance. Kernel bypass requires that packet memory lives in pinned, non-swappable pages, so the networking stack still copies application payloads into registered packet memory on transmission and copies packets into general memory on receive. The "DPDK Single Core" line removes these copies and represents the peak, zero-copy processing throughput possible with DPDK. We also include another version of Protobuf, "Protobytes", where the payload is bytes rather than a string, because Protobuf spends a significant amount of time in utf8-validation.

**Experiment Results.** FlatBuffers, the fastest serialization baseline, achieves only 5.4 Gbps, about 52% of DPDK's peak throughput of 10.4 Gbps (the highest throughput measured under 15 µs of tail latency), due to two performance gaps. Serialization itself contributes the first gap, about 3 Gbps, between FlatBuffers and No Serialization. Having the networking stack and serialization library manage memory separately contributes the 2 Gbps gap between No Serialization and DPDK Single Core. Section 2.2 breaks down these gaps.
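As a rough illustration of this closed-loop measurement, the sketch below substitutes POSIX UDP sockets for the paper's DPDK+LWIP stack, and `serialize_msg`/`deserialize_msg` are trivial copy-based stand-ins for whichever library is under test; none of these names come from the paper.

```cpp
// Closed-loop echo client measuring per-request RTT (illustrative sketch).
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

static size_t serialize_msg(const std::string &p, char *out) {
    memcpy(out, p.data(), p.size());   // trivial copy-based stand-in
    return p.size();
}
static std::string deserialize_msg(const char *in, size_t n) {
    return std::string(in, n);         // trivial copy-based stand-in
}

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(12345);                    // hypothetical echo server port
    inet_pton(AF_INET, "10.0.0.1", &srv.sin_addr);  // hypothetical server address

    std::string payload(1024, 'x');                 // single 1024-byte string field
    char buf[2048];
    std::vector<double> rtts;

    for (int i = 0; i < 100000; i++) {
        size_t len = serialize_msg(payload, buf);
        auto t0 = std::chrono::steady_clock::now();
        sendto(fd, buf, len, 0, (sockaddr *)&srv, sizeof(srv));
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        auto t1 = std::chrono::steady_clock::now();
        if (n <= 0) continue;
        deserialize_msg(buf, (size_t)n);            // client-side deserialize
        rtts.push_back(std::chrono::duration<double, std::micro>(t1 - t0).count());
    }
    std::sort(rtts.begin(), rtts.end());
    printf("median RTT: %.2f us, p99: %.2f us\n",
           rtts[rtts.size() / 2], rtts[(size_t)(rtts.size() * 0.99)]);
    close(fd);
}
```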
#### 2.2 Why is Software Serialization So Expensive?

The overhead of moving data on CPUs limits the performance of today's software serialization libraries. In-memory data structures often contain pointers, so serialization must flatten the data into a contiguous representation. Additionally, applications sometimes use serialization libraries to construct and transmit data structures on demand in response to application requests (e.g., returning the values of a range of specified keys in a key-value store).

Table 1: Breakdown of steps to serialize and deserialize a message with a single 1024-byte string field. Cap'n Proto's encode and decode are zero-copy because the in-memory buffer layout matches the eventual wire format, while Protobuf requires an expensive transformation to the wire format. Both libraries' copy-based overheads, marked by stars, scale with message size.

<table> <thead> <tr> <th>Step</th> <th>Protobuf</th> <th>Cap’n Proto</th> </tr> </thead> <tbody> <tr> <td>Initialize Data Structure</td> <td>34 ns</td> <td>408 ns</td> </tr> <tr> <td>Copy String Payload</td> <td>167 ns*</td> <td>80 ns*</td> </tr> <tr> <td>Encode to Wire Format</td> <td>351 ns*</td> <td>53 ns</td> </tr> <tr> <td>Decode from Wire Format</td> <td>491 ns*</td> <td>78 ns</td> </tr> <tr> <td>Total Overhead</td> <td><strong>1043 ns</strong></td> <td><strong>619 ns</strong></td> </tr> </tbody> </table>

All current serialization libraries, no matter their final wire format, pay the cost of the copies and allocations required for this data movement. Table 1 breaks down the serialization latencies from Figure 2 for Protobuf and Cap'n Proto (FlatBuffers behaves similarly to Cap'n Proto). After copying the field in ("Copy String Payload"), Protobuf performs an expensive transformation to the on-the-wire format. This transformation causes an additional allocation, copy, and utf8-validation during "Encode", and corresponding costs during "Decode". Cap'n Proto's "Encode" and "Decode" are cheaper because the in-memory format matches the wire format exactly, but even Cap'n Proto must allocate space for the serialized buffer ("Initialize Data Structure") and copy the payload in ("Copy String Payload") during transmission. For data structures with large payloads, data movement dominates serialization costs, while converting integers to network byte order, which few wire formats require anyway, adds minimal cost.

The second performance gap in Figure 2 comes from the firm separation between the serialization library and the networking stack, which forces the stack to copy application payloads into registered packet memory (§2.1).

### 3 Leveraging the NIC for Serialization

Speeding up serialization requires reducing CPU data movement. Our key insight is that datacenter servers already have a hardware accelerator for coalescing non-contiguous I/O regions: the NIC itself. Modern NICs have scatter-gather engines for high-performance computing, e.g., to optimize MPI communication primitives [8, 32]. Networking stacks [9, 34] have re-purposed scatter-gather to manage sending packets that are larger than the maximum packet buffer size. Serialization differs from these use cases because it needs to move potentially many fields whose size and placement dynamically depend on external data or user requests. This section describes the design of a prototype serialization library for the popular Mellanox CX-5 [25] NIC.

#### 3.1 NIC Scatter-Gather Capabilities

Whether NIC scatter-gather can be used for high-performance serialization depends on its performance properties and restrictions.
This section focuses on the Mellanox CX-5; other modern scatter-gather NICs with PCIe interconnects likely behave similarly (§4.1). Given a list of I/O addresses, a CX-5 makes multiple PCIe requests to coalesce the memory into a single packet. The NIC supports up to 60 scattered memory chunks, but each chunk requires a NIC-to-PCIe round trip. The number of these round trips that can execute concurrently depends on hardware implementation details of the PCIe endpoint at the NIC and the CPU, which we do not currently know. To understand this penalty, we ran an experiment where the DPDK echo server described in Section 2.1 transmits a pre-initialized 1024-byte payload (no copies), divided equally into different numbers of chunks, to a single client. The RTT increases from 6 µs when the message is sent as a single buffer to 10.5 µs when it is sent as 60 scatter-gather chunks. Sending back the 1024-byte message as 16 chunks results in higher latency than using FlatBuffers to deserialize, reserialize, and transmit the request (which requires copying the payload into a contiguous buffer). In this experiment, the payload is either pre-initialized ("Zero-Copy") or copied into the packet ("Copy-Out"); the only discernible difference between copy-out and zero-copy starts at about 512 bytes. Additionally, entries much smaller than 256 bytes could hurt performance: when the NIC reads memory regions over PCIe, the PCIe controller sends back 256-byte memory chunks (the chunk size is a hardware setting), and each chunk contains a header, so the header could dominate in the case of small payloads. These results indicate that maximum performance on a CX-5 requires passing in I/O lists with entries that are at least 512 bytes large. The "maximum" number of entries in the I/O list depends on the size of each entry as well as how many concurrent DMAs can run. These tradeoffs preclude simple solutions, such as one scatter-gather operation per data structure field.
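To make these tradeoffs concrete, the toy model below (our construction, not from the paper) charges a fixed cost per "wave" of concurrent PCIe reads. The constants are assumptions fitted to the 6.0 µs (1 chunk) and 10.5 µs (60 chunks) endpoints reported above; the true PCIe concurrency is unknown.

```cpp
// Toy latency model for NIC scatter-gather transmission (illustrative only).
#include <cmath>
#include <cstdio>

double tx_time_us(int chunks, double base_us = 5.68,
                  double per_wave_us = 0.32, int concurrency = 4) {
    // Only `concurrency` PCIe reads proceed in parallel, so latency grows
    // with ceil(chunks / concurrency) "waves" of round trips.
    double waves = std::ceil((double)chunks / concurrency);
    return base_us + waves * per_wave_us;
}

int main() {
    for (int chunks : {1, 4, 16, 60})
        printf("%2d chunks -> ~%4.1f us RTT\n", chunks, tx_time_us(chunks));
}
```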
#### 3.2 Integrating Networking and Serialization

**Core Abstraction: Scatter-Gather Array.** Our serialization library's core abstraction is the scatter-gather array, shown in Listing 1. Scatter-gather arrays point to application data in its original memory location. When applications call serialize, the library produces a scatter-gather array that can be passed to the networking stack instead of a single contiguous buffer. Transmitting scatter-gather arrays is conceptually similar to calling the writev system call [12] in Linux with an iovec data structure, except that the Linux kernel still copies the iovec into a contiguous buffer before transmission. Section 4.3 discusses research challenges around ensuring application memory can be used for I/O directly.

**Serialization API.** Our prototype serialization library requires a zero-copy application interface. The generated setter functions store pointers to application memory directly, rather than moving the memory. Listing 2 shows the interface our library would produce for the simple data structure benchmarked in Section 2.1 and how an echo server could use the interface. However, the library only stores pointers for variable-sized values, such as strings, bytes, or nested objects. Maintaining pointers to integer fields would not improve performance (storing the pointer to an integer takes about the same space as storing the integer itself), so the serialize function copies integers into the object header. The header contains a bitmap that indexes which fields are present, followed by metadata for each field that is present. For the data structure in Listing 2, the corresponding scatter-gather array points to the object header in the first entry and to the string field in the second entry. The object header contains a bitmap that indexes whether the single field is present, plus an offset which points to the string field if it is present. The resulting wire format is similar to Cap'n Proto's. Our library can support nested objects and lists, like Cap'n Proto, FlatBuffers, and Protobuf. To support a nested field, the object header contains an offset to the nested object's header (if present). To support a list, the header stores the length of the list and an offset to the actual list data. The final scatter-gather array contains the object header in the first entry (including any nested header data), with pointers to string or bytes fields from the top-level object, as well as any nested objects or lists, in further entries.

**Deserialization API.** Deserialization requires turning the received payload back into a pointer-based data structure. This requires linearly scanning through all of the possible fields in the object schema, checking if they are present in the bitmap, and recasting each field offset into a pointer. While linearly scanning through all the fields may add overhead for a data structure with a large number of fields, deserialization itself involves no copies.

Listing 1: The scatter-gather array, the core abstraction for scatter-gather based serialization.

```c
// A scatter-gather array describes a logical packet as a list of
// (pointer, length) pairs into application memory.
struct ScatterGatherArray {
    size_t num_entries;
    void  *ptrs[MAX_ENTRIES];
    size_t length[MAX_ENTRIES];
};
```

Listing 2: Interface produced by our serialization library in C++, for the listed object schema (in Protobuf syntax), along with example code for an echo server. Unlike prior serialization interfaces, this interface uses zero-copy writes and reads: the serialization library avoids copying fields into a pre-allocated buffer and passes a scatter-gather array to the networking stack for transmission.

```cpp
// Object schema (in Protobuf syntax):
//   message Object { optional string msg = 1; }

// Generated interface:
class ObjectGenerated {
public:
    std::pair<char *, size_t> get_msg();
    void set_msg(const char *addr, size_t len);  // stores the pointer, no copy
    ScatterGatherArray serialize();
    void deserialize(const char *payload);
};

// Example echo server:
ObjectGenerated obj_recv, obj_send;
obj_recv.deserialize(connection.recv());
auto recved = obj_recv.get_msg();
obj_send.set_msg(recved.first, recved.second);
ScatterGatherArray sga_send = obj_send.serialize();
connection.send(sga_send);
```
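To make the header layout concrete, here is a hypothetical sketch (our illustration, not the paper's implementation) of what serialize could produce for the single-string Object above: entry 0 covers the object header, and entry 1 points directly at the application's string bytes. The `ObjectHeader` field layout is an assumption consistent with the bitmap-plus-offset description in the text.

```cpp
#include <cstdint>
#include <cstddef>

constexpr size_t MAX_ENTRIES = 60;   // CX-5 chunk limit from Section 3.1

struct ScatterGatherArray {          // as in Listing 1
    size_t num_entries;
    void  *ptrs[MAX_ENTRIES];
    size_t length[MAX_ENTRIES];
};

struct ObjectHeader {                // assumed layout: bitmap + field metadata
    uint32_t bitmap;                 // bit 0 set iff msg is present
    uint32_t msg_len;                // length of the msg field in bytes
    uint32_t msg_offset;             // offset of msg data past the header
};

ScatterGatherArray serialize_object(ObjectHeader *hdr,
                                    const char *msg, size_t msg_len) {
    hdr->bitmap     = msg ? 1u : 0u;
    hdr->msg_len    = static_cast<uint32_t>(msg_len);
    hdr->msg_offset = sizeof(ObjectHeader);       // msg follows the header on the wire

    ScatterGatherArray sga{};
    sga.ptrs[0]   = hdr;                          // entry 0: the header itself
    sga.length[0] = sizeof(ObjectHeader);
    sga.num_entries = 1;
    if (msg) {                                    // entry 1: zero-copy pointer to the
        sga.ptrs[1]   = const_cast<char *>(msg);  // application's string bytes
        sga.length[1] = msg_len;
        sga.num_entries = 2;
    }
    return sga;
}
```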
A fully integrated serialization library would know the location of any field's header information ahead of time. Deserialization could then be a constant-time operation, and the library could lazily recover the pointer for any given field when the programmer calls get_field. Zero-copy deserialization also causes the application to take ownership of data allocated in the networking stack's packet buffers, which the networking stack might need to reclaim later. Additionally, unless the application uses in-place updates when writing data from received packets (e.g., a put request in Redis), the deserialized data might need to be "re-scattered" into specific in-memory data structures, which requires copies. A fully integrated serialization library and networking stack would need to deal with memory safety and reclamation on the deserialization path (§4.4).

#### 3.3 Prototype Implementation

We implemented this approach for the echo server workload, for the data structure in Listing 2, in C++ on top of the same UDP networking stack for DPDK used in Section 2.1. We modified the DPDK datapath to produce a linked list of mbuf packet data structures given the scatter-gather array. The first mbuf contains the packet header with the serialization header copied in. The further mbufs point to the payloads referenced by the scatter-gather array using DPDK's attach_extbuf API. To comply with kernel bypass I/O memory requirements, the server directly initializes the data structure payload from pre-registered memory; Section 4.3 discusses strategies to ensure arbitrary application memory addresses can be used for I/O. The prototype achieves about 9.15 Gbps (the highest throughput measured under 15 µs of tail latency). The prototype's performance improves on all the serialization libraries and the 1-copy ("No Serialization") baseline, but falls about 1.2 Gbps short of the optimal DPDK throughput. We speculate this gap comes from inefficient use of scatter-gather entries (allocating an entire mbuf for just the packet header and object header). Nonetheless, this prototype shows that leveraging NIC scatter-gather is a promising way to accelerate serialization.
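The following condensed sketch (our reconstruction, not the authors' code) illustrates this transmit path: the first mbuf carries the copied-in headers, and each remaining scatter-gather entry is attached as an external buffer so the NIC gathers application memory directly. It reuses the ScatterGatherArray from Listing 1, assumes the payload regions are already registered with DPDK so that rte_mem_virt2iova() resolves, and elides error handling and shared-info (refcount/free-callback) setup.

```c
#include <string.h>
#include <rte_mbuf.h>
#include <rte_memory.h>

/* Reuses struct ScatterGatherArray from Listing 1. */
struct rte_mbuf *sga_to_mbuf_chain(struct rte_mempool *pool,
                                   const struct ScatterGatherArray *sga,
                                   struct rte_mbuf_ext_shared_info *shinfo)
{
    /* First mbuf: copy in the packet header + object header (entry 0). */
    struct rte_mbuf *head = rte_pktmbuf_alloc(pool);
    void *dst = rte_pktmbuf_append(head, (uint16_t)sga->length[0]);
    memcpy(dst, sga->ptrs[0], sga->length[0]);

    /* Remaining entries: attach application memory zero-copy. */
    for (size_t i = 1; i < sga->num_entries; i++) {
        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
        rte_pktmbuf_attach_extbuf(m, sga->ptrs[i],
                                  rte_mem_virt2iova(sga->ptrs[i]),
                                  (uint16_t)sga->length[i], shinfo);
        m->data_len = (uint16_t)sga->length[i];
        m->pkt_len  = m->data_len;
        rte_pktmbuf_chain(head, m);  /* updates head's pkt_len and nb_segs */
    }
    return head;
}
```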
### 4 Open Research Challenges

Many challenges remain in building general-purpose and usable serialization libraries that leverage NIC scatter-gather. This section covers four areas of future work.

#### 4.1 NIC Support for Scatter-Gather

Building a scatter-gather based serialization library requires modeling the performance trade-offs of scatter-gather, which can vary across NICs as well as device drivers. Modeling scatter-gather in current NICs also gives insight into how future NIC designs can better support scatter-gather based serialization. Section 3.1 shows that our PCIe-connected NIC adds overhead for transferring small payloads, so scatter-gather can only help for data structures with large enough payloads. Eliminating the PCIe interconnect in the NIC [24] could change these tradeoffs and make scatter-gather beneficial for data structures with smaller payloads. Additionally, understanding how to manage the number of concurrent PCIe requests would help model the time required to transmit any given scatter-gather array.

#### 4.2 Using Scatter-Gather Efficiently

Translating application data structures into scatter-gather arrays that work efficiently with a specific NIC requires optimizing the memory layout of the scatter-gather array. Data structures can vary in size (many fields or few fields), shape (differently-sized fields), and complexity (nested objects). Naively creating one scatter-gather entry per data structure field could add overhead, so the serialization library must modify the memory layout of the scatter-gather array before handing it to the NIC. This optimization encompasses coalescing some fields into larger buffers and keeping others as separate entries, given a model of scatter-gather performance.
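A sketch of one such coalescing pass follows (our illustration of the idea, not the paper's algorithm), using the roughly 512-byte cutoff from §3.1 as the assumed threshold: small fields are copied into a single pre-registered bounce buffer, while large fields stay as separate zero-copy entries.

```cpp
#include <cstring>
#include <vector>

struct Entry { const void *ptr; size_t len; };

std::vector<Entry> coalesce(const std::vector<Entry> &fields,
                            char *scratch,           // pre-registered bounce buffer
                            size_t threshold = 512) {
    std::vector<Entry> out;
    size_t used = 0, run_start = 0, run_len = 0;
    auto flush = [&] {                     // emit the current copied-in run, if any
        if (run_len) out.push_back({scratch + run_start, run_len});
        run_start = used; run_len = 0;
    };
    for (const Entry &f : fields) {
        if (f.len >= threshold) {          // big field: transmit zero-copy
            flush();
            out.push_back(f);
        } else {                           // small field: copy into the current run
            memcpy(scratch + used, f.ptr, f.len);
            used += f.len; run_len += f.len;
        }
    }
    flush();
    return out;
}
```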
#### 4.3 Accessing Application Memory for Zero-Copy I/O

A completely zero-copy serialization solution requires using arbitrary application memory for I/O, which raises issues related to programming effort and memory fragmentation. Kernel bypass requires that any memory used for I/O lives in pinned, backed pages, because the virtual-to-physical mappings of this memory must remain the same for the lifetime of the program. As a result, pinning an entire application's memory for kernel bypass I/O could lead the OS to set aside large amounts of memory that the application will never use. For memory-intensive datacenter workloads, this could impact the performance of other processes, or even the ability of other applications to share the infrastructure. Thus, the networking stack and serialization library must understand which application memory will be used for I/O and must be pinned. Pinning memory on demand in the networking stack seems promising but would hinder performance on the packet-processing fast path: on-demand pinning would tell the networking stack exactly which data needs to be pinned, but would add the overhead of a system call to packet transmission.

Some NICs have additional penalties to consider. Mellanox NICs require memory registration so the device can perform address translation, but the NIC can only hold a fixed number of address mappings. Fetching a mapping, which happens when the first address in a newly mapped region is transmitted, adds a 1 µs latency penalty. If the networking stack registers too many regions, some mappings might fall out of NIC memory, causing an effect similar to a cache miss.

A new class of kernel bypass-aware memory allocators [40, 42] could enable zero-copy dataflows, but raises research challenges related to application integration and memory fragmentation. Such allocators could pin large regions of memory up front and allocate "dataplane" memory directly into these regions, while allocating "control" memory into a normal heap. To do this transparently, allocators would need to understand which data must be registered, with minimal programming effort, perhaps with some sort of compiler-based control-flow analysis [4]. To enable multi-tenancy and minimize interference with other processes, the allocators also need to minimize memory fragmentation and understand how to give unused memory back to the OS.

#### 4.4 Providing Zero-Copy I/O with Memory Safety

A zero-copy serialization stack must provide memory safety, in the form of write and free protection during transmission, and a memory management scheme on the deserialization path. As the Demikernel paper [42] suggests, the memory allocator could provide free protection by adding a reference count to any buffers that are transmitted (see the sketch below). However, providing transparent, efficient write protection against concurrent memory accesses between the NIC and CPU is an open problem. Relying on Linux write protection would add the overhead of a page fault to kernel bypass applications [11]. The networking stack could adopt techniques from recent work [6] that uses cache invalidation to detect when addresses are being overwritten and respond accordingly, but this requires custom hardware. Relying on a memory-safe language such as Rust to build the serialization library and networking stack would not protect against read-write races between the NIC hardware and the CPU. On the deserialization path, the networking stack may need to eventually reclaim application buffers (e.g., if an application uses an in-place update to write a value from a received packet); if the application does not free received buffers in time, the networking stack could run out of memory.
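As a minimal illustration of the free-protection half, the sketch below follows the refcounting suggestion above (our illustration, not Demikernel's actual API): the application's free only returns memory once the NIC has finished reading it.

```cpp
#include <atomic>
#include <cstdlib>

struct TxBuffer {
    void *data;
    std::atomic<int> refcnt{1};          // 1 = owned by the application
};

// Networking stack, just before handing the buffer to the NIC:
void tx_begin(TxBuffer *b) { b->refcnt.fetch_add(1); }

// Completion handler, after the NIC's DMA read finishes:
void tx_release(TxBuffer *b) {
    if (b->refcnt.fetch_sub(1) == 1) { free(b->data); delete b; }
}

// The application's free: deferred if a transmission is still in flight.
void app_free(TxBuffer *b) {
    if (b->refcnt.fetch_sub(1) == 1) { free(b->data); delete b; }
}
```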
### 5 Related Work

**Serialization Acceleration.** Many libraries attempt to improve CPU-based serialization by optimizing their wire format [36, 38], employing SIMD parallelism for decoding [21], or reducing the overhead of type inference in dynamic serialization [20, 23]. These approaches do not remove the fundamental cost of moving memory in software. As a result, recent research proposes offloading serialization to custom accelerators [15, 28, 39] or directly into SSDs for storage [35]. Unlike these accelerators, scatter-gather functionality already exists in widely used NICs.

**Kernel Bypass Systems.** Our work is enabled by recent kernel bypass I/O frameworks that expose NIC interfaces directly to applications in userspace [14, 30, 34] to eliminate OS-level packet processing overheads. Many recent kernel bypass networking stacks [26, 27, 29, 42] build on top of these interfaces to provide APIs to applications while offering low latency, optimized thread scheduling, or zero-copy I/O. eRPC [17] offers general-purpose RPC and zero-copy networking on commodity networking hardware. None of these systems directly offers general-purpose, zero-copy data structure serialization as a programming primitive, which requires scatter-gather.

**Scatter-Gather Capabilities.** High-performance computing applications have used scatter-gather to optimize MPI all-to-all communication primitives [8] and to provide zero-copy communication over MPI derived datatypes [32]. Kesavan et al. [19] use scatter-gather to measure when zero-copy helps an in-memory database, but do not consider serialization of arbitrary data structures. Derecho [16], a recent SMR system, uses scatter-gather to provide zero-copy I/O for scattered data structures, but relies on specific data structure layouts provided by its memory allocator. We propose designing general-purpose serialization for application data in arbitrary memory layouts.

### 6 Conclusion

As link speeds increase, servers have fewer cycles to process each packet. Object serialization is a core component of datacenter systems, but it cannot keep up with modern networks. CPU-based software serialization is inherently inefficient because it relies on the CPU to perform data movement. We propose using a hardware capability already present in widely deployed NICs to accelerate serialization: NIC scatter-gather. Our prototype shows that by leveraging NIC scatter-gather to offload data movement from the CPU to the NIC, it is possible to build a zero-copy, zero-allocation serialization library. We identify several areas of future work: better hardware support for scatter-gather, using scatter-gather efficiently, providing transparent memory registration, and ensuring memory safety with zero-copy I/O.

### 7 Acknowledgements

We thank the anonymous HotOS reviewers, Akshay Narayan, Amy Ousterhout, Anirudh Sivaraman, Anuj Kalia, Jacob Nelson, Kostis Kaffes, Qian Li, Shoumik Palkar, and the members of the Stanford Future Data and SING Research groups for their invaluable feedback. This research was supported in part by affiliate members and other supporters of the Stanford DAWN project (Ant Financial, Facebook, Google, Infosys, NEC, and VMware), as well as Toyota Research Institute, Northrop Grumman, Cisco, SAP, and the NSF under CAREER grant CNS-1651570 and Graduate Research Fellowship grant DGE-1656518. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Toyota Research Institute ("TRI") provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.
ABSTRACT SAS®9 and SAS Enterprise Guide 3.0 provide an array of powerful tools for analyzing data and generating the information that organizations demand in order to address their business needs. This two-part tutorial focuses on Stored Processes within SAS Enterprise Guide. Part 1 presents the view of the "end user" or "information consumer." Part 2 focuses on the technical aspects of creating and deploying Stored Processes. In Part 1, we explore the flexible and responsive reporting environment and demonstrate benefits such as: - The user has easy access to Stored Processes in Microsoft Excel and Word via the SAS® Add-In for Microsoft Office. - Because the processing is done by the Stored Process, consistent answers are provided to all users. - Parameter–driven Stored Processes allow powerful and flexible “drill–down” capabilities. - Because Stored Processes use the original external data, the user is not limited by Excel row limitations. - The user can perform ad hoc analyses of the data, using many of the SAS analytical functions while working in Excel or Word. - Using the SAS® Information Delivery Portal, all users can access and use the same Stored Processes. In Part 2, we explore the tasks that are required to create and deploy Stored Processes, including: - Preparing the environment in SAS® Management Console. - Defining the metadata repository in the SAS Enterprise Guide Administrator. - Defining and creating Stored Processes. - Adding parameters (macro variables) to the appropriate Stored Processes for flexible reporting. - Defining reporting styles to be used. - Restricting access to data, reports, and Stored Processes. Authorized users can be defined individually or by group. INTRODUCTION Enterprise Guide provides a powerful development environment to create and deploy information for the information consumer. This paper focuses on the use of Enterprise Guide to develop Stored Processes that the information consumer accesses in their chosen environment. This is just one of many approaches to using Enterprise Guide. STORED PROCESSES – THE VIEW OF THE INFORMATION CONSUMER Stored Processes provide the information consumer with a powerful, flexible, and responsive reporting vehicle. And since Stored Processes can be accessed using Office products, the user works with tools they are already familiar with, rather than learning a new tool. Some of the possibilities include: - Access to Stored Processes in Excel / Word via the SAS Add-In for Microsoft Office. - Since the processing is done by the Stored Process, consistent and timely information is provided to all users. - Parameter Driven Stored Processes allow powerful and flexible “drill down” capabilities. - Since the Stored Process uses SAS to process the original external data, the user is not limited to usual Excel row limits. - The user can perform ad hoc analyses against the data, using many of the SAS analytical functions while working in Excel or Word. - Using the SAS Information Delivery Portal, all users can access and use the same Stored Processes. - Using Web Report Studio the user can produce reports against a standardized business view of the corporate data as well as using the same Stored Processes mentioned above. - Access to the information is controlled centrally via the SAS Management Console. EXAMPLE 1: SUPPLIER LISTS We all suffer from those folders of “outdated” information that we statically store on our computers. 
And having someone run a daily update of common reports to be stored as static files is not really productive. A better solution is live access to the data you need, in a format you can easily access and understand. Stored Processes, ideally created in Enterprise Guide, enable you to get up-to-date information when and how you need it. This first example is quite simple, just to get going. In later examples, we will increase the control and complexity of the reporting environment.

The first example Stored Process report produced by Enterprise Guide is a simple Excel listing of suppliers by country. When the Stored Process is created by the programmer in Enterprise Guide, the output format could also be RTF, PDF, or other formats. Before Stored Processes can be run in Microsoft Excel or Word, the SAS Add-In to Microsoft Office must be installed on your PC. Once installed, you are ready to begin running Stored Processes. To view the supplier listing in Excel,

- Start Excel
- Select SAS on the Excel menu bar
- Select Browse SAS Programs
- Log on to the SAS Metadata Server
- Navigate to and select the folder containing the Stored Processes
- Select the first Stored Process to run (Supplier Listing)
- Click Run
- The listing appears in Excel.

(Screenshot: the supplier listing displayed in Excel.)

In its present state, the Stored Process does not enable interaction. However, since the report is displayed in Excel, the user is free to enhance the report if needed without affecting the source data.

EXAMPLE 2: SALES CHANNEL ANALYSIS

In the first example, the Stored Process fully specified all reporting parameters. Often, Stored Processes are generalized so that the information consumer can specify reporting parameters such as time frame, region, format, etc. The Sales Channel Analysis demonstrates a simple parameter-driven report. From Excel,

- Select SAS on the Excel menu bar
- Select the Stored Process Sales Channel Analysis and Run
- In the dialog that appears, choose the year to analyze and Run. Notice that each Stored Process has placed its output on a separate Excel sheet
- Scroll to the bar chart and view the pictorial representation of the sales channels
- Right click on the chart and select Graph Toolbar. Notice the functionality that is available for altering the appearance of the chart to meet your needs.

While this example did enable user input (the year), all other reporting parameters were pre-defined as part of the Stored Process.

EXAMPLE 3: AD HOC REPORT GENERATOR

Stored Processes can let the user "design" a customized report via a simple parameter menu. In the past, this would be accomplished by passing the requirements to the IT department and waiting for the report to be defined and produced. This would probably have involved several iterations until the desired results were obtained. From Excel,

- Select SAS on the Excel menu bar
- Select the Stored Process **Ad Hoc Report Generator** and Run
- In the following dialog, select the parameters to define your report. Notice that all parameters in the example are required. However, Stored Processes can be created with a mix of required and optional parameters.
- Select **Run**. Again, note that each Stored Process creates a separate sheet in your Excel workbook, so you have all of the results available.

If you prefer to produce your reports in **WORD** instead of Excel, the steps are identical to those outlined for Excel.
Note that after producing your reports in **WORD**, you may need to adjust the pagination by manually inserting page breaks to meet your needs.

EXAMPLE 4: ACCESSING SAS DATA IN MICROSOFT OFFICE

In addition to being able to run Stored Processes, the SAS Add-In to Microsoft Office allows the Excel user to work with SAS data files. The real bonus of this feature is that you can access data in excess of the Excel maximum row limit! In this example, we will explore a SAS data table with 951,669 rows.

- On the Excel menu bar, select **SAS** ➔ **Open SAS Data Source**
- Log in to the SAS Metadata Server
- Select **Servers** ➔ the server to use ➔ **SAS data library** ➔ **SAS table** ➔ **Open**
- Use the **SAS Data Analysis** toolbar to explore the data
- Initially, just the first 5000 rows are displayed. To view more rows, click the first right arrow.
- If you want to move to the end of the table, click the second right arrow, which will show you that there are 951,669 rows in total.

The **SAS Data Analysis** toolbar enables you to subset and manipulate the data.

- To subset, select the filter tool (funnel shape)
- Using the dialog, select a subset: **country = Netherlands**. This will produce a view of 62,533 rows, but will in no way impact the original data.
- To sort the data, select the sort tool and specify the sort columns in the dialog.

In addition to viewing the data, you have the ability to analyze the data in MS WORD or EXCEL. For variety, switch to WORD.

- Select **SAS** from the WORD menu bar
- Select **Browse SAS Programs**
- Expand the **SAS Tasks** node and select **Graph**
- Select **Pie Chart**
- The **Pie Chart** property sheet will open. Note that the property sheet is the same as used in Enterprise Guide.
- On the property sheet, select **Simple Pie**
- Select **Task Roles** and drag the variables to the appropriate roles: **Column to Chart** and **Sum of**
- Provide title and footnote text by selecting **Titles**
- Run.

EXAMPLE 5: RUNNING STORED PROCESSES USING SAS WEB REPORT STUDIO AND SAS INFORMATION DELIVERY PORTAL

Stored Processes are available to run from many environments, aside from Microsoft Office. If you have SAS Web Report Studio or the SAS Information Delivery Portal, these environments can be customized to your preferences and can be used as your information delivery channel. To produce your report in SAS Web Report Studio,

- Open SAS Web Report Studio and log in
- Select Open Report
- Navigate to the Stored Processes folder
- Select the Stored Process to run
- View the results by selecting Reports ➔ Open.
- Repeat these steps for other Stored Processes.

The SAS Information Delivery Portal gives you the ability to organize information according to your needs and preferences. Before accessing the Stored Processes, add a custom portal page.

- Start the SAS Information Delivery Portal and log in
- Select Options ➔ Add (under Pages)
- Add a name and description for the page
- Select Add, and Done.

Now add a Portlet (container) for the Stored Process links.

- Select Options ➔ Portlet (under Current Page)
- Add a name and description for the portlet
- Select Add, and Done.

Now you are ready to add the links.

- Select the Edit icon on the portlet
- Click Add Items
- Use the Search tab to locate and select the SAS Stored Processes
- When finished selecting, select Add, and Done. You can reorder the items by using the arrows. When finished, select OK.
You are now ready to run the Stored Processes.

EXAMPLE 6: USING AN HTML APPLICATION IN YOUR BROWSER

SAS Stored Processes are not restricted to SAS client applications. In this example, an HTML application has been created to select and run the Stored Processes. Frames are used to surface a selection menu, which is the parameter interface for the selected report. For this example, when you start your browser and open the application, you would see a selection menu containing the available Stored Processes. Select the Stored Process you want to run, and supply the parameters as shown in the application below. Notice that the available Stored Processes are in the upper left of the window. When you select Run, your report displays in the frame on the right.

PART 1 – SUMMARY

In Part 1, we have focused on the powerful reporting and analytical environment available to the information consumer as a result of Enterprise Guide Stored Processes. Not only is information available on demand, but up-to-date data is as well, for those times when further analytics and exploration are required. In Part 2, we will focus on how to create what the information consumer needs!

ENTERPRISE GUIDE STORED PROCESSES

Enterprise Guide 3.0 provides a powerful environment and toolset for end-to-end reporting. Since the information consumer may not use Enterprise Guide for their reporting environment, Enterprise Guide also provides the flexibility to create Stored Processes that enable access to the data and reporting in an alternate delivery channel. Stored Processes can be designed to create reports in a variety of formats, depending on requirements:

- Static reports
- Parameter-driven reports
- Drillable reports.

Reports produced using Stored Processes can be designed for delivery

- as standard HTML (static only)
- via Microsoft Office products (Word, Excel) using the SAS Add-In for Microsoft Office
- via the SAS Information Delivery Portal
- via SAS Web Report Studio.

PREPARING THE ENVIRONMENT

Before development of reports or Stored Processes can begin, there are tasks to complete, including the setup of the operating system environment and the SAS Metadata environment. Although this may seem to add complexity to the process, typically most of the effort will only be expended once for a set of related projects, as they are likely to use the same resources and environment. Benefits of this approach include control of access to the corporate data and computing resources. Some key concepts we will reference in our discussion include:

- **METADATA SERVER** The SAS Metadata Server stores information about servers, users, and Stored Processes to provide to client applications. The information is stored in Metadata Repositories.
- **STORED PROCESS** A Stored Process is simply a set of SAS code that is available for execution by client applications.
- **STORED PROCESS SERVER** A Stored Process Server executes Stored Processes for client applications.

The **Administrator** will need to carry out the following tasks when setting up the operating system environment.

- Define user accounts and groups
- Define physical locations for Enterprise Guide projects, Stored Process code, and HTML applications
- Grant permissions, as appropriate
- Copy development HTML code into the production location, when ready
- Start the SAS Services Application: Start ➔ Programs ➔ SAS ➔ `profilename` ➔ Start SAS Services Application
- Start Tomcat (or alternative): Start ➔ Programs ➔ SAS ➔ `profilename` ➔ Start Tomcat.
Using the SAS Management Console, the **administrator** will define the metadata: - Define the Metadata Repository - Add libraries into the Data Library Manager - Import tables into the Data Library Manager - Create a location for the Stored Process metadata, in the **Stored Process Manager** - If using **SAS Web Report Studio**, copy the metadata for completed Stored Processes from the development folder into the Web Report Studio Shared folder in the Stored Process Manager. DEVELOPMENT EFFORTS As with any technology solution, development of the reports and Stored Processes benefits from careful planning, and proper preparation of the EG environment. Requirements gathering It is easy to send a report to someone, when requested, but deploying information to an entire organization benefits from up-front planning, including - Report requirements – determine requirements such as - Source data - Segmentation or summary levels - Granularity, for investigative purposes - Further tools required to explore the data - Deployment requirements – which delivery methods are to be used? (HTML, Web Report Studio, SAS Information Delivery Portal, Excel, Word, etc.). - Development requirements - Source data format, location, and access - Metadata - Execution location. - Security Requirements - Who requires access to top-level reports? - Which users are permitted access to more granular data, for further exploration? GETTING STARTED WITH ENTERPRISE GUIDE 3.0 Our focus will be on defining and creating the reports, Stored Processes, and applications that the Information Consumer will use. Note that the example reports are displayed in Part 1. PROGRAMMER ROLE: REPORTING VIA EG The Programmer’s role will be similar to their role in preparing standard reporting. However, a portion of the efforts will focus on the extensibility of the reporting. For example, the Stored Processes will enable the user to produce further analyses and reports, beyond those originally provided. As the programmer, you will - Perform any necessary data manipulation (e.g. Joins) - Generate reports & Graphs in Enterprise Guide - Create Stored Process to be used by the end user - Define reporting styles to be used. EXAMPLE 1: CREATING SUPPLIER LISTS In this first example, we produce a simple listing of supplier names and addresses by country, using data stored on the BI server. When the user accesses the report, they will always enjoy the most current data (as opposed to a static report created in some batch process). The goal is to create a Stored Process that produces the report whenever it is run. The steps for the report production are provided here, and the Stored Process is created in a later step. Creating the report: - Start Enterprise Guide and open a new project - Select the Repository and Server (Tools ➔ Options ➔ Administrator) - Add the data to the project - Select Open from SAS Server/Binder - Review the data and close the viewer - Create the report - Open the properties sheet (Describe ➔ List Data) - Drop the variables to the desired roles - Select appropriate options, titles, footnotes, etc - Run the task and view the results in Enterprise Guide and in the browser - Save the project. The Supplier report has been created. Now we need to create a Stored Process for access by the user. Creating the Stored Process: - In the Process Flow window, right click on the List Data node - Select Create Stored Process and assign a name, description and key words. 
- Step through the wizard to assign: metadata location, execution environment, library assignment, parameters (we will use the defaults for this report), output options, and the HTML-based user interface.

To test usage of the Stored Process and the HTML interface which was generated,

- Use Windows Explorer to navigate to the folder where the new interface was created
- Double click on the HTML file
- Select Run
- Log in when prompted and view the results.

Since the Stored Process always processes the most current data, your results are always up to date.

**EXAMPLE 2 – SUMMARY SALES REPORT AND BAR CHART**

The users are accustomed to receiving a summary sales report comparing countries and channels. Typically, the report would be made available on a periodic basis. It is desired to make the information available when needed, with the most current data available. Further, it would be nice to publish the report complete with a graphical representation of the sales figures. In this example, we will produce the summary report and a supporting graph. In the next example, we will create both together as a Stored Process.

Creating the report:

- Add the data to the project
- Select Open from SAS Server / Binder
- Navigate to the data to be used
- View the data when loaded, for confirmation
- Close the data viewer.
- Generate the report
- Select Describe ➔ Summary Tables to open the properties sheet
- Drag the appropriate variables to the desired roles: Analysis Variables role, Page by role, Classification variables role
- Define the layout for the summary table
- Select Summary Tables
- Drag the variables to the position in which they will appear in the summary table
- Select the statistics to display
- Specify the formats for the data cells
- Customize the column and row headings: in turn, click on each column and row, select Heading Properties, and enter an appropriate header. NOTE: a blank header suppresses the header from appearing in the table
- Right click in the Box Area and select Box Area Properties
- Select Show Page Heading.
- Customize the report title and footnote
- Run and view the report.

Enhance the summary report with a bar chart showing sales by continent, divided by country within continent. Each channel will be displayed in a separate bar chart. The data are the same as in the previous example. To produce the chart,

- Select the data used in the previous example
- Double click the Bar Chart task and select Stacked Vertical Bar
- Assign columns to the appropriate task roles, including Column to Chart, Stack, Sum of, and Group Charts by
- Select the Layout property sheet and change the Shape to Cylinder
- Select the Horizontal Axis property sheet and supply an appropriate label
- Select the Vertical Axis property sheet and supply an appropriate label
- Select the Titles property sheet and change the text
- Run and view the chart.

**EXAMPLE 3 – CREATING A PARAMETER-DRIVEN STORED PROCESS**

Once the users work with your Stored Processes for a while, you will find that they request extensibility and customization. Using the Summary Report and Bar Chart from the previous example, we will combine both into a single Stored Process and provide the user with the option to specify parameters. First, you will create a Code item to store the LIBNAME. In previous examples, this was not required, since when a Stored Process is created from a task in the project, a LIBNAME statement is automatically created.
**EXAMPLE 3 – CREATING A PARAMETER DRIVEN STORED PROCESS**
Once the users work with your Stored Processes for a while, you will find that they request extensibility and customization. Using the Summary Report and Bar Chart from the previous example, we will combine both into a single Stored Process and provide the user with the option to specify parameters.
First, you will create a Code item to store the LIBNAME. In previous examples, this was not required since, when a Stored Process is created from a task in the project, a LIBNAME statement is automatically created. But when creating a Stored Process from scratch, the LIBNAME statement is not automatically assigned. Time and effort are saved if the statement is copied from an existing Stored Process.
- Open the Supplier Listing Stored Process
- Select Preview code
- Locate the LIBNAME statement and copy it to the Windows clipboard
- Cancel out of the Stored Process window
- Select File ➔ New ➔ Code
- Paste the LIBNAME statement into the code window
- If required, modify the LIBNAME statement (for example, check that the semicolon is included at the end of the statement)
- Save the Code item to the local computer
- Close the SAS Code Editor window and the Stored Process.
Next, create the basic Stored Process.
- Select File ➔ New ➔ Stored Process
- Assign a Name, Description and Keywords for the Stored Process
- Click Next
- Click Insert SAS Code
- Select Project
- Select the LIBNAME entry
- Click OK
- Move the cursor to a new line after the code
- Repeat for the Summary Table and Bar Chart entries
- Continue creating the Stored Process as in the earlier example
- Test by producing the reports.
Now that the Stored Process is working, we are ready to add parameters. Our focus will be on subsetting the data for a particular sales year and then including a subtitle to show the selected year.
- Open the code for editing
  - Right click the Stored Process node and select Open
  - Select Preview code
  - Expand the code window to a useful size
  - Select Edit mode.
- Locate the first TITLE statement and add a second title line including a parameter for the sales year. **Remember to use double quotes so that the parameter can be resolved.**
  title2 "Summary for Sales Year &year";
- Locate the PROC TABULATE step and add a WHERE statement
- Locate the PROC SORT step in the Bar Chart section of the code and
  - add a WHERE statement: where year(order_date) = &year;
  - add Order_Date to the list of kept variables
- Locate the TITLE statement for the Bar Chart and add a second title line to show the sales year
- Close the SAS Code Editor.
Now we need to define the parameters to the Stored Process:
- Select the Parameters property sheet
- Click Add
- Select Parameters from SAS Code
- Skip the dsname parameter
- Enter a prompt for the year parameter
- Check the Required box and supply a default value
- Select the Constraints tab
- Select List of values from the pull-down menu
- Enter the years 1998 to 2002 into the List Values
- Click Add to go to the next parameter
- Skip the remaining parameters
- Close the Add Parameters dialog
- Select Save and Run
- Select a year to display and click Run
- View the results.
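To see what parameter resolution does at run time: if the user were to pick, say, 2001 (an illustrative value only), the Stored Process executes as though the edited statements read:

```sas
/* Illustrative: &year resolved to a user-selected value of 2001 */
title2 "Summary for Sales Year 2001";
where year(order_date) = 2001;
```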
EXAMPLE 4 – CONVERTING EXISTING SAS MACRO CODE TO A STORED PROCESS
In the past, to meet the demand to run ad hoc reports for the user community, we created a complex SAS macro program which contained a number of %let statements used to specify the content and layout of the report. By converting it to a parameter-driven Stored Process, we can make the program available to the users so that they can run it themselves, freeing up valuable development time for our programmers and providing a quicker turnaround for the users. What did the OLD way look like? Remember, this required the programmer (or somebody) to input values into the program prior to running. We all know how easy it is to accidentally type over a semicolon, thus wreaking havoc.

The OLD way
```sas
%let row=Country_Name;
%let col=Order_Category;
%let analvar=Order_Value;
%let statistic=max;
%let rowtotal=YES;
%let coltotal=YES;
%let title=My Table;
%let page=NONE;
%let year=2002;

%macro adhoc;
   %if &coltotal=YES %then %do;
      %let coltotal=ALL;
   %end;
   %else %do;
      %let coltotal=;
   %end;
   %if &rowtotal=YES %then %do;
      %let rowtotal=(ALL * &analvar * &statistic);
   %end;
   %else %do;
      %let rowtotal=;
   %end;
   %if &page = NONE %then %do;
      %let page = ;
      %let pagedim = ;
   %end;
   %else %do;
      %let pagedim = &page,;
      %let page = &page;
   %end;

   proc tabulate data=sugidata.full_order_details;
      where Order_Year = &year;
      class &row &col &page;
      var &analvar;
      table &pagedim &row &coltotal,
            (&col * &analvar * &statistic) &rowtotal;
      title1 "&title";
      title2 "Using data for &year";
   run;
%mend adhoc;
%adhoc;
```

The NEW way
![Image of Ad Hoc Report Generator]
Note: The appearance of the interface will vary depending on the viewing environment (browser, MS Office, Web Report Studio, Information Delivery Portal).
The original macro code could be converted manually by
- Adding Stored Process statements (*ProcessBody, %global, %stpbegin, %stpend)
- Removing %let statements
- Registering the Stored Process and defining the parameters in the Stored Process Manager in the SAS Management Console.
As an alternative, Enterprise Guide provides a one-stop environment for doing all of the above, with the additional benefit of reducing the risk of errors by automating some of the tasks (adding Stored Process statements, identifying parameters from the code, registering the Stored Process).
To load an existing SAS program into a new Stored Process:
- Select File ➔ New ➔ Stored Process
- Enter Name, Description and Keywords
- Click Insert SAS code ➔ Project ➔ LIBNAME.sas
- Position the cursor after the LIBNAME statement
- Click Insert SAS code ➔ Local Computer
- Navigate to your SAS program and select it
- Maximize the SAS Code Editor.
Once the program is loaded, you will need to remove the %let statements, since these will be replaced by Stored Process parameters. You should also review the code to make sure that any LIBREFs match the LIBNAME statement. After coding changes are complete, continue creating the Stored Process.
- Resize the SAS Code Editor
- Continue through the wizard until you reach the Parameters step
- Click Add ➔ Parameters from SAS code
- Supply a User prompt
- Supply a Default Value
- Check the Required box
- Select the Constraints tab, and set the constraints for the parameters
  - From the pull-down menu, select List of values
  - Type in the values that the user can choose
  - Select Single selection
- Click Add to go to the next parameter
- Continue supplying properties for each parameter
- When all parameters have been added, reply OK to the message
- Use the up / down arrows to re-sequence the parameters into the order they will be displayed when the Stored Process is run
- When complete, click Next
- Step through the remainder of the Stored Process wizard as in previous examples and, when complete, run the Stored Process.
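For orientation, here is a minimal sketch of the general pattern the converted program follows once the Stored Process statements named above (*ProcessBody, %global, %stpbegin, %stpend) are in place and the %let statements have been removed. This is an illustrative assumption of the overall shape, not the actual code Enterprise Guide generates:

```sas
*ProcessBody;     /* marks the start of the Stored Process code          */
%global row col analvar statistic rowtotal coltotal title page year;
%stpbegin;        /* initializes ODS output for delivery to the client   */

%macro adhoc;
   /* ...same macro logic as before, now driven by the parameters  */
   /* supplied at run time instead of hard-coded %let values...    */
%mend adhoc;
%adhoc;

%stpend;          /* finalizes and delivers the output                   */
```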
Did you get errors?? When creating a Stored Process from existing code, it is not uncommon to get one or two errors on the first run. These are usually just syntax or logic errors in the code, due to the changes you have made. To identify and fix any problems
- Right click on the Stored Process icon and open the log to identify the problem
- Double click the Stored Process icon, open the code editor and fix the problem
- Then close and rerun!
Now, no more requests to run these legacy reports. The users can run the reports themselves!

EXAMPLE 6 – CREATING AN APPLICATION
You may want to collect all of the Stored Processes in one container and enable access through a browser. This example will focus on creating an HTML application with frames to display menus and results. The final result enables parameter-driven reporting with a choice of reports.
Step 1 – Create an HTML page to run the application
- Use any HTML editor to create an HTML page (FrameApp.html) with 3 frames arranged as above (a sketch of this page appears at the end of this example)
- Name the frames mainmenu, appmenu and results
Step 2 – Create the main menu
- Use any HTML editor to create an HTML page (MainMenu.html) to display the applications to be run
- When an application is chosen, the menu for that application is to be displayed in the appmenu frame
Code for MainMenu.html
```html
<html>
<head>
<title>MainMenu</title>
</head>
<body>
<a href="Supplier_Listing_frame.html" target="appmenu">Supplier Listing</a><br>
<a href="Sales_Channel_Analysis_frame.html" target="appmenu">Sales Channel Analysis</a><br>
<a href="Ad_Hoc_Report_Generator_frame.html" target="appmenu">Ad Hoc Report Generator</a>
</body>
</html>
```
Step 3 – Create the default pages for the appmenu and results frames
- Use any HTML editor to create two blank HTML pages (**AppMenu.html** and **Results.html**)
**Code for AppMenu.html**
```html
<html>
<head>
<title>Application Menu</title>
</head>
<body>
</body>
</html>
```
**Code for Results.html**
```html
<html>
<head>
<title>Results</title>
</head>
<body>
</body>
</html>
```
Step 4 – Create application HTML files
- Create copies of the HTML interfaces produced earlier when the Stored Processes were built in Enterprise Guide. Note: **this isn't essential, as we could use the original interfaces, but this way we can**
  - Make any customisations we might want (for example, remove the option to display the SAS log)
  - Have the original code as a backup in case we make any mistakes
  - Still use the original interface in other applications such as the SAS Information Delivery Portal
- In each interface, locate the **FORM** tag and add a **TARGET** parameter to display the results in the **results** frame.
**Amended FORM tag**
```html
<form method="post" action="http://localhost:8080/SASStoredProcess/do"
      onsubmit="return false;" target="results">
```
Step 6 – Test the application
- Test the application by opening the FrameApp.html page in your browser. **Note:** the SAS Services Application and Tomcat (or an alternative) must be running on the application server.
- Click an application from the list in the upper left
- Choose the options from the parameters listed
- Click Run.
Your HTML application is ready for "Prime Time". Enjoy!
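Step 1 describes FrameApp.html but the paper does not show its code. A minimal sketch of such a frameset page is given below; it assumes only the frame names from Step 1, and the column/row split is one plausible arrangement (the menus on the left, matching "the list in the upper left"), not the paper's actual layout:

```html
<html>
<head>
<title>Stored Process Application</title>
</head>
<!-- Left column holds the two menu frames; the right side shows results -->
<frameset cols="25%,75%">
  <frameset rows="40%,60%">
    <frame src="MainMenu.html" name="mainmenu">
    <frame src="AppMenu.html" name="appmenu">
  </frameset>
  <frame src="Results.html" name="results">
</frameset>
</html>
```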
**SOFTWARE REQUIREMENTS**
The examples and features discussed in this paper used the SAS software as supplied in the SAS Enterprise BI Server Technology Solution.

**CONCLUSION**
Enterprise Guide 3.0 provides a powerful environment and toolset for end-to-end reporting. As demonstrated in this paper, effort expended during development and setup enables flexible reporting and analytics to meet your organization's business needs.

RECOMMENDED READING
To gain a more complete understanding of Enterprise Guide 3.0, the authors recommend the following publications and web links.

CONTACT INFORMATION
Your comments and questions are valued and encouraged. Contact the authors at:
Marje Fecht
Prowerk Consulting LLC
Email: marje.fecht@prowerk.com
Web: www.prowerk.com

Peter Bennett
SAS
Email: peter.bennett@suk.sas.com

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies.
{"Source-Url": "https://support.sas.com/resources/papers/proceedings/proceedings/sugi31/258-31.pdf", "len_cl100k_base": 6901, "olmocr-version": "0.1.50", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 43224, "total-output-tokens": 8019, "length": "2e12", "weborganizer": {"__label__adult": 0.00032401084899902344, "__label__art_design": 0.0005388259887695312, "__label__crime_law": 0.0004296302795410156, "__label__education_jobs": 0.0036106109619140625, "__label__entertainment": 0.00014972686767578125, "__label__fashion_beauty": 0.0001760721206665039, "__label__finance_business": 0.01148223876953125, "__label__food_dining": 0.0003786087036132813, "__label__games": 0.0005521774291992188, "__label__hardware": 0.0009489059448242188, "__label__health": 0.00029397010803222656, "__label__history": 0.00032806396484375, "__label__home_hobbies": 0.0001806020736694336, "__label__industrial": 0.0012531280517578125, "__label__literature": 0.0002582073211669922, "__label__politics": 0.00033164024353027344, "__label__religion": 0.000347137451171875, "__label__science_tech": 0.0287628173828125, "__label__social_life": 0.0001928806304931641, "__label__software": 0.33837890625, "__label__software_dev": 0.6103515625, "__label__sports_fitness": 0.0002067089080810547, "__label__transportation": 0.0004799365997314453, "__label__travel": 0.00031375885009765625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32484, 0.005]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32484, 0.25176]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32484, 0.83195]], "google_gemma-3-12b-it_contains_pii": [[0, 3347, false], [3347, 4798, null], [4798, 5838, null], [5838, 6667, null], [6667, 6814, null], [6814, 7079, null], [7079, 8378, null], [8378, 8993, null], [8993, 10185, null], [10185, 10516, null], [10516, 11646, null], [11646, 14501, null], [14501, 17372, null], [17372, 19956, null], [19956, 22931, null], [22931, 24987, null], [24987, 25842, null], [25842, 28268, null], [28268, 29391, null], [29391, 31422, null], [31422, 32484, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3347, true], [3347, 4798, null], [4798, 5838, null], [5838, 6667, null], [6667, 6814, null], [6814, 7079, null], [7079, 8378, null], [8378, 8993, null], [8993, 10185, null], [10185, 10516, null], [10516, 11646, null], [11646, 14501, null], [14501, 17372, null], [17372, 19956, null], [19956, 22931, null], [22931, 24987, null], [24987, 25842, null], [25842, 28268, null], [28268, 29391, null], [29391, 31422, null], [31422, 32484, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32484, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], 
[5000, 32484, null]], "pdf_page_numbers": [[0, 3347, 1], [3347, 4798, 2], [4798, 5838, 3], [5838, 6667, 4], [6667, 6814, 5], [6814, 7079, 6], [7079, 8378, 7], [8378, 8993, 8], [8993, 10185, 9], [10185, 10516, 10], [10516, 11646, 11], [11646, 14501, 12], [14501, 17372, 13], [17372, 19956, 14], [19956, 22931, 15], [22931, 24987, 16], [24987, 25842, 17], [25842, 28268, 18], [28268, 29391, 19], [29391, 31422, 20], [31422, 32484, 21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32484, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
10146887e2f96438cbf6c706ce6a12b14faa20b0
Overview
The purpose of this assignment is to give you experience using first-class and higher-order functions so that you can incorporate them into your programming practice. You will use existing higher-order functions, define higher-order functions that consume functions, and define higher-order functions that return functions. The assignment builds on what you've already done, and it adds new ideas and techniques that are described in sections 2.7, 2.8, and 2.9 of Build, Prove, and Compare.

Setup
The executable μScheme interpreter is in /comp/105/bin/uscheme; if you are set up with use comp105, you should be able to run uscheme as a command. The interpreter accepts a -q ("quiet") option, which turns off prompting. Your homework will be graded using uscheme. When using the interpreter interactively, you may find it helpful to use ledit, as in the command
ledit uscheme
We don't set you up with a template—by this time, you know how to identify solutions and where to put contracts, algebraic laws, and tests.

Dire Warnings
The μScheme programs you submit must not use any imperative features. Banish set, while, print, println, printu, and begin from your vocabulary! If you break this rule for any exercise, you get No Credit for that exercise. You may find it useful to use begin and println while debugging, but they must not appear in any code you submit. As a substitute for assignment, use let or let*.
Except as noted below, do not define helper functions at top level. Instead, use let or letrec to define helper functions. When you do use let to define inner helper functions, avoid passing as parameters values that are already available in the environment.
Your solutions must be valid μScheme; in particular, they must pass the following test:
/comp/105/bin/uscheme -q < myfilename > /dev/null
without any error messages or unit-test failures. If your file produces error messages, we won't test your solution and you will earn No Credit for functional correctness. (You can still earn credit for structure and organization.) If your file includes failing unit tests, you might possibly get some credit for functional correctness, but we cannot guarantee it.
We will evaluate functional correctness by testing your code extensively. Because this testing is automatic, each function must be named exactly as described in each question. Misnamed functions earn No Credit.

Reading Comprehension (10 percent)
Answer these questions before starting the rest of the assignment. As usual, you can download the questions (cqs.hofs.txt).
1. The first step in this assignment is to learn the standard higher-order functions on lists, which you will use a lot. Suppose you need a list, or a Boolean, or a function—what can you call? Review Sections 2.7.2, 2.8.1, and 2.8.2. Now consider each of the following functions:
map filter exists? all? curry uncurry foldl foldr
Put each function into exactly one of the following four categories:
(B) Always returns a Boolean
(F) Always returns a function
(L) Always returns a list
(A) Can return anything (including a Boolean, a function, or a list)
After each function, write (B), (F), (L), or (A): map filter exists? all? curry uncurry foldl foldr
2. Here are the same functions again:
map filter exists? all? curry uncurry foldl foldr
For each function, say which of the following five categories best describes it. Pick the most specific category (e.g., (S) is more specific than (L) or (M), and all of these are more specific than (?)).
(S) Takes a list and a function and always returns a list of the same size
(L) Takes a list and a function and always returns a list of at least the same size
(M) Takes a list and a function and always returns a list of at most the same size
(?) Might return a list
(V) Never returns a list
After each function, write (S), (L), (M), (?), or (V): map filter exists? all? curry uncurry foldl foldr
3. Here are the same functions again:
map filter exists? all? curry uncurry foldl foldr
Put each function into exactly one of the following categories. Always pick the most specific category (e.g. (F2) is more specific than (F)).
(F) Takes a single argument: a function
(F2) Takes a single argument: a function that itself takes two arguments
(+) Takes more than one argument
After each function, write (F), (F2), or (+): map filter exists? all? curry uncurry foldl foldr
You are now ready to tackle most parts of exercise 14.
4. Review the difference between foldr and foldl in section 2.8.1. You may also find it helpful to look at their implementations in section 2.8.3, which starts on page 130; the implementations are at the end.
(a) Do you expect (foldl + 0 '(1 2 3)) and (foldr + 0 '(1 2 3)) to be the same or different?
(b) Do you expect (foldl cons '() '(1 2 3)) and (foldr cons '() '(1 2 3)) to be the same or different?
(c) Look at the initial basis, which is summarized on page 156. Give one example of a function, other than + or cons, that can be passed as the first argument to foldl or foldr, such that foldl always returns exactly the same result as foldr.
(d) Give one example of a function, other than + or cons, that can be passed as the first argument to foldl or foldr, such that foldl may return a different result from foldr.
You are now ready to tackle all parts of exercises 14 and 15.
5. Review function composition and currying, as described in section 2.7.2, which starts on page 125. Then judge the proposed algebraic laws below, which propose equality of functions, according to these rules:
- Assume that names curry, o, <, *, cons, even?, and odd? have the definitions you would expect, but that m may have any value.
- Each law proposes to equate two functions. If the functions are equal—which is to say, when both sides are applied to an argument, they always produce the same result—then mark the law Good. But if there is any argument on which the left-hand side produces different results from the right, mark the law Bad.
Mark these laws:
- (((curry <) m) == (lambda (n) (< m n)))
- (((curry <) m) == (lambda (n) (< n m)))
- (((curry cons) 10) == (lambda (xs) (cons 10 xs)))
- (o odd? (lambda (n) (* 3 n))) == odd?
- (o even? (lambda (n) (* 4 n))) == even?
You are now ready to tackle the first three parts of exercise 19, as well as problem M below.
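To get a feel for how curry and o behave before marking the laws, it can help to run a couple of deliberately neutral examples in uscheme. These are illustrative only (they are not answers to any question above) and use only names from the initial basis:

```
;; Illustrative sketch only -- not an answer to the questions above.
;; curry turns a two-argument function into a one-argument function:
(val add3 ((curry +) 3))
(check-expect (add3 4) 7)

;; o composes two functions; the right-hand function is applied first:
(val inc-then-double (o (lambda (n) (* 2 n)) (lambda (n) (+ n 1))))
(check-expect (inc-then-double 3) 8)   ; (* 2 (+ 3 1))
```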
Programming and Proof (90 percent)
Overview
For this assignment, you will do Exercises 14 (b–f, h, j), 15, and 19, from pages 209 to 212 of Build, Prove, and Compare, plus the exercises A, G1, G2, G3, M, and O below. A summary of the initial basis can be found on page 156. A copy was handed out in class—while you're working on this homework, keep it handy. Each top-level function you define must be accompanied by a contract and unit tests. Each internal function written with lambda should be accompanied by a contract, but internal functions cannot be unit-tested. Algebraic laws are required only where noted below.

Book problems
14. Higher-order functions. Do exercise 14 on page 209 of Build, Prove, and Compare, parts (b) to (f), part (h), and part (j). You must not use recursion—solutions using recursion will receive No Credit. This restriction applies only to code you write. For example, gcd, which is defined in the initial basis, may use recursion. Because you are not defining recursive functions, you need not write any algebraic laws. For this problem only, you may define one helper function at top level.
Related reading: For material on higher-order functions, see sections 2.8.1 and 2.8.2 starting on page 128. For material on curry, see section 2.7.2, which starts on page 125.
15. Higher-order functions. Do exercise 15 on page 210. You must not use recursion—solutions using recursion will receive No Credit. As above, this restriction applies only to code you write. Because you are not defining recursive functions, you need not write any algebraic laws.
For this problem, you get full credit if your implementations return correct results. You get extra credit if you can duplicate the behavior of exists? and all? exactly. To earn the extra credit, it must be impossible for an adversary to write a μScheme program that produces different output with your version than with a standard version. However, the adversary is not permitted to change the names in the initial basis.
Related reading: Examples of foldl and foldr are in sections 2.8.1 and 2.8.2 starting on page 128. You may also find it helpful to study the implementations of foldl and foldr in section 2.8.3, which starts on page 130; the implementations are at the end. Information on lambda can be found in section 2.7, on pages 118 to 121.
19. Functions as values. Do exercise 19 on page 212 of Build, Prove, and Compare. You cannot represent these sets using lists. If any part of your code to construct or to interrogate a set uses cons, car, cdr, or null?, you are doing the problem wrong.
Do all four parts:
• Parts (a) and (b) require no special instructions.
• In part (c), your add-element function must take two parameters: the element to be added as the first parameter and the set as the second parameter. When you code part (c), compare values for equality using the equal? function. To help you design part (c), put comments in your source code that complete the right-hand sides of the following algebraic laws:
(member? x (add-element x s)) == ...
(member? x (add-element y s)) == ..., where (not (equal? y x))
(member? x (union s1 s2)) == ...
(member? x (inter s1 s2)) == ...
(member? x (diff s1 s2)) == ...
• In part (d), when you code the third approach to polymorphism, write a function set-ops-from which places your set functions in a record. To define record functions, use the syntactic sugar described in the book in Section 2.16.6 on page 191. In particular, be sure your code includes this record definition:
(record set-ops (empty member? add-element union inter diff))
Code your solution to part (d) as a function set-ops-from, which will accept one argument (an equality predicate) and will return a record created by calling make-set-ops. Your function might look like this:
```
(define set-ops-from (eq?)
  (let ([empty       ...]
        [member?     ...]
        [add-element ...]
        [union       ...]
        [inter       ...]
        [diff        ...])
    (make-set-ops empty member? add-element union inter diff)))
```
Fill in each ... with your own implementations. Each implementation is like one you wrote in part (c), except instead of using the predefined equal?, it uses the parameter eq?—that is what is meant by "the third approach to polymorphism." No additional laws are needed for part (d).
To help you get part (d) right, we recommend that you use these unit tests:
```
(check-assert (procedure? set-ops-from))
(check-assert (set-ops? (set-ops-from =)))
```
And to write your own unit tests for the functions in part (d), you may use these definitions:
```
(val atom-set-ops (set-ops-from =))
(val nullset (set-ops-empty atom-set-ops))
(val member? (set-ops-member? atom-set-ops))
(val add-element (set-ops-add-element atom-set-ops))
(val union (set-ops-union atom-set-ops))
(val inter (set-ops-inter atom-set-ops))
(val diff (set-ops-diff atom-set-ops))
```
Related reading: For functions as values, see the examples of lambda in the first part of section 2.7 on page 118. For function composition and currying, see section 2.7.2. For polymorphism, see section 2.9, which starts on page 132.

Relating imperative code to functional code
A. Good functional style. The Impcore-with-locals function
```
(define f-imperative (y) (locals x)
  (begin
    (set x e)
    (while (p? x y)
      (set x (g x y)))
    (h x y)))
```
is in a typical imperative style, with assignment and looping. Write an equivalent μScheme function f-functional that doesn't use the imperative features begin (sequencing), while (goto), and set (assignment).
• Assume that p?, g, and h are free variables which refer to externally defined functions.
• Assume that e is an arbitrary expression.
• Use as many helper functions as you like, as long as they are defined using let or letrec and not at top level.
• You need not write any algebraic laws.
Hint #1: If you have trouble getting started, rewrite while to use if and goto. Now, what is like a goto?
Hint #2: (set x e) binds the value of e to the name x. What other ways do you know of binding the value of an expression to a name?
Don't be confused about the purpose of this exercise. The exercise is a thought experiment. We don't want you to write and run code for some particular choice of g, h, p?, e, x, and y. Instead, we want you to write a function that works the same as f-imperative given any choice of g, h, p?, e, x, and y. So, for example, if f-imperative would loop forever on some inputs, your f-functional must also loop forever on exactly the same inputs. Once you get your mind twisted in the right way, this exercise should be easy. The point of the exercise is not only to show that you can program without imperative features, but also to help you develop a technique for eliminating such features.
Related reading: No part of the book bears directly on this question. You're better off reviewing your experience with recursive functions and perhaps the solutions for the Scheme assignment.

Graph problems
From COMP 15, you should be familiar with graphs and graph algorithms. In the next few problems you will work with an immutable representation of directed graphs: a graph is represented by an association list in which each node is associated with a list of its immediate successors. This representation is called a successors map. (It is a close cousin to the widely used "adjacency list.") For example, the ASCII-art graph
```
A --> B --> C
|           ^
|           |
+-----------+
```
could be represented as a successors map by '([A [B C]] [B [C]] [C []]).
Note: The graph problems below can be solved using only first-order functions. But you will find the problems much easier if you use let, lambda, and either of the fold functions.
Related reading: The previous assignment. The definitions of equal? in section 2.3.1 (basic recursive functions on lists). Material on association lists in section 2.3.6.

G1. List of edges. An edge is represented by a record
```
(make-edge N1 N2)
```
where N1 and N2 are nodes.
Define make-edge using the following record definition: ``` (record edge [from to]) ``` Function edge-list consumes a graph represented as a successors map and returns a list of all the edges in the graph. Edges may be listed in any order. For example, here are acceptable responses for a list of the edges in the graph pictured above: ``` (list3 (make-edge A B) (make-edge B C) (make-edge A C)) ``` ``` (list3 (make-edge A B) (make-edge A C) (make-edge B C)) ``` Define function edge-list. Algebraic laws are optional, but unit tests are required. Hints: - You can solve this problem with algebraic laws, but you need to work at a high level of abstraction. Start by observing that a successors map has one of these two forms: - '() - (bind node successors graph), where node is a node, successors is a list of nodes, and graph is a graph represented as a successors map In the second form, extract node and successors using predefined functions alist-first-key and alist-first-attribute, and extract graph using cdr. - There are plenty of lists in this problem. You will have an easier time if you find a way to use the predefined list functions, together with something you define that can add an edge to a list of edges. - By 105 standards, the solution to this problem requires a lot of code. To keep it manageable, use let or let*. G2. Graph-building: adding an edge. Function add-edge takes two arguments: an edge made with make-edge and a graph that is represented as a successors map. It returns a new graph that is like the original, except that the new graph has had the given edge added to it. Depending on whether the from node already appears in the graph, it may have to be added. (Determine its appearance using equal?) For any edge e and graph g, function add-edge satisfies this algebraic law: (permutation? (cons e (edge-list g)) (edge-list (add-edge e g))) Implement add-edge, and in addition, write the following: - At least one unit test using check-assert and the law above - At least one unit test using check-expect with an empty graph - At least one unit test using check-expect with a nonempty graph You may include the implementation of permutation? from the solutions to the previous homework. Here are our requirements for algebraic laws: - Each recursive function you define must be specified using algebraic laws. - If none of your functions are recursive, you need not write any algebraic laws. Hint: We know of at least two entirely different ways of coding add-edge: - The first way is to treat the association list (that is, the successors map) entirely as an abstraction. That is, use only find, bind, and the “laws of association lists” shown in lecture—never look directly at the representation. To succeed in this way, you will have to understand what happens when you call find on a key that is not there. - The second way is to get down in the weeds with the representation of the association list. You will wind up using car and cdr, and if you are smart you will also use alist-first-key and alist-first-attribute. We recommend coding the first way, and we recommend avoiding recursion. **G3. Graph update: removing a node.** Calling `(remove-node node graph)` returns a new graph that is like graph, but with all references to node removed: - No value equal? to node appears as a key in the representation. - No value equal? to node appears as the successor of any other node. If the original graph does not mention node, then `(remove-node node graph)` returns a new graph that is equal? to the original. 
Implement `remove-node`. For full credit, implement `remove-node` without using any recursion. If you do choose to use recursion, specify each recursive function by giving algebraic laws.

Calculational reasoning about functions
**M. Reasoning about higher-order functions.** Using the calculational techniques from Section 2.4.5, which starts on page 107, prove that
```
(o ((curry map) f) ((curry map) g)) == ((curry map) (o f g))
```
To prove two functions equal, prove that when applied to equal arguments, they return equal results. Take the following laws as given:
```
((o f g) x)          == (f (g x))   ; apply-compose law
(((curry f) x) y)    == (f x y)     ; apply-curried law
```
Using these laws should keep your proof relatively simple.
**Related reading**: Section 2.4.5. The definitions of composition and currying in section 2.7.2. Example uses of `map` in section 2.8.1. The definition of `map` in section 2.8.3.

Ordered lists
**O. Ordered lists.** I said in class that in most cases, a function that consumes lists uses the obvious inductive structure on lists: a list is either empty or is made with `cons`. Here is a problem that requires a more refined inductive structure.
Define a function `ordered-by?` that takes one argument—a comparison function that represents a transitive relation—and returns a predicate that tells if a list is totally ordered by that relation. Assuming the comparison function is called `precedes?`, here is an inductive definition of a list that is ordered by `precedes?`:
- The empty list is ordered by `precedes?`.
- A singleton list is ordered by `precedes?`.
- A list of the form `(cons x (cons y zs))` is ordered by `precedes?` if the following properties hold:
  - `x` is related to `y`, which is to say `(precedes? x y)`.
  - List `(cons y zs)` is ordered by `precedes?`.
Here are some examples. Note the parentheses surrounding the calls to `ordered-by?`.
```
-> ((ordered-by? <) '(1 2 3))
#t
-> ((ordered-by? <=) '(1 2 3))
#t
-> ((ordered-by? <) '(3 2 1))
#f
-> ((ordered-by? >=) '(3 2 1))
#t
-> ((ordered-by? >=) '(3 3 3))
#t
-> ((ordered-by? =) '(3 3 3))
#t
```
Hints:
- The structure of your function should be informed by the structure of the inductive definition of what it means for a list to be ordered by a relation. To elicit that structure, write algebraic laws.
- For the code itself, you will need `letrec`.
- We recommend that your submission include the following unit tests, which help ensure that your function has the correct name and takes the expected number of parameters.
```
(check-assert (procedure? ordered-by?))
(check-assert (procedure? (ordered-by? <)))
(check-error (ordered-by? < '(1 2 3)))
```
Related reading: Section 2.9, which starts on page 132. Especially the polymorphic sort in section 2.9.2—the `lt?` parameter to that function is an example of a transitive relation. Section 2.7.2. Example uses of `map` in section 2.8.1. The definition of `map` in section 2.8.3.

What and how to submit
You must submit four files:
- A `README` file containing
  - The names of the people with whom you collaborated
  - A list identifying which problems you solved
  - A note identifying any extra-credit work you did
- A `cqs.hofs.txt` containing the reading-comprehension questions with your answers edited in
- A PDF file `semantics.pdf` containing the solutions to Exercise **M**.
If you already know LaTeX (http://www.latex-project.org/), by all means use it. Otherwise, write your solution by hand and scan it. Do check with someone else who can confirm that your work is legible—if we cannot read your work, we cannot grade it.
- A file `solution.scm` containing the solutions to Exercises 14 (b–f, h, j), 15, 19, A, G1, G2, G3, and O. You must precede each solution by a comment that looks something like this:
```
;;
;; Problem A
;;
```
As soon as you have the files listed above, run submit105-hofs to submit a preliminary version of your work. Keep submitting until your work is complete; we grade only the last submission.

Avoid common mistakes
Listed below are some common mistakes, which we encourage you to avoid.
*Passing unnecessary parameters.* In this assignment, a very common mistake is to pass unnecessary parameters to a nested helper function. Here's a silly example:
```
(define sum-upto (n)
  (letrec ([sigma (lambda (m n)          ;;; UGLY CODE
                    (if (> m n)
                        0
                        (+ m (sigma (+ m 1) n))))])
    (sigma 1 n)))
```
The problem here is that the `n` parameter to `sigma` never changes, and it is already available in the environment. To eliminate this kind of problem, don't pass the parameter:
```
(define sum-upto (n)
  (letrec ([sum-from (lambda (m)         ;;; BETTER CODE
                       (if (> m n)
                           0
                           (+ m (sum-from (+ m 1)))))])
    (sum-from 1)))
```
I've changed the name of the internal function, but the only other things that are different are that I have removed the formal parameter from the `lambda` and I have removed the second actual parameter from the call sites. I can still use `n` in the body of `sum-from`; it's visible from the definition.
An especially good place to avoid this mistake is in your definition of `ordered-by?` in problem O.
Another common mistake is to fail to redefine functions `length` and so on in exercise 15. Yes, we really want you to provide new definitions that replace the existing functions, just as the exercise says.

How your work will be evaluated
Structure and organization
The criteria in the general coding rubric (../coding-rubric.html) apply. As always, we emphasize *contracts* and *naming*. In particular, unless the contract is obvious from the name and from the names of the parameters, an inner function defined with *lambda* in a *let* form needs a contract.
There are a few new criteria related to let, lambda, and the use of basis functions. The short version is: use the functions in the initial basis; don't redefine them.
<table>
<thead>
<tr>
<th>Exemplary</th>
<th>Satisfactory</th>
<th>Must Improve</th>
</tr>
</thead>
<tbody>
<tr>
<td>• Short problems are solved using simple anonymous lambda expressions, not named helper functions.</td>
<td>• Most short problems are solved using anonymous lambdas, but there are some named helper functions.</td>
<td>• Most short problems are solved using named helper functions; there aren't enough anonymous lambda expressions.</td>
</tr>
<tr>
<td>• When possible, inner functions use the parameters and let-bound names of outer functions directly.</td>
<td>• An inner function is passed, as a parameter, the value of a parameter or let-bound variable of an outer function, which it could have accessed directly.</td>
<td>• Functions in the initial basis are redefined in the submission.</td>
</tr>
<tr>
<td>• The initial basis of μScheme is used effectively.</td>
<td>• Functions in the initial basis, when used, are used correctly.</td>
<td></td>
</tr>
</tbody>
</table>

**Functional correctness**
In addition to the usual testing, we'll evaluate the correctness of your translation in problem A. We'll also want appropriate list operations to take constant time.

<table>
<thead>
<tr>
<th></th>
<th>Exemplary</th>
<th>Satisfactory</th>
<th>Must Improve</th>
</tr>
</thead>
<tbody>
<tr>
<td>Correctness</td>
<td>• The translation in problem A is correct. • Your code passes every one of our stringent tests. • Testing shows that your code is of high quality in all respects.</td>
<td>• The translation in problem A is almost correct, but an easily identifiable part is missing. • Testing reveals that your code demonstrates quality and significant learning, but some significant parts of the specification may have been overlooked or implemented incorrectly.</td>
<td>• The translation in problem A is obviously incorrect, or course staff cannot understand the translation in problem A. • Testing suggests evidence of effort, but the performance of your code under test falls short of what we believe is needed to foster success. • Testing reveals your work to be substantially incomplete, or shows serious deficiencies in meeting the problem specifications (serious fault). • Code cannot be tested because of loading errors, or no solutions were submitted (No Credit).</td>
</tr>
<tr>
<td>Performance</td>
<td>• Empty lists are distinguished from non-empty lists in constant time.</td>
<td></td>
<td>• Distinguishing an empty list from a non-empty list might take longer than constant time.</td>
</tr>
</tbody>
</table>

**Proofs and inference rules**
For your calculational proof, use induction correctly and exploit the laws that are proved in the book.

<table>
<thead>
<tr>
<th>Exemplary</th>
<th>Satisfactory</th>
<th>Must Improve</th>
</tr>
</thead>
<tbody>
<tr>
<td>• Proofs that involve predefined functions appeal to their definitions or to laws that are proved in the book. • Proofs that involve inductively defined structures, including lists and S-expressions, use structural induction exactly where needed.</td>
<td>• Proofs involve predefined functions but do not appeal to their definitions or to laws that are proved in the book. • Proofs that involve inductively defined structures, including lists and S-expressions, use structural induction, even if it may not always be needed.</td>
<td>• A proof that involves an inductively defined structure, like a list or an S-expression, does <strong>not</strong> use structural induction, but structural induction is needed.</td>
</tr>
</tbody>
</table>
{"Source-Url": "https://www.cs.tufts.edu/comp/105/homework/hofs.pdf", "len_cl100k_base": 6810, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 34771, "total-output-tokens": 7614, "length": "2e12", "weborganizer": {"__label__adult": 0.0006628036499023438, "__label__art_design": 0.0008020401000976562, "__label__crime_law": 0.0006680488586425781, "__label__education_jobs": 0.039642333984375, "__label__entertainment": 0.00017333030700683594, "__label__fashion_beauty": 0.0003740787506103515, "__label__finance_business": 0.00037288665771484375, "__label__food_dining": 0.0010137557983398438, "__label__games": 0.0019550323486328125, "__label__hardware": 0.00133514404296875, "__label__health": 0.0006632804870605469, "__label__history": 0.0005092620849609375, "__label__home_hobbies": 0.0003476142883300781, "__label__industrial": 0.0008211135864257812, "__label__literature": 0.000942230224609375, "__label__politics": 0.000446319580078125, "__label__religion": 0.0010976791381835938, "__label__science_tech": 0.0164337158203125, "__label__social_life": 0.0004153251647949219, "__label__software": 0.00569915771484375, "__label__software_dev": 0.92333984375, "__label__sports_fitness": 0.000789642333984375, "__label__transportation": 0.000995635986328125, "__label__travel": 0.0003342628479003906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28252, 0.01257]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28252, 0.52806]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28252, 0.89318]], "google_gemma-3-12b-it_contains_pii": [[0, 501, false], [501, 2932, null], [2932, 3940, null], [3940, 5373, null], [5373, 7488, null], [7488, 10222, null], [10222, 12074, null], [12074, 14688, null], [14688, 17310, null], [17310, 19691, null], [19691, 21697, null], [21697, 23872, null], [23872, 25598, null], [25598, 27522, null], [27522, 28252, null]], "google_gemma-3-12b-it_is_public_document": [[0, 501, true], [501, 2932, null], [2932, 3940, null], [3940, 5373, null], [5373, 7488, null], [7488, 10222, null], [10222, 12074, null], [12074, 14688, null], [14688, 17310, null], [17310, 19691, null], [19691, 21697, null], [21697, 23872, null], [23872, 25598, null], [25598, 27522, null], [27522, 28252, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 28252, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28252, null]], "pdf_page_numbers": [[0, 501, 1], [501, 2932, 2], [2932, 3940, 3], [3940, 5373, 4], [5373, 7488, 5], [7488, 10222, 6], [10222, 12074, 7], [12074, 14688, 8], [14688, 17310, 9], [17310, 19691, 10], [19691, 21697, 11], [21697, 23872, 12], [23872, 25598, 13], [25598, 
27522, 14], [27522, 28252, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28252, 0.03561]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
846fb263246b7b4cdab6703314d9d72354695d7c
AQP: An Open Modular Python Platform for Objective Speech and Audio Quality Metrics

Jack Geraghty
jack.geraghty@ucdconnect.ie
School of Computer Science, University College Dublin
Dublin, Ireland

Jiazheng Li
jiazheng.li@ucdconnect.ie
School of Computer Science, University College Dublin
Department of Computer Science, University of Warwick
Warwick, United Kingdom

Alessandro Ragano
alessandro.ragano@ucdconnect.ie
School of Computer Science, University College Dublin
Dublin, Ireland

Andrew Hines
andrew.hines@ucd.ie
School of Computer Science, University College Dublin
Dublin, Ireland

ABSTRACT
Audio quality assessment has been widely researched in the signal processing area. Full-reference objective metrics (e.g., POLQA, ViSQOL) have been developed to estimate the audio quality relying only on human rating experiments. To evaluate the audio quality of novel audio processing techniques, researchers constantly need to compare objective quality metrics. Testing different implementations of the same metric and evaluating new datasets are fundamental and ongoing iterative activities. In this paper, we present AQP – an open-source, node-based, light-weight Python pipeline for audio quality assessment. AQP allows researchers to test and compare objective quality metrics, helping to improve robustness, reproducibility and development speed. We introduce the platform, explain the motivations, and illustrate with examples how, using AQP, objective quality metrics can be (i) compared and benchmarked; (ii) prototyped and adapted in a modular fashion; (iii) visualised and checked for errors. The code has been shared on GitHub to encourage adoption and contributions from the community.

1 INTRODUCTION
Predictive models that can estimate the quality of speech and audio signals have been widely adopted to develop, evaluate and monitor multimedia applications. Originally developed for voice communication systems in the telecoms industry, they are now applied in the development of codecs [1], the evaluation of hearing aids [2] and monitoring and quality assurance for streaming services like Netflix or tele-meetings such as Google Meet [3], as well as in other audio domains such as speech enhancement and music. Models have been developed to supplement subjective quality assessment, where groups of people are asked to listen to sample audio and rate the quality, producing a mean opinion score (MOS) that is an aggregated average user rating.
The models are referred to as objective models and vary in design based on their application requirements. Some models have been standardised and widely adopted under ITU-T recommendations, e.g. PESQ [4] and POLQA [5] for speech and PEAQ [6] for audio. Others have been released as open-source tools, e.g. ViSQOL [7]. These models are full-reference, i.e., they use both the degraded signal and the reference signal. More recently, deep learning-based models that only use the degraded signal are gaining popularity [8, 9, 10].

The earlier models were developed in either C/C++ for speed or MATLAB for research prototyping and were based on monolithic codebases that were difficult to adapt or extend. Models such as NIQA [9], CDPM [11], SESA [8] and WARP-Q [12] have been developed using standard Python libraries. Python has matured and become widely adopted in both research and industry deployment, with packages and libraries available to implement many standard audio signal processing [13] and machine learning algorithms [14, 15] as well as data wrangling [16] and visualisation [17]. The trends towards open science and repeatable research have encouraged sharing of code and datasets on platforms such as GitHub1 and Zenodo2, allowing results in research publications to be easily reproduced and validated.

1 https://github.com
2 https://zenodo.org

Full-reference quality metrics consist of several high-level blocks where each block serves a certain scope. Some common blocks are: (1) preprocessing, such as normalizing the input level or aligning the reference and the degraded signal; (2) computing an internal representation of the clean and the degraded signal; (3) calculating a similarity score between the two representations; (4) using a regression model to map the similarity score to MOS. Often, quality metrics have been improved by replacing a certain block that was originally used. Replacing blocks is typically desired given that a certain block might work for some datasets but not for others. Also, more challenges arise because recent techniques such as generative speech codecs show new degradations that have to be evaluated with quality metrics [18, 19]. For instance, the first version of ViSQOL uses a Bark-based spectrogram [3], which was then replaced by a gammatone-based spectrogram [20]. Recently, another ViSQOL implementation has been proposed [7] where the replaced block is in the alignment stage instead of the spectrogram calculation stage. Similarly, other metrics such as PEAQ exhibited improved versions obtained by replacing some blocks [21].

Tools that allow researchers to quickly prototype, test, and compare different implementations of quality metrics are missing. These operations are generally time-consuming, and researchers will benefit from having a pipeline to speed up and improve the reproducibility of their work. In this paper, we present an Audio Quality Platform (AQP) software implementation addressing the stated problems – a centralised, comparable and reproducible pipeline for testing and comparison of predictive audio quality metrics. The platform is available on GitHub3. Our pipeline follows recent signal processing trends about the urgent need to provide tools for quick deployment and high reproducibility [22]. Since Python is widely used in the research community, we implemented this pipeline fully in Python, using only standard libraries such as librosa [13] and NumPy [23].
This design allows maximum compatibility for environment setup and maximum interpretability of the functions during research. Our pipeline allows researchers to modify, remove or add functionality inside each stage of the pipeline to produce new combinations of different evaluation and validation methods. The pipeline can be easily set up via a simple JSON script. Researchers can complete their research via our pipeline and release their experiment script for better community accessibility and reproducibility. Our pipeline also provides visualisation functionality, which allows printing out the structure of experiments as flowcharts. To the best of our knowledge, our pipeline is the first open-source, free-to-use platform based on standard Python libraries that can serve as a test harness and for model prototyping.

2 BACKGROUND
2.1 Speech and Audio Quality Models
Full-reference objective metrics compare a representation of the reference and degraded signal to evaluate the differences and estimate the perceived quality of the degraded signal [3, 24]. In this section, we briefly discuss PESQ and WARP-Q, which we use as a case study in this paper.

PESQ was originally designed to work with narrow-band telephone speech and narrow-band speech codecs [4] and was the first widely adopted quality metric. The pre-processing part emulates a telephone handset. Signal disturbances are computed and mapped to MOS. PESQ employs an asymmetry weighting that gives more importance to the added disturbances than to the attenuated parts in the internal representation of the degraded signal.

WARP-Q is a state-of-the-art full-reference metric that uses the dynamic time warping cost of MFCC speech representations [18]. WARP-Q was designed for waveform-matching, parametric and generative neural vocoder based codecs as well as channel and environmental noise. WARP-Q shows better performance in correlation, codec quality ranking and versatile general quality assessment than traditional metrics [18].

2.2 Software Tools for Reproducible Open Research
Structured machine learning platforms play an important role in open research nowadays, allowing the research community to develop experiments in a standard environment. Basic machine learning libraries like scikit-learn [25] have motivated fundamental machine learning researchers to carry out work easily with shared code bases. More recently, the development of advanced model-based libraries such as the Hugging Face Transformers [26] and OpenCV [27] has benefited researchers in natural language processing and computer vision. We see the value of similar standard platforms for audio quality research.

Python and MATLAB have been widely used as source programming languages for signal processing research and application development, as demonstrated by state-of-the-art models like WARP-Q [18] and ViSQOL [3]. Signal processing research can be carried out at a higher level thanks to the improvement of basic scientific libraries such as librosa [13], NumPy [23] and Matplotlib [17]. We see a Python-based speech and audio processing pipeline as an important contribution to open research in the area of speech and audio signal processing.

A software design pattern is a general, repeatable solution to a commonly occurring problem in software design [28, 29]. The modular components in our designed pipeline enable easy configuration and visualisation.
The design of each functional module also supports developers in testing various models in parallel. These advantages of our software structure derive from creational and structural design patterns [29]. A standardized input and output API allows developers to reuse components in the pipeline for duplicate tasks [29]. For example, the same processing methods can be applied to both reference and degraded signals. It also allows developers to recreate components in situations such as changing filter banks. Benefiting from these design patterns, the pipeline structure improves interpretability – researchers can easily compare the results from different components to debug or to try out various setups. It also increases maintainability – developers can keep different versions of experiments and easily validate and test them without structural redesign. The design also enables comparability – users can simply set up the pipeline to leverage parallelisable parts. Such a structure lets the research community get hands-on easily, supports basic-level understanding and reproduction of experiments, and also assists with higher-level customisation and re-creation of results.

The expectation of reproducible research has gained traction within the signal processing community [30]. Some ACM conferences, including MMSys, offer different badges that indicate the level of repeatability, reproducibility and replicability of a submission [31]. The badges are awarded upon a review of the submission. We anticipate a similar boom in the use of standardised pipeline structures for reproducibility, as recently seen in other domains that have adopted machine learning [32, 27].

3 AQP PLATFORM
The following section describes the architecture of the platform and how it is utilized to provide the high level of modularity required. Then a description of the core nodes of the pipeline and their use cases is given. These are: the LoopNode, for looping over data; the EncapsulationNode, for managing sub-pipelines; and the SinkNode, for managing the control flow of the pipeline. The section concludes with an overview of how the pipeline is executed, configured and visualized.

The core architecture of the pipeline is built upon the Directed Acyclic Graph (DAG) data structure. A DAG is an implementation of a directed graph, but with no directed cycles. The acyclic property of the graph is used to prevent infinite loops occurring during the execution of the pipeline; without it, explicit control flow logic would be required to prevent these loops from occurring. The node-based structure of the DAG is used to provide the high level of modularity required by the pipeline. Each node in the graph encapsulates some logic that will be performed on signal data. Functionality is encapsulated as nodes into logical blocks. By re-ordering, introducing or removing nodes from the pipeline, different configurations can be tested without having to modify the codebase.

Data is passed through the pipeline using a Python dictionary. When a node is being executed, it can retrieve data from the dictionary, add or update data already in the dictionary, or do nothing to it. This method increases the modularity of the pipeline, as no node requires knowledge of the rest of the pipeline. This also improves the testability of nodes: a node can be tested in isolation, as the only requirement is a dictionary, which can be mocked easily.
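As a concrete illustration of this dictionary-passing style, the sketch below shows a toy node and how it could be tested in isolation with a plain dictionary. The class name, key names, and constructor signature are illustrative assumptions, not AQP's actual API:

```
# Sketch only: a toy node in AQP's dictionary-passing style.
# Class and key names are hypothetical, not AQP's actual API.
import numpy as np


class GainNode:
    """Scales a signal stored in the shared result dictionary."""

    def __init__(self, id_, gain, output_key="scaled_signal"):
        self.id_ = id_
        self.type_ = "GainNode"   # kept for debugging, as described above
        self.gain = gain
        self.output_key = output_key
        self.children = []

    def execute(self, result):
        # Read from the shared dictionary and write the result back.
        result[self.output_key] = self.gain * result["signal"]
        return result             # non-None signals successful execution


# Testing in isolation only needs a plain dictionary:
node = GainNode("gain_1", gain=0.5)
out = node.execute({"signal": np.ones(4)})
assert np.allclose(out["scaled_signal"], 0.5)
```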
**Base Node**: All nodes in the pipeline inherit from the Node base class. This class contains the common properties for all nodes and indicates that all nodes must implement the `execute` function. All nodes have at least two required fields, an ID (for connecting a node with its children) and a type (for debugging purposes), as well as two optional parameters, `output_key` and `draw_options`. The `output_key` is used for assigning the calculated result back to the result dictionary. The `draw_options` are used to provide additional arguments for drawing a node when creating a DOT file representation of the graph.

**Loop Node**: Audio quality metrics often deal with multiple audio channels being active for a given signal. All of these channels need to be evaluated in some form before a final value can be associated with the signal. The functionality used is identical for each channel, so the ability to loop over channels (or some other variable) is required. The LoopNode provides this functionality. When creating a LoopNode, a sub-pipeline of nodes is defined; during the execution of that node, an iterable (e.g. a list of active channels) is retrieved from the result dictionary. The sub-pipeline is executed for each entry in the iterable, with the result of each iteration being stored in a separate dictionary. The final dictionary is then assigned back to the main result.

**Encapsulation Node**: This node allows nested configurations: a pipeline defined in its own JSON file can be used within another graph as part of a larger pipeline. This allows a large group of nodes, e.g. the main ViSQOL functionality, to be grouped into a single file and reused in other configurations without having to redefine it. When the execute function of the node is called, it executes all of the nodes contained within the pipeline of the EncapsulationNode. Apart from reusability, the EncapsulationNode also serves as a method of keeping pipeline configuration files succinct and well organised, which simplifies abstraction and visualisation.

**Sink Node**: Comparing different configurations in a single pipeline is represented as a node having multiple children, where each branch indicates a different configuration to evaluate. After each of these branches has been evaluated, the results of each branch need to be available in a single, shared node for further use. The SinkNode offers the functionality to collect a set of results from different branches and prevent further execution of the graph until all these results are collected. Until the SinkNode has seen the expected number of results, it returns None, indicating to the pipeline that a different branch should be evaluated before proceeding with the children of the SinkNode. Once it has seen the expected number of results, execution of the pipeline continues as normal.

**Pipeline Execution**: Nodes in the pipeline are executed in a depth-first manner. Each node maintains a list of all its children. Provided the execution of a node was successful, all of its children are executed next. The `execute` function of a node should return the result dictionary if execution was successful and None otherwise. If the return value is None, the current node's children are not evaluated; instead, the next node on the current level of the DAG is executed. Pseudocode for the execution of the pipeline is given in Algorithm 1.
```
Algorithm 1 Modified Depth-First Traversal
Require: node, result
stack ← []
stack.append(node)
while len(stack) > 0 do
    current_node ← stack.pop()
    children ← current_node.children
    if current_node.execute(result) is None then
        continue
    end if
    for i ← len(children) − 1 downto 0 do
        stack.append(children[i])
    end for
end while
```

**Configuration**: The pipeline is defined through JSON. Each node in the DAG corresponds to an entry in the JSON file. This entry contains all the necessary parameters for creating a node (children, output_key, etc.). An entry's key is used as the ID for the node it is describing. On startup the configuration file is loaded, deserialized into node objects, and the DAG is then constructed from the root node by following the children of each node. Defining the pipeline through JSON enables the testing of different pipeline configurations without having to touch the codebase of the pipeline.
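As an illustration only (the exact schema and the node type names below are assumptions based on the description above, not the published AQP schema), a small configuration might look like this, with each entry's key serving as the node ID:

```json
{
  "load_dataset": {
    "type": "LoadCsvNode",
    "output_key": "dataset",
    "children": ["warpq"]
  },
  "warpq": {
    "type": "EncapsulationNode",
    "pipeline": "configs/warpq.json",
    "output_key": "quality_score",
    "children": ["plot"]
  },
  "plot": {
    "type": "PlotNode",
    "children": []
  }
}
```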
**Visualization**: Having a visual representation of the pipeline is useful both for debugging and for conveying what steps are involved in the pipeline. AQP creates a DOT file for the pipeline using NetworkX [33]. This DOT file can then be used to generate an image of the pipeline graph.

4 CASE STUDY

As a case study, we re-implemented the WARP-Q metric within the pipeline, deployed PESQ alongside it, and evaluated a variant of WARP-Q where the node creating the signal features, Mel-frequency cepstral coefficients (MFCCs), was substituted with a Mel spectrogram node. This setup illustrates how AQP can be used as an end-to-end research platform for testing and comparing performance for different configurations. The evaluation dataset is loaded at the beginning of the pipeline from a CSV file; the quality metric algorithms are then run on the dataset entries, with the dataset being updated with the quality scores; finally, the results are graphed and a LaTeX table is generated with the results.

The block diagram seen in Figure 1 was used as a reference for re-implementing the WARP-Q metric in the pipeline's architecture. Three AQP nodes were implemented specifically for WARP-Q: the WarpQVadNode, for identifying the active voice patches; the MFCCNode, for computing the MFCCs and the cepstral mean and variance normalisation (CMVN) of the input signals; and the WarpQSDTWNode, which performs the SDTW algorithm and calculates the final quality score. These three nodes were then defined within an EncapsulationNode. By encapsulating the WARP-Q algorithm within a collection of nodes it is possible to add, remove or substitute nodes to evaluate the impact that such a change has on the final quality score produced by the algorithm, e.g. removing the VAD. Another benefit is that it is possible to test different variations of the parameters of a node against the base implementation of WARP-Q, or any other quality metric.

Algorithms that do not need to be changed for experimental purposes can quickly and easily be deployed within the pipeline for benchmarking. For example, the PESQ metric was originally implemented in C, and we used a Python wrapper of PESQ [34]. It was not separated into nodes for each of the algorithm's functional blocks but simply wrapped inside a single AQP node so that it could be deployed in the pipeline.

The pipeline used to perform this case study can be seen in Figure 2. In this study, the GenSpeech dataset, which consists of paths to audio signals, the codec used and the MOS for each test signal, is loaded into the pipeline in the form of a pandas dataframe. Pairs consisting of a reference signal path and a test signal path are then retrieved from this dataframe and loaded as NumPy arrays using librosa. The signals are then passed to both the WARP-Q nodes and the PESQ node, where the respective algorithms are run. When each algorithm is finished, the final result is added to the corresponding entry in the dataframe. Once all pairs of signals are processed, the dataframe is written to disk and then passed to the output nodes. The output nodes used in this configuration create a simple plot of the sample MOS vs the calculated MOS and a LaTeX table showing the Pearson and Spearman correlation between the sample MOS and the calculated MOS. The output nodes demonstrate that the pipeline can be used as an end-to-end platform, including data visualization and presentation (e.g. Figure 3). Someone using this platform can implement their own output nodes to match their intended use case.

Through a simple configuration file, the parameters of a block can be easily modified without accessing the internal code. Objective quality metrics have undergone continuous modification to adapt to new audio processing algorithms and applications. Using this platform, one can quickly prototype a new model with different configurations and test it against other models. For example, recent audio processing techniques that introduce new degradations, such as generative speech codecs, will benefit from the proposed platform.

5 CONCLUSION

The presented platform provides a lightweight, standardised method of taking input data and producing outputs for audio quality metric experiments. Future work will involve providing more audio quality metrics and datasets in the base package so researchers can easily test their own quality metrics and different configurations of existing ones (e.g. for spatial audio models [36] or no-reference models [37]). Further error checking for pipeline configurations will be introduced. While the examples presented did not rely on machine learning nodes, AQP will simplify the integration of machine learning models into objective audio quality models, allowing different training scenarios to be deployed and evaluated.

6 ACKNOWLEDGEMENTS

This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under Grant Numbers 18/CRT/6224, 17/RC/2289_P2 and 17/RC-PhD/3483. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.

REFERENCES
{"Source-Url": "http://wrap.warwick.ac.uk/168249/1/WRAP-AQP-open-modular-Python-platform-objective-speech-audio-quality-metrics-2022.pdf", "len_cl100k_base": 4856, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 20877, "total-output-tokens": 8309, "length": "2e12", "weborganizer": {"__label__adult": 0.0007309913635253906, "__label__art_design": 0.0007305145263671875, "__label__crime_law": 0.0005731582641601562, "__label__education_jobs": 0.0008797645568847656, "__label__entertainment": 0.00039505958557128906, "__label__fashion_beauty": 0.0002796649932861328, "__label__finance_business": 0.00021946430206298828, "__label__food_dining": 0.0005626678466796875, "__label__games": 0.000980377197265625, "__label__hardware": 0.0028018951416015625, "__label__health": 0.0015850067138671875, "__label__history": 0.0003345012664794922, "__label__home_hobbies": 0.00012791156768798828, "__label__industrial": 0.00066375732421875, "__label__literature": 0.0004394054412841797, "__label__politics": 0.0005316734313964844, "__label__religion": 0.000858306884765625, "__label__science_tech": 0.210205078125, "__label__social_life": 0.00022602081298828125, "__label__software": 0.01267242431640625, "__label__software_dev": 0.7626953125, "__label__sports_fitness": 0.0005369186401367188, "__label__transportation": 0.00066375732421875, "__label__travel": 0.00023293495178222656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33368, 0.02462]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33368, 0.36543]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33368, 0.87281]], "google_gemma-3-12b-it_contains_pii": [[0, 5879, false], [5879, 12619, null], [12619, 19058, null], [19058, 24028, null], [24028, 28426, null], [28426, 33368, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5879, true], [5879, 12619, null], [12619, 19058, null], [19058, 24028, null], [24028, 28426, null], [28426, 33368, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33368, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33368, null]], "pdf_page_numbers": [[0, 5879, 1], [5879, 12619, 2], [12619, 19058, 3], [19058, 24028, 4], [24028, 28426, 5], [28426, 33368, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33368, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
6eb8d795ce6cde94001c7d47f33a147af0ad1c58
A Domain-Specific Language for Animation
COS 441, Slides 8
Slide content credits: Paul Hudak's School of Expression; Ranjit Jhala (UCSD)

The last few weeks - the principles of functional programming:
• defining new functions: functional abstraction for code reuse
• defining new types: type abstraction
• higher-order programming: using functions as data
• the same algorithm over different data: parametric polymorphism
• related operations over different types: ad hoc polymorphism via type classes

This time:
- Bringing it all together: developing a domain-specific language for functional animation

SHAPES, REGIONS & PICTURES

```haskell
data Shape = Rectangle Side Side
           | Ellipse Radius Radius
           | RtTriangle Side Side
           | Polygon [Vertex]
     deriving (Show)

type Side   = Float
type Radius = Float
type Vertex = (Float, Float)

s1 = Rectangle 3 2
s2 = Ellipse 1 1.5
s3 = RtTriangle 3 2
s4 = Polygon [ (-2.5, 2.5)
             , (-3.0, 0.0)
             , (-1.7,-1.0)
             , (-1.1, 0.2)
             , (-1.5, 2.0) ]
```

Regions

- Regions are compositions of basic shapes:

```haskell
data Region = Shape Shape                -- primitive shape
            | Translate Vector Region    -- translated region
            | Scale Vector Region        -- scaled region
            | Complement Region          -- inverse of region
            | Region `Union` Region      -- union of regions
            | Region `Intersect` Region  -- intersection of regions
            | Region `Xor` Region        -- XOR of regions
            | Empty                      -- empty region
     deriving Show

type Vector = (Float, Float)

r1 = Shape s1
r2 = Shape s2
r3 = Shape s3
r4 = Shape s4

reg0 = (Complement r2) `Union` r4
reg1 = r3 `Union` (r1 `Intersect` reg0)
```

Regions

• Notice that regions are recursive data structures; consequently, they can be arbitrarily complex:

```haskell
step = Shape (Rectangle 50 50)

stairs k =
  if k <= 0 then Empty
  else Translate (k*20, k*20) (step `Union` stairs (k-1))
```

(`stairs 4`, shown in the slides as a rendered image, produces four squares stepping up the diagonal.)

Pictures

• Pictures add color to regions:

```haskell
data Picture = Region Color Region
             | Picture `Over` Picture
             | EmptyPic
     deriving Show

data Color = Red | Yellow | ...
```

• Some pictures:

```haskell
pic1 = Region Red reg1

r5 = Shape $ Rectangle 1 1
r6 = Shape $ Ellipse 0.5 0.5

reg2 = (Scale (2,2) r6) `Union` (Translate (2,1) r6)
                        `Union` (Translate (-2,0) r5)
pic2 = Region Yellow reg2

pic3 = pic2 `Over` pic1
```

The SOE libraries have implemented a draw function for us:

```haskell
type Title = String
draw :: Title -> Picture -> IO ()
```

Try it:

```haskell
main1 = draw "Picture 1" pic1
main2 = draw "Picture 2" pic2
main3 = draw "Picture 3" pic3
```

(go to demo)

FROM STATIC PICTURES TO DYNAMIC ANIMATIONS

Animation

• We create animations by exploiting persistence of vision and rendering a series of images:
1. Initialize image
2. Render image
3. Pause
4. Change image
5. Go to 1.
• At a low level, this is what will happen, but we'd like to build a library of combinators (i.e., functions) that can be reused and that allow us to build complex animations from simpler parts.

We are going to represent an animation using a function:

```haskell
type Animation a = Time -> a
type Time = Float
```

At every instant in time, the animation function generates an object of type `a`. Since the animation type is polymorphic, we'll be able to animate many different kinds of things:

```haskell
type PictureAnimation = Time -> Picture
type ShapeAnimation   = Time -> Shape
type StringAnimation  = Time -> String
```

A first animation

- Once you've thought of the right type, defining basic animations is easy:

```haskell
rubberBall :: Animation Shape
rubberBall = \t -> Ellipse (sin t) (cos t)
```

More Animations

```haskell
revolvingBall :: Animation Region
revolvingBall = \t -> Translate (sin t, cos t) ball
  where ball = Shape (Ellipse 0.2 0.2)
```

• Composition at work! By making animations functions, we can compose them using ordinary function application or function composition:

```haskell
rubberBall :: Animation Shape
rubberBall = \t -> Ellipse (sin t) (cos t)

revolvingBall :: Animation Region
revolvingBall = \t -> Translate (sin t, cos t) ball
  where ball = Shape (Ellipse 0.2 0.2)

planets :: Animation Picture
planets t = p1 `Over` p2
  where p1 = Region Red    $ Shape (rubberBall t)
        p2 = Region Yellow $ revolvingBall t
```

More Animations

• We can animate anything:

```haskell
ticker :: Animation String
ticker t = "The time is: " ++ show t
```

• An animation is any time-varying value.

A **Graphic** is a data structure representing a static picture that can be rendered efficiently. To render any animation, we need two things:

- A function to convert an **Animation a** to an **Animation Graphic**
- A function to render any **Animation Graphic**

The second is supplied by the SOE library:

```haskell
animate :: Title -> Animation Graphic -> IO ()
```

The first can be developed provided we have some basic **Graphic generators**:

```haskell
shapeToGraphic  :: Shape -> Graphic
regionToGraphic :: Region -> Graphic
picToGraphic    :: Picture -> Graphic
text            :: Point -> String -> Graphic
withColor       :: Color -> Graphic -> Graphic
```

A simple example:

```haskell
blueBall :: Animation Graphic
blueBall = withColor Blue . shapeToGraphic . rubberBall
```

Check: does it have the right type?

```haskell
rubberBall                                   :: Time -> Shape
shapeToGraphic                               :: Shape -> Graphic
withColor Blue                               :: Graphic -> Graphic
withColor Blue . shapeToGraphic . rubberBall :: Time -> Graphic
                                             -- = Animation Graphic
```

Let's try to run it

• Let's look at some more:

```haskell
main4 = animate "Shape"   $ withColor Blue . shapeToGraphic . rubberBall
main5 = animate "Text"    $ text (100,200) . ticker
main6 = animate "Region"  $ withColor Yellow . regionToGraphic . revolvingBall
main7 = animate "Picture" $ picToGraphic . planets
```
Implementing Animate

- Some details of the animator (see the script for more):

```haskell
animate title anim = runGraphics $ do
  w  <- openWindowEx title (Just (0,0)) (Just (xWin, yWin))
                     drawBufferedGraphic
  t0 <- timeGetTime
  animateLoop w t0 anim

animateLoop w t0 anim = do
  t <- timeGetTime
  let ft = intToFloat (fromInteger (toInteger (t - t0))) / 1000
  setGraphic w (anim ft)
  spaceCloseEx w $ animateLoop w t0 anim
```

- Set up the window
- Begin the animation loop with the initial time
- Compute the next time
- Draw the picture at the computed time
- Check for a termination signal
- Continue

GOING FURTHER: A DSL FOR ANIMATIONS

An Embedded DSL for Animations

• So far, we've built animations bottom-up with `Time -> a` functions
• But:
– we can't (easily) transform or modify existing animations
– we can't (easily) compose existing, fully-formed animations
– we don't treat animations as abstract objects

• The next step:
– Treat animations as abstract objects and define canonical transformers for them
– Work entirely at the level of animations, hiding the implementation details
– Our implementation might be called "a cool library" but ... we hide the underlying details so thoroughly we'll call the library an embedded, domain-specific language.
– Haskell, with its lightweight syntax and facilities for reuse and abstraction, is a terrific platform for developing new DSLs

DSL Design Strategy

• Choose primary abstract objects
– define special types to represent them
– in our case: a special abstract Behavior type

• Define operations over the abstract objects
– make the above abstract objects instances of well-chosen type classes where appropriate so we can use compact, intuitive notation for manipulating our objects
– in our case: make behaviors instances of type classes for graphical and numeric manipulation

```haskell
type Behavior a
type Coordinates = (Behavior Float, Behavior Float)

run    :: Behavior Picture -> IO ()
red    :: Behavior Color
ell    :: Behavior Radius -> Behavior Radius -> Behavior Shape
shape  :: Behavior Shape -> Behavior Region
reg    :: Behavior Color -> Behavior Region -> Behavior Picture
over   :: Behavior Picture -> Behavior Picture -> Behavior Picture
sin    :: Behavior Float -> Behavior Float
tx     :: Coordinates -> Behavior Region -> Behavior Region
timeTx :: Behavior Time -> Behavior a -> Behavior a
rewind :: Behavior a -> Behavior a

lift0 :: a -> Behavior a
lift1 :: (a -> b) -> Behavior a -> Behavior b
lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
```

Examples

• A stationary ball:

```haskell
demo1 = run $ reg yellow ballB
```

• Bouncing the ball:

```haskell
demo2 = run $ reg yellow $ tx (0, sin time) ballB
```

• Bouncing a triangle:

```haskell
demo3 = run $ reg yellow $ tx (0, sin time) pentaB
```

• Bouncing anything yellow:

```haskell
bounce b = reg yellow $ tx (0, sin time) b
```

Examples

• Colors can vary with time. Why stick with constant yellow?
```haskell
flash :: Behavior Color

demo4 = run $ reg flash $ tx (0, sin time) ballB
```

• Any animation can be composed with any other:

```haskell
demo5 = run $ a1 `over` a2
  where a1 = reg red    $ tx (0, sin time) ballB
        a2 = reg yellow $ tx (sin time, 0) pentaB
```

Examples

- We can define new kinds of motions and apply them to many different kinds of objects:

```haskell
turn :: (Deformable a) => Float -> a -> a

lift2      :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 turn :: (Deformable a) => Behavior Float -> Behavior a -> Behavior a

demo6 = run $ a1 `over` a2
  where a1    = reg red    $ tx (0, sin time) ballB
        a2    = reg yellow $ lift2 turn angle pentaB
        angle = pi * sin time
```

`angle` is a behavior. Notice the overloading: type classes!

Examples

- We can manipulate time itself, thereby delaying, slowing down or speeding up animations:

```haskell
-- a delayed animation composed with itself
demo7 = run $ a1 `over` a2
  where a1 = reg red $ tx (sin time, cos time) ballB
        a2 = timeTx (2 + time) a1

-- a fast-forwarded animation
demo8 = run $ a1 `over` a2
  where a1 = reg red $ tx (sin time, cos time) ballB
        a2 = timeTx (2 * time) a1
```

Notice the overloading: type classes!

Examples

• We can even put time in reverse and run an animation backwards. (Makes me wonder if we could do some DVR programming in Haskell ...)

```haskell
-- run backwards
demo9 = run $ a1 `over` a2
  where a1 = reg red $ tx (sin time, cos time) ballB
        a2 = timeTx (-1 * time) a1
```

BUILDING THE DSL

Whereas an animation was just a synonym for a function type, a behavior is abstract:

```haskell
newtype Behavior a = Beh (Time -> a)
```

There are a couple of reasons:
- we would like to control the invariants governing Behaviors
- we would like to hide implementation details from clients
- we will be using some type classes, and type classes don't work properly with type synonyms
  - why? Intuitively, because a synonym is completely interchangeable with its definition. Hence, we can't define a different behavior for the synonym than for its definition. (If we could, they wouldn't be interchangeable.)

Note: A newtype is a data type with just one constructor and no performance overhead for using it.

```haskell
newtype Behavior a = Beh (Time -> a)

animateB :: String -> Behavior Picture -> IO ()
animateB s (Beh f) = animate s (picToGraphic . f)

run = animateB "Animation Window"
```

Bootstrapping

- Recall the map function: it took an ordinary function and made it into a function over lists:

```haskell
map :: (a -> b) -> ([a] -> [b])
```

- One might say that map "lifts" an ordinary function up into the domain of list-processing functions.
- Likewise, we will want to "lift" ordinary functions up into the domain of behavior-processing functions:

```haskell
lift1 :: (a -> b) -> Behavior a -> Behavior b
lift1 f (Beh g) = Beh (\t -> f (g t))
```

- Lift is a way to include all of Haskell's powerful function-definition facilities within our newly developed DSL.

• lift1 works with single-argument functions. We may need to do heavier lifting:

```haskell
lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f (Beh a) (Beh b) = Beh $ \t -> f (a t) (b t)

lift3 :: (a -> b -> c -> d)
      -> Behavior a -> Behavior b -> Behavior c -> Behavior d
lift3 f (Beh a) (Beh b) (Beh c) = Beh $ \t -> f (a t) (b t) (c t)
```
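For instance (an example of our own in the spirit of these slides, not taken from the SOE notes; `avgB` and `demoAvg` are hypothetical names), lift2 turns any binary function into one over behaviors:

```haskell
-- average two time-varying Floats, point-wise at each instant
avgB :: Behavior Float -> Behavior Float -> Behavior Float
avgB = lift2 (\x y -> (x + y) / 2)

-- a ball that bobs along the average of two oscillations
demoAvg = run $ reg yellow $ tx (0, avgB (sin time) (cos time))
                                (shape (ell 0.2 0.2))
```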
• You can think of a constant, like the color Red, as a 0-argument function. We'll want to lift constants too:

```haskell
lift0 :: a -> Behavior a
lift0 x = Beh $ \t -> x   -- a constant function; it returns x at all times
```

Bootstrapping

• Since lists are so common in Haskell, we'll lift list-processing functions too. Explore the details in your spare time:

```haskell
liftXs :: ([t] -> a) -> [Behavior t] -> Behavior a
liftXs f bs = Beh (\t -> f (map (\(Beh b) -> b t) bs))
```

• But notice, even without looking at the code, how much information you get out of the type of the function:

```haskell
liftXs :: ([t] -> a) -> ([Behavior t] -> Behavior a)
```

• There's really only one reasonable thing that liftXs could do, given its type.

Numeric Behaviors

- Our examples involve managing coordinates, scaling factors and time warps; we need support for numeric behaviors.
- Let's define standard numeric operations over behaviors by making Behavior an instance of the Num class:

```haskell
instance Num a => Num (Behavior a) where
  (+)         = lift2 (+)
  (*)         = lift2 (*)
  negate      = lift1 negate
  abs         = lift1 abs
  signum      = lift1 signum
  fromInteger = lift0 . fromInteger
```

Numeric Behaviors

• Unsure what (+) on Behaviors does? Run through an example using computation by calculation:

```haskell
one  = Beh (\t -> 1)
time = Beh (\t -> t)

(+) time one
  = lift2 (+) time one
  = lift2 (+) (Beh (\t -> t)) (Beh (\t -> 1))
  = Beh (\t -> (+) ((\t -> t) t) ((\t -> 1) t))
  = Beh (\t -> (+) t 1)
  = Beh (\t -> t + 1)
```

It just adds the numbers from the same time instant!

```haskell
instance Floating a => Floating (Behavior a) where
  pi    = lift0 pi
  sqrt  = lift1 sqrt
  exp   = lift1 exp
  log   = lift1 log
  sin   = lift1 sin
  cos   = lift1 cos
  tan   = lift1 tan
  asin  = lift1 asin
  acos  = lift1 acos
  atan  = lift1 atan
  sinh  = lift1 sinh
  cosh  = lift1 cosh
  tanh  = lift1 tanh
  asinh = lift1 asinh
  acosh = lift1 acosh
  atanh = lift1 atanh
```

Once again, check our work by calculating:

```haskell
time :: Behavior Time
time = Beh (\t -> t)

sin time
  = lift1 sin time
  = lift1 sin (Beh (\t -> t))
  = Beh (\t -> sin ((\t -> t) t))
  = Beh (\t -> sin t)
```

Add in Operations for Colors, Pictures, Regions

```haskell
reg    = lift2 Region
shape  = lift1 Shape
poly   = liftXs Polygon
ell    = lift2 Ellipse

red    = lift0 Red
yellow = lift0 Yellow
green  = lift0 Green
blue   = lift0 Blue

tx (Beh a1, Beh a2) (Beh r) = Beh (\t -> Translate (a1 t, a2 t) (r t))
```

- OK, at this point, you've got to admit that whoever came up with the concept of "lifting" and the idea of defining the liftN functions was pretty smart -- they are getting a lot of play!
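The same recipe extends to the other Region constructors. As a sketch of our own (not from the slides; `sc` and `pulse` are hypothetical names), Scale can be lifted exactly the way tx lifts Translate:

```haskell
-- scale a time-varying region by a time-varying pair of factors
sc :: Coordinates -> Behavior Region -> Behavior Region
sc (Beh a1, Beh a2) (Beh r) = Beh (\t -> Scale (a1 t, a2 t) (r t))

-- a ball that "breathes", growing and shrinking over time
pulse = run $ reg yellow $ sc (2 + sin time, 2 + sin time)
                              (shape (ell 0.2 0.2))
```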
Creating Behavioral Shapes

• Our basic ball:

```haskell
ballB :: Behavior Region
ballB = shape (ell 0.2 0.2)
```

• Our basic pentagon:

```haskell
pentaB :: Behavior Region
pentaB = shape (poly (map lift0 vs))
  where vs = [ ( 0.0,  0.8)
             , ( 0.3, -0.5)
             , (-0.3, -0.5) ]
```

• Revolving balls and pentagons:

```haskell
revolveRegion = tx (sin time, cos time)

revBallB  = revolveRegion ballB
revPentaB = revolveRegion pentaB
```

Power Tools: Conditional Behaviors

• We can really start building a whole new language when we start adding conditional behaviors:

```haskell
cond :: Behavior Bool -> Behavior a -> Behavior a -> Behavior a
cond = lift3 $ \b x y -> if b then x else y
```

• Behavioral comparisons:

```haskell
(>*) = lift2 (>)
(<*) = lift2 (<)
```

• Alternating behaviors:

```haskell
flash  = cond (cos time >* 0) red   yellow
flash' = cond (cos time >* 0) green blue
```

Power Tools: Domain-Specific Type Classes

• Are there operations that apply to several different abstractions within our DSL?
• What about the concept of "over": one shape, region, picture or behavior "over" the top of another?

```haskell
class Combine a where
  empty :: a
  over  :: a -> a -> a
```

• Write functions to layer all elements of a list:

```haskell
overMany :: Combine a => [a] -> a
overMany = foldr over empty
```

Power Tools: Domain-Specific Type Classes

• Write instances of the new class for pictures and behaviors:

```haskell
instance Combine Picture where
  empty = EmptyPic
  over  = Over

instance Combine a => Combine (Behavior a) where
  empty = lift0 empty
  over  = lift2 over
```

- Play with the new type classes:

```haskell
overMany = foldr over empty

anim5 = animateB "Many Spheres" $ overMany [b1, b2, b3]
  where b1 = reg flash   $ tx ((sin time)-1, cos time) ballB
        b2 = reg flash'  $ tx ((sin time)+1, cos time) ballB
        b3 = reg flash'' $ tx (2 * sin time, cos time) pentaB
```

More Demos

- Check out the use of conditional animations and the new type classes in these programs: anim2, anim3, anim4, ..., anim9
- Read through the rest of the animation notes

SUMMARY!

Summary

• Defining a new embedded DSL involves:
– defining **key abstract types** to be used by the client programs
– defining **reusable operations** over those abstract types

• Along the way, we saw:
– heavy use of **functions as data**
– the idea of **lifting** a Haskell function to a new abstract domain
– the use of **type classes**
  • new instances for existing classes: related operations on new types
  • new classes: new domain-specific operations

• Historical note: Programming language researchers from the 1990s onward spent years defining and refining the basic principles of DSL design and looking for the right reusable, modular abstractions. And the research continues. Moreover, getting the specifics right is a fun, ongoing challenge in many domains.
{"Source-Url": "http://www.cs.princeton.edu/~dpw/cos441-11/notes/slides08-animation.pdf", "len_cl100k_base": 5804, "olmocr-version": "0.1.50", "pdf-total-pages": 49, "total-fallback-pages": 0, "total-input-tokens": 73892, "total-output-tokens": 7841, "length": "2e12", "weborganizer": {"__label__adult": 0.0003938674926757813, "__label__art_design": 0.000476837158203125, "__label__crime_law": 0.00021016597747802737, "__label__education_jobs": 0.0006518363952636719, "__label__entertainment": 7.420778274536133e-05, "__label__fashion_beauty": 0.00012028217315673828, "__label__finance_business": 0.0001500844955444336, "__label__food_dining": 0.0003893375396728515, "__label__games": 0.0004124641418457031, "__label__hardware": 0.0005936622619628906, "__label__health": 0.00032830238342285156, "__label__history": 0.0002005100250244141, "__label__home_hobbies": 8.314847946166992e-05, "__label__industrial": 0.0003025531768798828, "__label__literature": 0.00015807151794433594, "__label__politics": 0.0002219676971435547, "__label__religion": 0.0003840923309326172, "__label__science_tech": 0.0031375885009765625, "__label__social_life": 9.632110595703124e-05, "__label__software": 0.002819061279296875, "__label__software_dev": 0.98779296875, "__label__sports_fitness": 0.0003268718719482422, "__label__transportation": 0.0004134178161621094, "__label__travel": 0.00022482872009277344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20060, 0.01874]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20060, 0.92347]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20060, 0.74469]], "google_gemma-3-12b-it_contains_pii": [[0, 137, false], [137, 627, null], [627, 654, null], [654, 845, null], [845, 1245, null], [1245, 1776, null], [1776, 2366, null], [2366, 2777, null], [2777, 3207, null], [3207, 3449, null], [3449, 3492, null], [3492, 3870, null], [3870, 4405, null], [4405, 4597, null], [4597, 4738, null], [4738, 5251, null], [5251, 5405, null], [5405, 6059, null], [6059, 6540, null], [6540, 7014, null], [7014, 7604, null], [7604, 7640, null], [7640, 8425, null], [8425, 8880, null], [8880, 9567, null], [9567, 9973, null], [9973, 10291, null], [10291, 10782, null], [10782, 11248, null], [11248, 11523, null], [11523, 11540, null], [11540, 12237, null], [12237, 12406, null], [12406, 13123, null], [13123, 13686, null], [13686, 14182, null], [14182, 14609, null], [14609, 15217, null], [15217, 15580, null], [15580, 16298, null], [16298, 16766, null], [16766, 17423, null], [17423, 17881, null], [17881, 18311, null], [18311, 18643, null], [18643, 19079, null], [19079, 19270, null], [19270, 19279, null], [19279, 20060, null]], "google_gemma-3-12b-it_is_public_document": [[0, 137, true], [137, 627, null], [627, 654, null], [654, 845, null], [845, 1245, null], [1245, 1776, null], [1776, 2366, null], [2366, 2777, null], [2777, 3207, null], [3207, 3449, null], [3449, 3492, null], [3492, 3870, null], [3870, 4405, null], [4405, 4597, null], [4597, 4738, null], [4738, 5251, null], [5251, 5405, null], [5405, 6059, null], [6059, 6540, null], [6540, 7014, null], [7014, 7604, null], [7604, 7640, null], [7640, 8425, null], [8425, 8880, null], [8880, 9567, null], [9567, 9973, null], [9973, 10291, null], [10291, 10782, null], [10782, 11248, null], [11248, 11523, null], [11523, 11540, null], [11540, 12237, null], [12237, 12406, null], [12406, 13123, null], [13123, 13686, null], [13686, 14182, null], [14182, 
14609, null], [14609, 15217, null], [15217, 15580, null], [15580, 16298, null], [16298, 16766, null], [16766, 17423, null], [17423, 17881, null], [17881, 18311, null], [18311, 18643, null], [18643, 19079, null], [19079, 19270, null], [19270, 19279, null], [19279, 20060, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20060, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20060, null]], "pdf_page_numbers": [[0, 137, 1], [137, 627, 2], [627, 654, 3], [654, 845, 4], [845, 1245, 5], [1245, 1776, 6], [1776, 2366, 7], [2366, 2777, 8], [2777, 3207, 9], [3207, 3449, 10], [3449, 3492, 11], [3492, 3870, 12], [3870, 4405, 13], [4405, 4597, 14], [4597, 4738, 15], [4738, 5251, 16], [5251, 5405, 17], [5405, 6059, 18], [6059, 6540, 19], [6540, 7014, 20], [7014, 7604, 21], [7604, 7640, 22], [7640, 8425, 23], [8425, 8880, 24], [8880, 9567, 25], [9567, 9973, 26], [9973, 10291, 27], [10291, 10782, 28], [10782, 11248, 29], [11248, 11523, 30], [11523, 11540, 31], [11540, 12237, 32], [12237, 12406, 33], [12406, 13123, 34], [13123, 13686, 35], [13686, 14182, 36], [14182, 14609, 37], [14609, 15217, 38], [15217, 15580, 39], [15580, 16298, 40], [16298, 16766, 41], [16766, 17423, 42], [17423, 17881, 43], [17881, 18311, 44], [18311, 18643, 45], [18643, 19079, 46], [19079, 19270, 47], [19270, 19279, 48], [19279, 20060, 49]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20060, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
56403494fbd30be29779d1a67cb9cdd4e97f6482
Secure Coding. Practical steps to defend your web apps.

Copyright SANS Institute. Author Retains Full Rights.

We are seeing nothing less than an evolutionary shift as security infrastructure moves to software-defined models that improve speed and scale, and afford enterprise IT more agility and capabilities than ever before. Application development and deployment are driving this shift, and as the pace of development increases, organizations have a real need to ensure application security is embedded in all phases of the development and deployment life cycle, as well as in the cloud during operations.

Much like other areas of security, the responsibility for application security varies widely in the cloud, depending on the model in place. In a software-as-a-service (SaaS) model, the provider is entirely responsible for application security in almost every case. With a platform-as-a-service (PaaS) model, the provider supplies the underlying systems and templates, so it has a significant degree of control and responsibility, although any applications developed by the consumer are necessarily the consumer's own responsibility, and that extends to their security. With an infrastructure-as-a-service (IaaS) model, entire workloads and their contents (including application components) are the responsibility of the consumer.

In this paper, we delve into the changing nature of application development and security as organizations are building and deploying applications for the cloud. We'll cover the various phases of a modern application pipeline and discuss some of the security controls that organizations should consider implementing in each. We'll also touch on a number of other critical areas such as privilege management, containers and orchestration, and automation.

How the SDLC Is Changing

The software development life cycle (SDLC) has moved to a methodology that prioritizes collaboration and more frequent (yet smaller) updates to application stacks. Standards for code quality and security, as well as application workload configuration, should be defined and published so that all teams have something to measure against throughout the entire application life cycle. Ideally, organizations will lock down cloud workloads as much as possible, running only the necessary services. They should also revisit configuration requirements to ensure that any cloud-based infrastructure is resilient.

To shift toward a more collaborative culture, security teams need to integrate with the developers responsible for promoting code to cloud-based applications. Security teams can impress upon development and operations that they bring a series of tests and "quality conditions" to bear on any production code push without slowing the process. Security teams should work with quality assurance (QA) and development to define certain parameters and key qualifiers (such as bug count and severity) that need to be met before any code is promoted.

In addition, security teams need to determine which tools they can use to integrate into the application pipeline. They also need to identify areas and controls that may need to be updated or adapted to work in a Continuous Integration and/or Continuous Delivery model (covered in the next section).
It is likely that new standards for many security prevention, detection and response capabilities should be revisited as well. Examples of these areas include encryption, privileged user management, network security access controls, event management, logging policies and incident response strategy. Once initial processes, policies and standards have been defined and agreed upon, the security team should focus on automation and seamless integration of controls and processes at all stages of the deployment pipeline.

The Modern CI/CD Pipeline

Many organizations are adopting Continuous Integration (CI) and Continuous Delivery (CD) for their cloud application pipelines. CI is often the most feasible part of the application development life cycle to be targeted by a team looking to speed up and implement more collaborative development practices. With CI, all developers have their code regularly integrated into a common mainline code base. This practice helps to prevent isolation of code with individual developers and can also lead to more effective control over code in a central repository. CD is usually exhibited through small, incremental and frequent code pushes (often to stage or test environments), as opposed to the more traditional way of pushing code as large releases to production every few weeks or months. Modern development practices (e.g., Scrum, Kanban, Crystal) often release code more frequently than older models (e.g., waterfall) in an SDLC. CD means you deliver code to production in an automated pipeline, which is less common in traditional enterprises.

Modern cloud application pipelines strive for a number of goals and focal areas:

- **Automated provisioning**—The more automated the provisioning of resources and assets is, the more rapidly the SDLC and operations model can operate.
- **No-downtime deployments**—Because cloud services are based on service-oriented costing models, downtime is less acceptable.
- **Monitoring**—Constant monitoring and vigilance of code and operations help to streamline and improve quality immensely.
- **Rapid testing and updates**—The sooner code flaws can be detected, the less impact they'll have in a production environment. Rapid and almost constant testing needs to occur for this to happen.
- **Automated builds and testing**—More automation in the testing and QA processes will help to speed up all activities and improve delivery times.

Protection for application workloads requires a dedicated commitment to security at many levels of any organization. A sound governance model that includes collaborative discussions about code quality, system builds, architecture and network controls, identity and access management, and data security is critically important to developing the standards for controls and security posture (mentioned earlier). Ideally, the following types of roles will be a part of any cloud application security and development model:

- Application development teams
- Cloud architecture and engineering teams
- Security architecture and operations teams
- IT infrastructure teams (server engineering, database management and more)
- Compliance and legal teams (where appropriate)
- Business unit management (where appropriate)

Make sure that your security teams discuss:

- **Standard and planned coding and release cycles**—If the development teams plan on doing CI, how will the code be centrally stored and managed? Security teams should focus on code scrutiny and auditing the code storage/management platform and tools.
- **Tools in use for development, testing and deployment**—Automated testing suites are ideal, but security teams need to understand the tools the development teams plan to use so that they can become familiar with platform security, logging and privilege/credential management.
- **How security can best integrate with the teams**—Ideally, security teams will have some understanding of development practices, and will know how to write test scripts and infrastructure-as-code templates where applicable.
- **Expected standards and behaviors**—If there are no standards to adhere to, what will the team seek to enforce? Think about standards for secure coding, configuration benchmarks (like CIS and others) and vulnerability scan results (what is acceptable to be released).

In addition, security teams should define policies for components, networks and architecture where they can. In other words, they should ask: Where can security create policies that are embedded and applied automatically? Examples might include:

- Configurations for instances and images used in development and production
- App deployment and automation security
- Expected and accepted standards (What does a successful and secure component or deployment look like? Start with the end in mind to ensure you have a target goal.)

One additional area of IT that will likely need to adapt is change management. In traditional IT environments, change requests are often created for weekly or biweekly change windows, where IT staff make changes during the scheduled times (usually off-hours). In a fast-moving cloud application environment, much more rapid changes will need to be allowed. Teams will usually need to adapt by deciding ahead of time which severity of changes will be allowed to occur without prior approval or review versus those that will need more attention. Collaboration platforms can also be useful for enabling more rapid discussions about proposed changes as needed.

Security in the CI/CD World

When integrating into a cloud-focused application development model, security teams need to focus on the following:

- **Code security**—How is code being scanned for vulnerabilities?
- **Code repositories**—How is code being checked in and checked out, and by whom?
- **Automation tools**—What tools are in use to automate builds, deployments, etc.? How can security integrate with these?
- **Orchestration platforms**—How are orchestration tools being used to coordinate and automate infrastructure and cloud components?
- **Gateways and network connectivity**—How can the teams ensure secure connectivity to the cloud for deployments?

Authentication/authorization and privileged user monitoring and management are critical, too. While this sounds obvious, cloud application development pipelines tend to include high-privilege users doing lots of activities, and overallocation of privileges can quickly become an issue without oversight and planning.

When planning for cloud application development, security teams first need to work with application development groups to perform threat modeling and risk assessment for the deployment types that they envision. By performing a threat modeling exercise, security and development teams can better understand the types and sensitivity levels of the assets they protect, how to manage and monitor them in the cloud, and the most likely threat vectors for those assets.
The type of data that is stored, transmitted and processed makes a difference when assessing the risk of systems and applications in the cloud. Some data types dictate specific security controls, as well as provisioning into compliant cloud provider environments. Risk assessment and analysis practices should be updated to continually review the following:

- Cloud provider security controls, capabilities and compliance status
- Internal development and orchestration tools and platforms
- Operations management and monitoring tools
- Security tools and controls, both on premises and in the cloud

After risk reviews, and keeping the shared responsibility model in mind (meaning cloud providers and consumers share responsibility for security at different layers of the stack), security teams should have a better understanding of what controls they currently have, what controls they need to modify to successfully operate in the cloud, and what the most pressing concerns are (as they change). It's almost a guarantee that some security controls—tools, processes, policies, etc.—won't operate the way they did on premises, or won't be available in cloud service provider environments in the same format or with the same capabilities.

Security for the CI/CD Pipeline

In the modern CI/CD pipeline for cloud application development and deployment, one of the most pressing needs for all teams is automation, far beyond what we've traditionally seen in enterprise data centers. With cloud deployment moving faster than ever, security and development teams need to automate static code security scans, dynamic platform build and QA application and vulnerability tests. They also need to automate most (if not all) configuration and operations tasks, including web application firewall (WAF) deployments and network access controls (NACs).

For cloud deployments, all application development teams, as well as security teams, also need to embrace API integration and use. Providers like Amazon Web Services (AWS) operate a completely software-based infrastructure that may offer sophisticated APIs for creating workloads, adding security controls around those workloads, updating and integrating new code and images for containers, and much more. In keeping with the theme of automation, scripted and programmatic methods of automating deployments need to make heavy use of provider APIs.1

1 This paper mentions the names of products and services to provide real-life examples of how security tools can be used. The use of these examples is not an endorsement of any product or service.

Security teams have a number of security controls and areas of emphasis to consider for all phases of the application development and deployment pipelines, as shown in Figure 1 and discussed in the following sections.

Code/Develop

Ideally, your organization already follows secure coding practices. Security and development teams need to discuss standards for languages and frameworks to make sure risk is acceptable before deployment. This objective can be a tall order, and secure coding and development practices are still not all that commonplace today. Look into static code analysis tools, and ensure the code is secured within repositories:

- Are check-in and check-out procedures defined?
- Do solid role-based access controls exist?

Cloud providers often have options available for code storage and management that include authentication with strong identity management and robust logging/tracking of activity.
AWS CodeCommit is a fully managed source control service that hosts secure Git-based repositories. It encrypts all files both in transit and at rest, integrates with AWS Identity and Access Management (IAM) for controlling privileges and access to code stores, and logs all activity in AWS CloudTrail. Additionally, AWS CodeCommit has a wide range of APIs available that can enable automation and integration with third-party static code analysis tools for code analysis and review by security teams. Code can be automatically scanned upon check-in, and bug/vulnerability reports can be sent automatically to the appropriate teams.

Build

Building code and workload stacks for cloud applications should incorporate automated and intelligent security controls as well. This stage should include:

- Validated code
- An approved build architecture and controls
- Automated build testing for compiled code

Above and beyond the aforementioned automation and security controls and processes, we need automated reporting that goes to the proper parties for review. This is what will ultimately contribute to a more effective vulnerability management program across the environment. Much like the previous phase of development (code/develop), the build phase can often be securely implemented within cloud provider environments. AWS CodeBuild is a fully managed CI service that compiles source code, runs tests and produces software packages that are ready to deploy. Managing encryption of build artifacts is critical, and AWS CodeBuild integrates with AWS Key Management Service (KMS). AWS CodeBuild also integrates with AWS IAM for control over privileges to builds and compiled code, and all activity is also logged to AWS CloudTrail.

Package

Packaging is the phase of application development when the build is updated with additional software packages, some of which may be open source or from in-house repositories. It is important for development and security teams to audit open source modules for flaws, then discuss methods to protect code repositories automatically. A regular schedule for threat and vulnerability updates should be agreed upon with the development and operations teams and incorporated into defined processes.

Some traditional vulnerability scanning vendors have adapted their products to work within cloud provider environments, often relying on APIs to avoid manual requests to perform more intrusive scans on a scheduled or ad hoc basis. Another option is to rely on host-based agents that can scan their respective virtual machines continually or as needed. Ideally, systems will be scanned on a continuous basis, with any vulnerabilities reported in real or near real time. AWS Systems Manager can be used to manage package repositories and secure build images with up-to-date patches and libraries. Tools like Trend Micro Deep Security can help to automate application protection and package validation for workloads, too.

Test

The testing phase is one that can be highly automated. Consider both static and dynamic tools, depending on builds. Key points for security teams during the testing phase are:

- Run security testing that's as seamless as possible (avoid interfering with QA if you can help it).
- Define test cases and tools.
- Define acceptable outcomes that meet policy.
- Automate tools and teach developers/QA engineers to run them.

The last point is a crucial one—security teams need to hand off tools to the application developers wherever possible and not insert themselves into every process.
Involvement is key, but running test tools is something the application teams can do. Security should perform pen tests and continuous monitoring activities regularly once policies and standards are defined. Using open source build testing tools like Test Kitchen and Vagrant can simplify internal policy validation before you push changes, and on an ongoing basis. To coordinate penetration tests and routine checks that validate the policies' effectiveness, ask:

- Are only required ports open?
- Are credentials secured?
- Are encryption keys secured?
- Are privileges assigned properly?

Really, any specific elements of your configuration standard or expected posture should be continually validated and assessed using automated orchestration tools and platforms. Many third-party dynamic application scanning and pen testing service providers have fully integrated into the cloud. These tests can be run upon build check-in, image update or manually as needed, with fully automated reporting sent to the right teams.

**Deploy/Upgrade**

In this phase, security teams are focused on:

- **Documentation**—Note any bugs that are outstanding; document plans to fix them and when.
- **Communication**—Coordinate with development and operations teams to instantiate any controls needed for remediation or stopgaps.
- **Life cycle**—Ensure an approved policy for bug remediation is in place and monitored for future release cycles.

Even though you'll still have bugs, make sure to fix any of a certain severity before you push applications and systems out the door. Deployment involves more work on the operations side. Ideally, controlled and automated deployments will be coordinated and controlled by operations with input from the application development teams involved. Where does security fit?

- Nothing new is added or changed once approved builds are ready.
- Deployment is done to the appropriate location/endpoints.
- Deployment is performed over a secure channel for cloud (TLS/SSH).
- Checks exist to ensure a failed deployment rolls back.

It is critical for security teams to be invested and involved in the deployment stage. Secure network channels should be established for any deployment activities, which likely involves the use of dedicated circuits like AWS Direct Connect, VPN tunnels using IPSec and/or secure certificate-based HTTPS with strong cryptographic TLS implementations. Image validation—which will heavily rely on automation and a combination of vulnerability scanning and host-based agents that can validate all libraries, binaries and configuration elements used in the application workloads—should also take place at this phase. Orchestration engines are useful for some of these tasks, as are cloud-native tools like AWS OpsWorks that can reliably and securely handle the configuration and assessment of application images.

Operate

This final stage primarily focuses on protection of applications with tools like NACs and WAFs, as well as monitoring, logging and alerting. Define security use cases for production operations by answering the following questions:

- What events should trigger alerts?
- What events should trigger automated remediation?
- What event severities should be in place?
- What controls are needed to properly secure the environment?

For starters, teams should define deployment attributes that can be monitored continuously. Examples of quick wins for monitoring include the following:

- Types of instances allowed to be deployed (size and build)
- Image expected for deployment
- Location/source of deployment (such as IP address or account/subscription)
- IAM or other user invoked in operations

These attributes should all be known and relatively inflexible, and can easily be used as simple trigger points for alerting or even automated rollback or preventive actions. For example, if an instance type of `m1.small` is deployed, and the only approved type is `t2.micro`, this trigger could cause the workload to shut down entirely.
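A guardrail of this kind is straightforward to automate. The sketch below is our own illustration using the boto3 AWS SDK, not an example from this paper; the approved-type list simply mirrors the scenario above.

```python
import boto3

APPROVED_TYPES = {"t2.micro"}  # assumption: the only approved type, per the example

ec2 = boto3.client("ec2")

def enforce_instance_types():
    """Stop any running EC2 instance whose type is not on the approved list."""
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance["InstanceType"] not in APPROVED_TYPES:
                    # Automated preventive action; alerting could go here instead.
                    ec2.stop_instances(InstanceIds=[instance["InstanceId"]])

if __name__ == "__main__":
    enforce_instance_types()
```

In practice, such a check would run on a schedule or be triggered by deployment events, with results logged for the security team.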
Cloud-native or third-party web application firewalls like AWS WAF can easily be set up to block malicious application attacks like SQL injection, cross-site scripting (XSS) and others. In addition, they can perform manual or automated blocking of IP addresses based on threat intelligence that incorporates reputation analysis. WAFs can generate detailed logs, too, which security teams can then stream back to a central analysis engine such as a SIEM platform.

## Best Practices

To summarize, Table 1 describes the key security areas of focus in the modern cloud application development pipeline.

<table>
<thead>
<tr>
<th>Phase</th>
<th>Focus</th>
</tr>
</thead>
<tbody>
<tr>
<td>Code/Develop</td>
<td>Look for static code analysis tools that are in place and performing (ideally) automated scans of checked-in code. Reports from these scans should be sent to stakeholders, including security teams and/or application developers.</td>
</tr>
<tr>
<td>Build</td>
<td>Tools like Jenkins can be used to create builds, and they often have many plug-ins and local controls that should be tuned. What types of builds are allowed, and where are the images stored? A secure location where image security and integrity are controlled is paramount for this phase.</td>
</tr>
<tr>
<td>Package</td>
<td>Code will need to be packaged for installation on builds, and this should be done through automated tools that also have the appropriate permissions and access controls (keys to check out code, for example).</td>
</tr>
<tr>
<td>Test</td>
<td>The test phase should include Dynamic Application Security Testing (DAST) tools, as well as (possibly) traditional network vulnerability scans and various flavors of pen tests.</td>
</tr>
<tr>
<td>Deploy/Upgrade</td>
<td>Only approved builds with packages and software that have passed testing should be deployed, over a secure channel.</td>
</tr>
<tr>
<td>Operate</td>
<td>Now we're in operations, where we should have "guardrails" set up, such as appropriate account/subscription separation, IAM policies, network controls and logging/monitoring.</td>
</tr>
</tbody>
</table>

## Additional Development Security Concepts for Cloud

Along with core security controls and practices in each major phase of a modern development pipeline, some additional topics and concepts should be in place. Think of these as overarching concepts that apply throughout the entire life cycle. Figure 2 illustrates these concepts, which we cover in the following sections.

### Secrets Management

A critical aspect of managing security in a cloud environment is to carefully limit and control the accounts and privileges assigned to resources. All users, groups, roles and privileges should be carefully discussed and assigned to resources on a need-to-know basis. The best practice of least privilege access should be applied whenever possible. Any privileged accounts (such as root and local administrator accounts) should be monitored closely, if not disabled completely or reserved for break-glass procedures. In addition to privilege management in configuration definitions, application development teams need to ensure that no sensitive material, such as encryption keys or credentials, is stored in definition files, on exposed systems or in code that could be exposed. As encryption and data protection strategies are increasingly automated along with other development activities, it is critical to make sure the proverbial keys to the kingdom are protected at all times. In the cloud, this can be accomplished with tools like AWS Key Management Service (KMS) and AWS Secrets Manager.
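As a minimal illustration of keeping credentials out of code and definition files, the sketch below fetches a database credential from AWS Secrets Manager at runtime. The secret name and its JSON layout are hypothetical; only the `get_secret_value` call is the real API.

```python
import json
import boto3

def get_db_credentials(secret_name="prod/widget-app/db"):  # hypothetical secret name
    """Fetch a credential from AWS Secrets Manager instead of hardcoding it."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    # SecretString holds whatever was stored; here we assume a JSON document
    # of the form {"username": "...", "password": "..."}.
    return json.loads(response["SecretString"])

creds = get_db_credentials()
# connect_to_database(user=creds["username"], password=creds["password"])  # illustrative
```

With this pattern, rotating the credential is a Secrets Manager operation and requires no code change or redeployment.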
### API Security

As mentioned earlier, APIs are integral to building a robust and automated development pipeline. The security posture of APIs should be documented by providers, and all APIs should be strongly controlled through IAM policies. Use of APIs should be carefully monitored, too, with full logging to AWS CloudTrail and other logging engines.

### Privilege Management and IAM

Strong privilege management is a necessity in fast-moving application pipelines. Integration with secrets management tools and a granular IAM policy engine like AWS IAM is crucial, along with federation capabilities and integration with directory services. Security teams should help define the appropriate least privilege access models needed for all stages of application development and deployment, and then implement them in a centralized tool or service whenever possible. A fragmented privilege management and IAM implementation strategy often leads to poor operational oversight of users, groups and permissions, so a single policy engine should be used if at all possible.

In addition to these overarching technology concepts, some newer technologies are also being heavily used in application development and deployments today, including containers and serverless applications, discussed next.

### Containers and Container Management/Orchestration

Containers are rapidly becoming a common means of quickly deploying application workloads in both internal and cloud environments. Containers run on a shared OS, and both the runtime container image and the underlying OS platform need to be secured and maintained much like the other images described earlier. Having a secure repository for container images like Amazon Elastic Container Registry (ECR), as well as orchestration tools for starting, stopping and managing container deployments securely, such as Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS), is important for enterprises using containers in the cloud. Encryption and IAM controls for images, as well as strong logging of all activities, should be priorities; a sketch of an automated image check follows.
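For example, a pipeline step could gate deployment on Amazon ECR's image scan results. This is a minimal sketch assuming Python with boto3; the repository name, tag and severity policy are illustrative.

```python
import boto3

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}  # hypothetical gating policy

def image_is_deployable(repository="widget-app", tag="latest"):
    """Return False if the ECR image scan found blocking vulnerabilities."""
    ecr = boto3.client("ecr")
    findings = ecr.describe_image_scan_findings(
        repositoryName=repository,
        imageId={"imageTag": tag},
    )
    counts = findings["imageScanFindings"].get("findingSeverityCounts", {})
    blocked = {s: n for s, n in counts.items() if s in BLOCKING_SEVERITIES}
    if blocked:
        print(f"Blocking deployment of {repository}:{tag}: {blocked}")
        return False
    return True
```

A deployment job would simply refuse to push an image for which this gate returns False, logging the result for the security team.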
### Serverless Applications and Security

A final type of technology that many application development teams are employing is serverless, which offloads the entire workload (container and OS instance) to the provider's backplane, allowing developers to create microservices applications that only require application code to be uploaded and operated within the cloud provider environment. Serverless security should involve static code review (numerous third-party providers can integrate into serverless environments like AWS Lambda to scan the code), privilege and permission control over all serverless applications with IAM, and complete logging of all serverless application updates and executions using tools like AWS CloudTrail.

## Use Case

For modern hybrid application development pipelines, security needs to be integrated in a number of places. Imagine a fictional organization, ACME Corporation, that needs to integrate security into its hybrid cloud application pipelines, with both on-premises resources and resources running in AWS. Internal code repositories are synchronized from on-premises code repository tools to AWS CodeCommit across an AWS Direct Connect channel, where all code is encrypted and protected with strong IAM policies that restrict code access and updates to a limited team of developers. All code updates, check-ins and check-outs are logged in AWS CloudTrail. A third-party static code analysis tool is integrated into AWS and automatically scans all code that is updated and checked in. Reports are automatically sent to security and development team members, who review the criticality of discovered bugs for remediation.

AWS CloudFormation templates are used to create builds with approved Amazon Machine Images (AMIs) and container images stored in Amazon ECR, which is also carefully controlled through IAM policies. In the build and update phases, a dynamic vulnerability scanning platform with agents and network scanning capabilities is integrated to scan all application builds for libraries, binaries and OS configurations to ensure no vulnerabilities are present before deployment. Reports are again automatically generated and sent to team members for review. If the reports show that all images meet pre-approved standards, the images are pushed into deployment with defined orchestration using Amazon EKS and Amazon EC2 instances, with AWS Systems Manager installed for monitoring and administration. Once deployed, AWS WAF is enabled to protect applications from malicious application attacks.

## Summary

For modern application pipelines, there is a plethora of tools available from cloud providers and third-party companies to help automate strong security controls through the entire development and deployment process. A strong governance structure is critical to ensure all stakeholders are involved and on board with the new tools and processes needed, and security operations teams will need to help define standards for code and images, as well as build strong protective and detective controls in the cloud environment.

## About the Author

Dave Shackleford, a SANS analyst, senior instructor, course author, GIAC technical director and member of the board of directors for the SANS Technology Institute, is the founder and principal consultant of Voodoo Security. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering. A VMware vExpert, Dave has extensive experience designing and configuring secure virtualized infrastructures. He previously worked as chief security officer for Configuresoft and CTO for the Center for Internet Security, and he currently helps lead the Atlanta chapter of the Cloud Security Alliance.
## Sponsor

SANS would like to thank this paper's sponsor: AWS Marketplace.
{"Source-Url": "https://software-security.sans.org/resources/paper/reading-room/secure-app-pipelines-aws", "len_cl100k_base": 6067, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 31266, "total-output-tokens": 6622, "length": "2e12", "weborganizer": {"__label__adult": 0.0003178119659423828, "__label__art_design": 0.0002137422561645508, "__label__crime_law": 0.0007171630859375, "__label__education_jobs": 0.0007429122924804688, "__label__entertainment": 4.9114227294921875e-05, "__label__fashion_beauty": 0.00012981891632080078, "__label__finance_business": 0.0004737377166748047, "__label__food_dining": 0.00024271011352539065, "__label__games": 0.00039768218994140625, "__label__hardware": 0.0008349418640136719, "__label__health": 0.00037789344787597656, "__label__history": 9.775161743164062e-05, "__label__home_hobbies": 7.56382942199707e-05, "__label__industrial": 0.0003285408020019531, "__label__literature": 0.00012624263763427734, "__label__politics": 0.0001838207244873047, "__label__religion": 0.00022423267364501953, "__label__science_tech": 0.01062774658203125, "__label__social_life": 8.183717727661133e-05, "__label__software": 0.00954437255859375, "__label__software_dev": 0.9736328125, "__label__sports_fitness": 0.00020182132720947263, "__label__transportation": 0.0003414154052734375, "__label__travel": 0.0001379251480102539}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32704, 0.00615]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32704, 0.07898]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32704, 0.93139]], "google_gemma-3-12b-it_contains_pii": [[0, 323, false], [323, 2004, null], [2004, 5077, null], [5077, 7311, null], [7311, 9990, null], [9990, 12845, null], [12845, 15144, null], [15144, 17921, null], [17921, 20403, null], [20403, 23622, null], [23622, 25716, null], [25716, 28222, null], [28222, 30582, null], [30582, 31345, null], [31345, 32704, null]], "google_gemma-3-12b-it_is_public_document": [[0, 323, true], [323, 2004, null], [2004, 5077, null], [5077, 7311, null], [7311, 9990, null], [9990, 12845, null], [12845, 15144, null], [15144, 17921, null], [17921, 20403, null], [20403, 23622, null], [23622, 25716, null], [25716, 28222, null], [28222, 30582, null], [30582, 31345, null], [31345, 32704, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32704, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32704, null]], "pdf_page_numbers": [[0, 323, 1], [323, 2004, 2], [2004, 5077, 3], [5077, 7311, 4], [7311, 9990, 5], [9990, 12845, 6], [12845, 15144, 7], [15144, 17921, 8], [17921, 20403, 9], [20403, 23622, 10], [23622, 25716, 11], 
[25716, 28222, 12], [28222, 30582, 13], [30582, 31345, 14], [31345, 32704, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32704, 0.11515]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
0fa7461562e51eb4407498642692367610c2cd53
# Median and Selection

## 1 Introduction

So far we have covered the master theorem, which can be used for recurrences of a certain form. Recall that if we have a recurrence $T(n) = aT\left(\frac{n}{b}\right) + O(n^d)$ where $a \geq 1, b > 1$, then

$$T(n) = \begin{cases} O(n^d \log n) & \text{if } a = b^d \\ O(n^d) & \text{if } a < b^d \\ O(n^{\log_b a}) & \text{if } a > b^d \end{cases}$$

Many algorithms that result from the divide-and-conquer paradigm yield recurrence relations for their runtimes that have the above form, namely algorithms that divide the problem into equal-sized sub-pieces at each recursion. Today, we will introduce a problem where the master theorem cannot be applied: the problem of finding the $k$-th smallest element in an unsorted array. First, we show it can be done in $O(n \log n)$ time via sorting and that any correct algorithm must run in $\Omega(n)$ time. However, it is not obvious that a linear-time selection algorithm exists. We present a linear-time selection algorithm, with an intuition for why it has the desired properties to achieve $O(n)$ running time. The two high-level goals of this lecture are 1) to cover a really cool and surprising algorithm, and 2) to illustrate that some divide-and-conquer algorithms yield recurrence relations that cannot be analyzed via the master theorem, yet one can (often) still successfully analyze them.

## 2 Selection

The selection problem is to find the $k$-th smallest number in an array $A$.

Input: array $A$ of $n$ numbers, and an integer $k \in \{1, \ldots, n\}$.
Output: the $k$-th smallest number in $A$.

One approach is to sort the numbers in ascending order, and then return the $k$-th number in the sorted list. This takes $O(n \log n)$ time, since it takes $O(n \log n)$ time for the sort (e.g., by MergeSort) and $O(1)$ time to return the $k$-th number.

### 2.1 Minimum Element

As always, we ask if we can do better (i.e., be faster in big-O terms). In the special case where $k = 1$, selection is the problem of finding the minimum element. We can do this in $O(n)$ time by scanning through the array and keeping track of the minimum element so far. If the current element is smaller than the minimum so far, we update the minimum.

Algorithm 1: SelectMin($A$)

\[
\begin{align*}
m &\leftarrow \infty \\
n &\leftarrow \text{length}(A) \\
\text{for } i = 1 \text{ to } n &\text{ do} \\
\quad &\text{if } A[i] < m \text{ then} \\
\quad &\quad m \leftarrow A[i] \\
\text{return } m
\end{align*}
\]

In fact, this is the best running time we could hope for.

**Definition 1.** A deterministic algorithm is one which, given a fixed input, always performs the same operations (as opposed to an algorithm which uses randomness).

**Proposition 2.** Any deterministic algorithm for finding the minimum has runtime $\Omega(n)$.

**Proof.** Intuitively, the claim holds because any algorithm for the minimum must look at all the elements, each of which could be the minimum. Suppose a correct deterministic algorithm does not look at $A[i]$ for some $i$. Then the output cannot depend on $A[i]$, so the algorithm returns the same value whether $A[i]$ is the minimum element or the maximum element. Therefore the algorithm is not always correct, which is a contradiction. So there is no sublinear deterministic algorithm for finding the minimum.

So for $k = 1$, we have an algorithm which achieves the best running time possible. By similar reasoning, this lower bound of $\Omega(n)$ applies to the general selection problem.
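As a quick aside, Algorithm 1 maps directly onto a few lines of Python; this minimal sketch is ours, not from the notes:

```python
import math

def select_min(A):
    """Linear scan for the minimum (Algorithm 1)."""
    m = math.inf
    for x in A:          # must examine every element: Omega(n) is unavoidable
        if x < m:
            m = x
    return m

assert select_min([3, 1, 4, 1, 5]) == 1
```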
So ideally we would like to have a linear-time selection algorithm in the general case.

## 3 Linear-Time Selection

In fact, a linear-time selection algorithm does exist. Before showing it, it is helpful to build some intuition on how to approach the problem. The high-level idea will be to try to do a binary search over an unsorted input. At each step, we hope to divide the input into two parts: the subset of smaller elements of $A$ and the subset of larger elements of $A$. We will then determine whether the $k$-th smallest element lies in the part with the smaller elements or the part with the larger elements, and recurse on exactly one of those two parts.

How do we decide how to partition the array into these two pieces? Suppose we have a black-box algorithm ChoosePivot that chooses some element in the array $A$, and we use this pivot to define the two sets: any $A[i]$ less than the pivot is in the set of "smaller" values, and any $A[i]$ greater than the pivot is in the other part. We will specify the subroutine ChoosePivot a bit later, after specifying the high-level algorithm structure. The choice made by ChoosePivot does not affect the correctness of the algorithm, as we will see in Section 3.6; it only affects the runtime. For clarity we'll assume all elements are distinct from now on, but the idea generalizes easily. Let $n$ be the size of the array and assume we are trying to find the $k$-th element.

Algorithm 2: Select($A, n, k$)

\[
\begin{align*}
&\textbf{if } n = 1 \textbf{ then return } A[1] \\
&p \leftarrow \text{ChoosePivot}(A, n) \\
&A_{<} \leftarrow \{ A[i] \mid A[i] < p \} \\
&A_{>} \leftarrow \{ A[i] \mid A[i] > p \} \\
&\textbf{if } |A_{<}| = k - 1 \textbf{ then return } p \\
&\textbf{else if } |A_{<}| > k - 1 \textbf{ then return } \text{Select}(A_{<}, |A_{<}|, k) \\
&\textbf{else return } \text{Select}(A_{>}, |A_{>}|, k - |A_{<}| - 1)
\end{align*}
\]

At each iteration, we use the element $p$ to partition the array into two parts: all elements smaller than the pivot and all elements larger than the pivot, which we denote $A_<$ and $A_>$, respectively. Depending on the sizes of the resulting sub-arrays, the runtime can differ. For example, if one of the sub-arrays has size $n - 1$, each iteration decreases the size of the problem by only 1, resulting in an $O(n^2)$ total running time. If the array is split into two equal parts, then the size of the problem halves at each iteration, resulting in a linear-time solution. (We assume ChoosePivot runs in $O(n)$ time.) A Python transcription of Algorithm 2 is given below.

**Proposition 3.** If the pivot $p$ is chosen to be the minimum or maximum element, then Select runs in $\Theta(n^2)$ time.

**Proof.** At each iteration, the number of elements decreases by 1. Since running ChoosePivot and creating $A_<$ and $A_>$ takes linear time, the recurrence for the runtime is $T(n) = T(n - 1) + \Theta(n)$. Expanding this,
\[
T(n) \leq c_1 n + c_1 (n - 1) + c_1 (n - 2) + \ldots + c_1 = c_1 n(n + 1)/2
\]
and
\[
T(n) \geq c_2 n + c_2(n-1) + c_2(n-2) + \ldots + c_2 = c_2 n(n+1)/2.
\]
We conclude that $T(n) = \Theta(n^2)$.
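Here is a direct Python transcription of Algorithm 2. It is a sketch for illustration: the pivot rule (take the first element) is a placeholder standing in for ChoosePivot, not a choice the notes recommend.

```python
def select(A, k):
    """Return the k-th smallest element of A (1-indexed); elements assumed distinct."""
    if len(A) == 1:
        return A[0]
    p = A[0]                           # placeholder ChoosePivot
    smaller = [x for x in A if x < p]  # A_<
    larger = [x for x in A if x > p]   # A_>
    if len(smaller) == k - 1:
        return p
    elif len(smaller) > k - 1:
        return select(smaller, k)
    else:
        return select(larger, k - len(smaller) - 1)

assert select([5, 2, 9, 1, 7], 3) == 5
```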
**Proposition 4.** If the pivot $p$ is chosen to be the median element, then Select runs in $O(n)$ time.

**Proof.** Intuitively, the running time is linear since we remove half of the elements from consideration at each iteration. Formally, each recursive call is made on an input of half the size, namely $T(n) \leq T(n/2) + cn$. Expanding this, the runtime is
\[
T(n) \leq cn + cn/2 + cn/4 + \ldots + c \leq 2cn,
\]
which is $O(n)$.

So how do we design a ChoosePivot that chooses a pivot in linear time? In the following, we describe three ideas.

### 3.1 Idea #1: Choose a random pivot

As we saw earlier, depending on the pivot chosen, the worst-case runtime can be $O(n^2)$ if we are unlucky in the choice of the pivot at every iteration. As you might expect, it is extremely unlikely to be this unlucky, and one can prove that the expected runtime is $O(n)$ provided the pivot is chosen uniformly at random from the elements of $A$. In practice, this randomized algorithm is what is implemented, and the hidden constant in the $O(n)$ runtime is very small.

### 3.2 Idea #2: Choose a pivot that creates the most "balanced" split

Consider a ChoosePivot that returns the pivot creating the most "balanced" split, which would be the median of the array. However, this is exactly the selection problem we are trying to solve, with $k = \lceil n/2 \rceil$. As long as we do not know how to find the median in linear time, we cannot use this procedure as ChoosePivot.

### 3.3 Idea #3: Find a pivot "close enough" to the median

Given a linear-time median algorithm, we can solve the selection problem in linear time (and vice versa). Although ideally we would want to find the median, notice that as far as correctness goes, there was nothing special about partitioning around the median. We could use this same idea of partitioning and recursing on a smaller problem even if we partition around an arbitrary element. To get a good runtime, however, we need to guarantee that the subproblems get smaller quickly. In 1973, Blum, Floyd, Pratt, Rivest, and Tarjan came up with the Median of Medians algorithm. It is similar to the previous algorithm, but rather than partitioning around the exact median, it partitions around a surrogate, the "median of medians". We update ChoosePivot accordingly.

Algorithm 3: ChoosePivot($A, n$)

\[
\begin{align*}
&\text{Split } A \text{ into } g = \lceil n/5 \rceil \text{ groups } p_1, \ldots, p_g \\
&\textbf{for } i = 1 \text{ to } g \textbf{ do } p_i \leftarrow \text{MergeSort}(p_i) \\
&C \leftarrow \{ \text{median of } p_i \mid i = 1, \ldots, g \} \\
&p \leftarrow \text{Select}(C, g, \lceil g/2 \rceil) \\
&\textbf{return } p
\end{align*}
\]

What is this algorithm doing? First it divides $A$ into groups of size 5. Within each group, it finds the median by first sorting the elements with MergeSort. Recall that MergeSort sorts in $O(n \log n)$ time; however, since each group has a constant number of elements, it takes constant time to sort each one. Then the algorithm makes a recursive call to Select to find the median of $C$, the median of medians. Intuitively, by partitioning around this value, we are able to find something that is close to the true median for partitioning, yet is "easier" to compute, because it is the median of $g = \lceil n/5 \rceil$ elements rather than $n$. The last part is as before: once we have our pivot element $p$, we split the array and recurse on the proper subproblem, or halt if we found our answer. We have devised a slightly complicated method to determine which element to partition around, but the algorithm remains correct for the same reasons as before; a Python sketch of this pivot rule follows.
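Here is a sketch of Algorithm 3 in Python, plugged into the select routine from the previous sketch; the helper names are ours. Python's built-in `sorted` stands in for MergeSort, which is fine because each group has at most 5 elements.

```python
def choose_pivot(A):
    """Median-of-medians pivot (Algorithm 3)."""
    groups = [A[i:i + 5] for i in range(0, len(A), 5)]   # g = ceil(n/5) groups
    medians = [sorted(g)[len(g) // 2] for g in groups]   # median of each group
    if len(medians) == 1:
        return medians[0]
    return select_mom(medians, (len(medians) + 1) // 2)  # median of the medians

def select_mom(A, k):
    """Select with the median-of-medians pivot; O(n) worst case."""
    if len(A) <= 5:
        return sorted(A)[k - 1]
    p = choose_pivot(A)
    smaller = [x for x in A if x < p]
    larger = [x for x in A if x > p]
    if len(smaller) == k - 1:
        return p
    elif len(smaller) > k - 1:
        return select_mom(smaller, k)
    else:
        return select_mom(larger, k - len(smaller) - 1)

assert select_mom(list(range(100, 0, -1)), 17) == 17
```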
So what is its running time? As before, we are going to show this by examining the size of the recursive subproblems. As it turns out, by taking the median-of-medians approach, we get a guarantee on how much smaller the problem becomes at each iteration, and the guarantee is good enough to achieve $O(n)$ runtime.

### 3.3.1 Running Time

**Lemma 5.** $|A_<| \leq 7n/10 + 5$ and $|A_>| \leq 7n/10 + 5$.

**Proof.** $p$ is the median of the group medians. Because $p$ is the median of $g = \lceil n/5 \rceil$ group medians, the medians of $\lceil g/2 \rceil - 1$ groups $p_i$ are smaller than $p$. If $p$ is larger than a group's median, it is larger than at least three elements in that group (the median and the two smaller numbers). This applies to all such groups except the remainder group, which might have fewer than 5 elements. Accounting for the remainder group, $p$ is greater than at least $3 \cdot (\lceil g/2 \rceil - 2)$ elements of $A$. By symmetry, $p$ is less than at least the same number of elements. Now,
\[
|A_>| = \#\text{ of elements greater than } p \leq (n - 1) - 3 \cdot (\lceil g/2 \rceil - 2) = n + 5 - 3 \lceil g/2 \rceil \leq n - 3n/10 + 5 = 7n/10 + 5. \quad (1)
\]
By symmetry, $|A_<| \leq 7n/10 + 5$ as well.

Intuitively, about 60 percent of the elements in half of the groups are less than the pivot, which is roughly 30 percent of the total number of elements $n$. Therefore, at most 70 percent of the elements are greater than the pivot, so $|A_>| \lesssim 7n/10$, and the same argument applies to $|A_<|$.

The recursive call used to find the median of medians has input of size $\lceil n/5 \rceil \leq n/5 + 1$. The other work in the algorithm takes linear time: constant time on each of the $\lceil n/5 \rceil$ groups for MergeSort (linear time total for that part) and $O(n)$ time scanning $A$ to build $A_<$ and $A_>$. Thus, we can write the full recurrence for the runtime:
$$T(n) \leq \begin{cases} c_1 n + T(n/5 + 1) + T(7n/10 + 5) & \text{if } n > 5 \\ c_2 & \text{if } n \leq 5. \end{cases}$$
How do we prove that $T(n) = O(n)$? The master theorem does not apply here. Instead, we will prove this using the substitution method.

### 3.4 Solving the Recurrence of Select Using the Substitution Method

For simplicity, we consider the recurrence $T(n) \leq T(n/5) + T(7n/10) + cn$ instead of the exact recurrence of Select. To prove that $T(n) = O(n)$, we guess that
$$T(n) \leq d \cdot n \quad \text{for all } n \geq n_0.$$
For the base case, we pick $n_0 = 1$ and use the standard assumption that $T(1) = 1 \leq d$. For the inductive hypothesis, we assume that our guess is correct for any $n < k$, and we prove our guess for $k$. That is, consider $d$ such that for all $n_0 \leq n < k$, $T(n) \leq dn$. To prove the claim for $n = k$, we need
$$T(k) \leq T(k/5) + T(7k/10) + ck \leq dk/5 + 7dk/10 + ck \leq dk,$$
which holds if and only if
$$\tfrac{9}{10}d + c \leq d \iff c \leq d/10 \iff d \geq 10c.$$
Therefore, we can choose $d = \max(1, 10c)$, which is a constant. The induction is complete, and by the definition of big-O, the recurrence runs in $O(n)$ time.
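As a sanity check on this analysis (not part of the original notes), one can evaluate the exact recurrence numerically and watch $T(n)/n$ stay bounded. A minimal sketch with $c_1 = 1$, using a base case large enough that both recursive arguments strictly shrink:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate T(n) = n + T(n//5 + 1) + T(7n//10 + 5), with T(n) = n for n <= 16."""
    if n <= 16:           # for n <= 16, 7n//10 + 5 can equal n, so cut off here
        return n
    return n + T(n // 5 + 1) + T(7 * n // 10 + 5)

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, T(n) / n)    # the ratio stays bounded, consistent with T(n) = O(n)
```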
### 3.5 Issues When Using the Substitution Method

Now we will try out an example where our guess is incorrect. Consider the recurrence $T(n) = 2T\left(\frac{n}{2}\right) + n$ (similar to MergeSort). We will guess that the algorithm is linear, that is, $T(n) \leq dn$ for all $n \geq n_0$. In the inductive step, we try to pick some $d$ such that for all $n \geq n_0$,
\[
n + 2 \cdot d \cdot \frac{n}{2} \leq dn \iff n + dn \leq dn \iff n \leq 0.
\]
However, this can never be true for $n \geq 1$, and there is no choice of $d$ that works! Thus our guess was incorrect. Here the guess fails because MergeSort genuinely takes superlinear time.

Sometimes, however, the guess can be asymptotically correct but the induction might not work out. Consider for instance $T(n) \leq 2T(n/2) + 1$. We know that the runtime is $O(n)$, so let's try to prove it with the substitution method. Let's guess that $T(n) \leq cn$ for all $n \geq n_0$. First we do the inductive step: we assume that $T(n/2) \leq cn/2$ and consider $T(n)$. We want $2 \cdot cn/2 + 1 \leq cn$, that is, $cn + 1 \leq cn$. However, this is impossible. This does not mean that $T(n)$ is not $O(n)$; in this case we simply chose the wrong linear function. We could guess instead that $T(n) \leq cn - 1$. Now for the induction we get $2 \cdot (cn/2 - 1) + 1 = cn - 1$, which holds for all $c$. We can then choose the base case $T(1) = 1$.

### 3.6 Correctness of the Algorithm

Recall that the choice of pivot only affects the runtime, and not the correctness, of the algorithm. Here, we prove formally, by induction, that Select is correct. We will use strong induction: our inductive step will assume that the inductive hypothesis holds for all $n$ between 1 and $i - 1$, and then we'll show that it holds for $n = i$.

**Remark 6.** You can also do this using regular induction with a slightly more complicated inductive hypothesis; either way is fine.

**Inductive Hypothesis (for $n$).** When run on an array $A$ of size $n$ and an integer $k \in \{1, \ldots, n\}$, Select returns the $k$-th smallest element of $A$.

**Base Case ($n = 1$).** When $n = 1$, the requirement $k \in \{1, \ldots, n\}$ means that $k = 1$; that is, Select($A, k$) is supposed to return the smallest element of $A$. This is precisely what the pseudocode above does when $|A| = 1$, so this establishes the Inductive Hypothesis for $n = 1$.

**Inductive Step.** Let $i \geq 2$, and suppose that the inductive hypothesis holds for all $n$ with $1 \leq n < i$. Our goal is to show that it holds for $n = i$. That is, we would like to show that when run on an array $A$ of size $i$ and an integer $k \in \{1, \ldots, i\}$, Select($A, k$) returns the $k$-th smallest element of $A$. Informally, we want to show that if Select "works" on smaller arrays, then it "works" on an array of length $i$. We do this below.

Suppose that $1 \leq k \leq i$, and that $A$ is an array of length $i$. There are three cases to consider, depending on $p = \text{ChoosePivot}(A, i)$. Notice that in the pseudocode above, $p$ is a value from $A$, not an index. Let $A_{<}, A_{>}, p$ be as in the pseudocode above.

- **Case 1.** Suppose that $|A_{<}| = k - 1$. Then by the definition of $A_{<}$, there are $k - 1$ elements of $A$ that are smaller than $p$, so $p$ must be the $k$-th smallest. In this case, we return $p$, which is indeed the $k$-th smallest.
- **Case 2.** Suppose that $|A_{<}| > k - 1$. Then there are more than $k - 1$ elements of $A$ that are smaller than $p$, and so in particular the $k$-th smallest element of $A$ is the same as the $k$-th smallest element of $A_{<}$. Next we will use the inductive hypothesis for $n = |A_{<}|$, which holds since $|A_{<}| < i$.
Since $1 \leq k \leq |A_{<}|$, the inductive hypothesis implies that Select($A_{<}, k$) returns the $k$-th smallest element of $A_{<}$. Thus, by returning this we are also returning the $k$-th smallest element of $A$, as desired.
- **Case 3.** Suppose that $|A_{<}| < k - 1$. Then there are fewer than $k - 1$ elements that are less than $p$, which means that the $k$-th smallest element of $A$ must be greater than $p$; that is, it shows up in $A_{>}$. Now, the $k$-th smallest element in $A$ is the same as the $(k - |A_{<}| - 1)$-st smallest element in $A_{>}$. To see this, notice that there are $|A_{<}| + 1$ elements smaller than the $k$-th that do not show up in $A_{>}$ (the elements of $A_{<}$ and $p$ itself). Thus there are $k - (|A_{<}| + 1) = k - |A_{<}| - 1$ elements in $A_{>}$ that are smaller than or equal to the $k$-th element. Now we want to apply the inductive hypothesis for $n = |A_{>}|$, which we can do since $|A_{>}| < i$. Notice that we have $1 \leq k - |A_{<}| - 1 \leq |A_{>}|$; the first inequality holds because $k > |A_{<}| + 1$ by the definition of Case 3, and the second inequality holds because it is equivalent to $k \leq |A_{<}| + |A_{>}| + 1 = i$, which is true by assumption. Thus, the inductive hypothesis implies that Select($A_{>}, k - |A_{<}| - 1$) returns the $(k - |A_{<}| - 1)$-st smallest element of $A_{>}$. Thus, by returning this we are also returning the $k$-th smallest element of $A$, as desired.

Thus, in each of the three cases, Select($A, k$) returns the $k$-th smallest element of $A$. This establishes the inductive hypothesis for $n = i$.

**Conclusion.** By induction, the inductive hypothesis holds for all $n \geq 1$. Thus, we conclude that Select($A, k$) returns the $k$-th smallest element of $A$ on any array $A$, provided that $k \in \{1, \ldots, |A|\}$. That is, Select is correct, which is what we wanted to show.
{"Source-Url": "https://stanford-cs161.github.io/winter2022/assets/files/lecture4-notes.pdf", "len_cl100k_base": 5805, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 30212, "total-output-tokens": 6411, "length": "2e12", "weborganizer": {"__label__adult": 0.00038552284240722656, "__label__art_design": 0.00030422210693359375, "__label__crime_law": 0.0005097389221191406, "__label__education_jobs": 0.0006628036499023438, "__label__entertainment": 8.690357208251953e-05, "__label__fashion_beauty": 0.00018990039825439453, "__label__finance_business": 0.00019478797912597656, "__label__food_dining": 0.0005846023559570312, "__label__games": 0.0009355545043945312, "__label__hardware": 0.0020294189453125, "__label__health": 0.0009756088256835938, "__label__history": 0.0003001689910888672, "__label__home_hobbies": 0.00017964839935302734, "__label__industrial": 0.0006690025329589844, "__label__literature": 0.0002627372741699219, "__label__politics": 0.000324249267578125, "__label__religion": 0.0007262229919433594, "__label__science_tech": 0.06402587890625, "__label__social_life": 8.988380432128906e-05, "__label__software": 0.006488800048828125, "__label__software_dev": 0.9189453125, "__label__sports_fitness": 0.0004456043243408203, "__label__transportation": 0.0006861686706542969, "__label__travel": 0.0002332925796508789}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19368, 0.02446]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19368, 0.60928]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19368, 0.86749]], "google_gemma-3-12b-it_contains_pii": [[0, 1855, false], [1855, 4255, null], [4255, 6781, null], [6781, 9200, null], [9200, 11745, null], [11745, 13656, null], [13656, 15761, null], [15761, 18937, null], [18937, 19368, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1855, true], [1855, 4255, null], [4255, 6781, null], [6781, 9200, null], [9200, 11745, null], [11745, 13656, null], [13656, 15761, null], [15761, 18937, null], [18937, 19368, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19368, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19368, null]], "pdf_page_numbers": [[0, 1855, 1], [1855, 4255, 2], [4255, 6781, 3], [6781, 9200, 4], [9200, 11745, 5], [11745, 13656, 6], [13656, 15761, 7], [15761, 18937, 8], [18937, 19368, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19368, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
ed5e7b32cde892a51c527b9f841aa5517c11de36
**Staff Scheduling for Inbound Call Centers and Customer Contact Centers**

Alex Fukunaga, Ed Hamilton, Jason Fama, David Andre, Ofer Matan, and Illah Nourbakhsh

The staff scheduling problem is a critical problem in the call center (or, more generally, customer contact center) industry. This article describes DIRECTOR, a staff scheduling system for contact centers. DIRECTOR is a constraint-based system that uses AI search techniques to generate schedules that satisfy and optimize a wide range of constraints and service-quality metrics. DIRECTOR has successfully been deployed at more than 800 contact centers, with significant measurable benefits, some of which are documented in case studies included in this article.

Staff scheduling is the following classic operations research problem: Given a set of employees, assign them to a schedule such that they are working when they are most needed, while ensuring that certain constraints are maintained (for example, employees must work no more than 40 hours a week and must have at least 12 hours between work shifts). Even the simplest variations of this problem are known to be NP-complete (Garey and Johnson 1978).

Although staff scheduling has long been an important operations research problem, scheduling has recently become an important component of an emerging class of business software applications known as work-force management software. The need for effective work-force management systems has been driven primarily by the recent, rapid growth of the call center and customer contact center industry, in which efficient deployment of human resources is of crucial, strategic importance. Traditionally, in this industry, staff scheduling has been performed using ad hoc methods and operations research techniques (Cleveland and Mayben 1997). However, we found that this domain is particularly amenable to the application of constraint-based and heuristic scheduling techniques from AI. This article describes Blue Pumpkin DIRECTOR, a recently developed staff scheduling system, which is currently being used by hundreds of contact centers. First, we describe the staff scheduling problem for call centers and contact centers. Then, we describe the design and implementation of DIRECTOR. Finally, examples of successful deployments of the application are given.

**Staff Scheduling in Contact Centers**

When a consumer calls a software vendor to ask for technical support, or calls a credit card company with a billing inquiry, the call is often routed to an inbound call center (or, more generally, contact center), a large, centralized pool of trained agents (contact center employees) who are qualified to address the customer's inquiry. If all agents who can handle the call are busy, then the customer's call waits in a queue until an agent becomes available. Naturally, long wait times result in frustrated, dissatisfied customers, and it is therefore important for call centers to be staffed so that the wait times experienced by customers are acceptable. At the same time, businesses want to avoid overstaffing (having idle agents when few customer calls arrive) to minimize the cost of operating the call center and maximize overall business profitability. It is well known that acquiring a new customer is several times more expensive (in terms of marketing and sales expenses) than deriving revenues from an existing customer. Therefore, maintaining customer satisfaction by achieving good service levels has a significant impact on corporate revenues.
In addition, personnel costs account for 60 to 70 percent of the operational cost of a contact center. Efficient contact center staff scheduling is therefore important to a business both from the perspective of revenue ("the top line") and from operating margins and profitability ("the bottom line").

Internal corporate call centers are the centralized customer-service organizations that serve as the foci of customer contact for businesses. There is also a large industry of outsourced call centers. Businesses regularly outsource some of their customer-service functions to outsourcers, who are committed by the terms of a service-level agreement in the contract to achieve specified service goals (for example, outsourcer X agrees to handle manufacturer Y's sales inquiries and promises that 80 percent of the calls will be answered within 20 seconds). Therefore, efficient staff scheduling is particularly critical for these outsourcers, so that they can deliver the contractually agreed-on service levels while operating profitably.

Although most interactive contact between customers and businesses still takes place through the telephone, customer contact through other media such as e-mail, online chat, and instant messaging is rapidly increasing. A contact center is a generalization of a call center, where agents handle these other media in addition to traditional media such as phone calls and faxes. Contact centers offer some new challenges for staff scheduling systems, as described later.

Because the call center industry is not well known in the AI and computer science communities, it is worth noting some relevant market statistics. In the beginning of 2001, there were over 82,000 contact centers (employing over 1.5 million agents) in the United States alone, a number expected to almost double by 2004. Approximately seven percent of U.S. call centers were using a work-force management system. Note that the market penetration of work-force management software is still very low, in part because modern work-force management systems with the full capabilities and ease of use required by the call center market are relatively new. However, because of the clear economic benefits, the market for work-force management software is growing rapidly (annual revenues for the call center work-force management software market were $175 million in 2001, expected to grow to more than $500 million by 2006).

The contact center scheduling problem is challenging. Meeting the demand profile implied by the forecasts of incoming calls and contacts is by itself a difficult combinatorial optimization problem, especially considering that the forecasts are probabilistic. At a minimum, a 1-week schedule with a 15-minute granularity must be generated. Typically, contact centers have hundreds of agents that need to be scheduled; some have thousands. In addition to service goals, numerous hard and soft constraints reflecting the contact center's operational constraints, local labor rules, and employee preferences must be satisfied. The agents' schedules must be specified at a minimum of 15-minute granularity; in addition to the start time and duration of a work shift, all "off-telephone" activities such as breaks also need to be scheduled. Furthermore, the recent advent of multiskilled scheduling and multicontact scheduling (see later) has significantly complicated the problem of optimizing service goals.
Traditional methods (manual scheduling and mathematical programming approaches) have been unable to keep up with the rapidly evolving, increasingly difficult scheduling requirements of the modern contact center. The typical contact center scheduling process can be described as follows: Schedules are usually generated on a weekly basis, with a granularity of 15 minutes. A forecast of incoming contact volume (number of calls in a 15-minute period) and expected handling time of the contacts is used to generate a demand profile. A standard goal for call center operations is to achieve a certain service level, that is, answer X percent of calls within Y seconds, while minimizing overstaffing. Guided by the forecast and service goal, the scheduler (traditionally, a human contact center manager) generates a schedule that satisfies various hard constraints (for example, labor laws and company policies), optimizes service goals, and satisfies soft constraints as much as possible.

**The DIRECTOR System**

We now describe the DIRECTOR application. After a brief discussion of the overall system architecture, we describe the major components most relevant to the algorithmic and AI aspects of the system.

**System Architecture**

From a scheduling-centric point of view, DIRECTOR consists of the scheduling engine, which loads an input scenario and generates a schedule that satisfies hard constraints and optimizes schedule quality metrics; an infrastructure for persisting scheduling scenario input and output in a relational database; and a graphic user interface (GUI) (figures 1 and 2). In addition, there is a major software component required for integration with automatic call distributors (ACDs), which are the hardware and software routers that route incoming calls and contacts to the appropriate agent in the contact center. Work-force management software systems for contact centers include many more functions, such as real-time monitoring of agent adherence to the published schedule and an extensive reporting facility; however, these other features of DIRECTOR are beyond the scope of this article, which focuses on the scheduling functions.

The current version of DIRECTOR (3.1) is implemented as a set of Microsoft COM components, mostly implemented in C++. Figure 3 shows the DIRECTOR ENTERPRISE architecture. It is a traditional client-server system, which consists of a back-end database (Microsoft SQL Server or ORACLE relational database) running on a server, and a client, which consists of business logic components (including the scheduling engine) and GUI components. The next version of DIRECTOR ENTERPRISE (to be released in 2002) is based on a more modern multitiered web-oriented architecture (a relational database, a J2EE application server running business logic and other middle-tier services, and a "thin" web-based GUI client). In addition, there is another version of Blue Pumpkin DIRECTOR, called DIRECTOR ESSENTIAL, which is designed for use by small- and medium-sized contact centers (typically with fewer than 100 agents). Its scheduling engine is implemented in C++, the scheduling scenarios are stored in a Microsoft ACCESS relational database, and the GUI is implemented in Visual Basic. The emphasis of ESSENTIAL is on ease of use and installation. DIRECTOR ESSENTIAL is actually the predecessor of DIRECTOR ENTERPRISE, and development on ESSENTIAL has continued, focusing on its target user base of smaller contact centers. Many algorithmic ideas used in ENTERPRISE originated in ESSENTIAL.
In the rest of this article, we focus on the ENTERPRISE version because it provides a superset of the features of ESSENTIAL. Using DIRECTOR to schedule contact center agents generally involves the following workflow: First, a model of the contact center is built in the client and is stored in the relational database. The main model elements are the characteristics of the contact center, the agents (resources), and the operational constraints. Then, rules and constraints that apply to the agents (for example, how many hours a week an agent can work, which days he/she is available, what times he/she prefers to work) are entered and linked. Typically, this part of the scheduling scenario is relatively static from week to week. For each week, the user (contact center manager) generates a forecast of the incoming calls and contacts (the demand profile).

Although it might be possible to improve the accuracy of the forecasts by applying more sophisticated learning techniques, users report satisfaction with the current approach.

**Service Goals, Computation of Agent Requirements, and Modeling Overstaffing and Understaffing**

Given the forecast for a contact queue, the next step in scheduling is to specify a service goal for the queue. The following are some example service goals: answer 90 percent of the incoming calls within 20 seconds; send a reply to 99 percent of the e-mail inquiries within 24 hours; answer calls within 30 seconds on average; limit abandoned calls to 5 percent of the incoming calls (a call is abandoned when a customer hangs up before an agent becomes available to talk to the customer); no agent should be idle more than 25 percent of the time. Combinations of these goals are possible; for example, answer 80 percent of all incoming calls within 30 seconds, no more than 5 percent of the calls can be abandoned, and no agent should be idle more than 20 percent of the time.

In a scenario where there is a single queue of calls, and any agent in the contact center can answer any call, it is possible to compute an agent requirement for a time period, that is, the number of agents who must be working during the time period to satisfy the service goal, given the forecast. This is a classical M/M/s queuing model, and agent requirements are computed by applying the well-known Erlang-C formula from operations research and queueing theory (cf. Kleinrock [1976]) and some straightforward extensions. Given a candidate schedule, we say a time interval is understaffed if the number of agents scheduled to be working during the interval is less than the agent requirement, and overstaffed if there are more agents scheduled than required. By computing the overstaffing and understaffing for each time interval in the scheduling period, we have the basis for an objective function for evaluating a candidate schedule with respect to service goals. Figure 2 shows DIRECTOR screenshots showing the schedule (and service goals and results) for a very small example scenario. A sketch of the Erlang-C computation appears below.
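The Erlang-C computation referred to above can be sketched in a few lines. The code below is a standard textbook implementation of the M/M/s wait-probability formula, not DIRECTOR's proprietary code; the function names and example numbers are ours.

```python
import math

def erlang_c(agents, load):
    """Probability an arriving call must wait (Erlang-C); load is in Erlangs.

    Requires agents > load for a stable queue.
    """
    erlang_b = 1.0
    for s in range(1, agents + 1):  # recurrence for the Erlang-B blocking probability
        erlang_b = load * erlang_b / (s + load * erlang_b)
    return agents * erlang_b / (agents - load * (1 - erlang_b))

def service_level(agents, calls_per_sec, avg_handle_sec, threshold_sec):
    """Fraction of calls answered within threshold_sec under the M/M/s model."""
    load = calls_per_sec * avg_handle_sec
    p_wait = erlang_c(agents, load)
    return 1.0 - p_wait * math.exp(-(agents - load) * threshold_sec / avg_handle_sec)

def agents_required(calls_per_sec, avg_handle_sec, threshold_sec, target=0.8):
    """Smallest agent count meeting the service goal (e.g., 80% within 20 seconds)."""
    agents = int(calls_per_sec * avg_handle_sec) + 1  # smallest stable staffing
    while service_level(agents, calls_per_sec, avg_handle_sec, threshold_sec) < target:
        agents += 1
    return agents

# Example: 900 calls in a 15-minute interval, 3-minute average handle time.
print(agents_required(900 / 900.0, 180.0, 20.0, target=0.8))
```

Running this per 15-minute interval over the forecast yields exactly the per-interval agent requirements the article describes.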
Now, consider the following case. There are two queues: (1) the widget sales inquiry queue and (2) the widget tech support queue. There are three agents: Bob (qualified to answer sales inquiries), John (qualified to answer technical support inquiries), and Mary (qualified to answer either sales or support inquiries). This multiskilled scenario differs from the previously described single-queue case because it is no longer possible to straightforwardly compute how overstaffed or understaffed the schedule is for a particular time interval, because of the interaction between the queues. For example, suppose all agents are initially available, and three calls arrive in rapid succession. The first call arrives on the sales queue and is answered by Bob. The second call arrives on the tech support queue and is immediately followed by a third call, which is a sales call. If John answers the second call, the third call will be answered by Mary. However, suppose that Mary answers the second call. Then, the third call will be put on hold (even though John is available, he is not able to respond to sales calls). These interactions between the agents, their skills, the order of calls arriving on the queues, and the way in which the calls are routed make it very difficult to determine whether the schedule is understaffed or overstaffed. In fact, there is currently no known closed-form formula (such as the Erlang-C formula) for computing the service level for the multiskilled scheduling problem (Koole and Mandelbaum 2001). It is possible to compute the service level by simulating the schedule and the call-routing algorithm. However, simulations are expensive in the context of generating and optimizing a schedule within a generate-and-test framework such as iterative repair.

Another important case where the traditional operations research approaches do not apply is when the queues being modeled are significantly different from telephone queues, such as e-mail contact queues (and similar types of media such as faxes). E-mail contacts differ from telephone calls in several important ways. First, the service goal usually involves much longer time periods than telephone calls (an e-mail reply is usually expected within a day or so, but people expect telephone calls to be answered within seconds or minutes). Second, e-mail inquiries are usually partitioned into many sparse, virtual queues. Third, although a telephone call is abandoned and leaves a queue when the customer becomes frustrated after waiting too long on hold, e-mail contacts are never abandoned. Because of these factors, the standard Erlang formulas are not applicable when scheduling agents to staff e-mail queues.

An increasing number of contact centers now handle a mixture of telephone and e-mail contacts simultaneously. For example, a contact center agent might typically answer telephone calls from the set of queues for which he/she is skilled and, when no calls are pending, reply to e-mail inquiries. Therefore, a modern contact center agent can no longer be modeled as a generic staffing unit that can simply be aggregated into the input of an Erlang-C formula. A scheduling system for the modern contact center must simultaneously solve both the multiskilled scheduling and the nontelephone-media scheduling problems described earlier, in addition to the traditional single-telephone-queue scheduling problem. This complexity makes it difficult to apply traditional operations research approaches (mathematical programming), and all known existing solutions (proprietary algorithms in commercial systems, including DIRECTOR) rely on some form of simulation model. Therefore, constraint-based and iterative scheduling approaches from AI are appealing techniques for the contact center scheduling problem.
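There is no closed-form formula here, but the kind of simulation involved is easy to sketch. The toy model below is ours, not DIRECTOR's simulator: it greedily assigns each arriving call to the earliest-available qualified agent and measures the fraction answered within a threshold. Real routing policies, queue discipline, and abandonment behavior are considerably more complex.

```python
import random

def simulate_multiskill(agents_skills, arrivals, handle_time, threshold):
    """Tiny simulation of skill-based routing.

    agents_skills: list of skill sets, e.g. [{"sales"}, {"support"}, {"sales", "support"}]
    arrivals: list of (time, queue) tuples, sorted by arrival time
    Returns the fraction of calls answered within `threshold` seconds.
    """
    free_at = [0.0] * len(agents_skills)   # time at which each agent becomes free
    answered_in_time = 0
    for t, queue in arrivals:
        candidates = [i for i, s in enumerate(agents_skills) if queue in s]
        i = min(candidates, key=lambda j: free_at[j])  # earliest-free qualified agent
        start = max(t, free_at[i])
        free_at[i] = start + handle_time
        if start - t <= threshold:
            answered_in_time += 1
    return answered_in_time / len(arrivals)

random.seed(0)
arrivals = sorted((random.uniform(0, 900), random.choice(["sales", "support"]))
                  for _ in range(60))
print(simulate_multiskill([{"sales"}, {"support"}, {"sales", "support"}],
                          arrivals, handle_time=120.0, threshold=20.0))
```

Even this toy version shows why simulation is costly inside a generate-and-test loop: every candidate schedule change requires replaying the arrival stream.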
**Constraints**

Employees have various constraints that determine how and when they can be scheduled. Some constraints are a result of the policies of the contact center. Some are mandated either by law or by labor union agreements. Others reflect the personal preferences of the staff. The primitive building block of a schedule is a shift, a class of object representing a contiguous span of time for which an agent is scheduled to answer telephone calls. A shift can contain a number of off-telephone activities during which the agent is not available to pick up calls (for example, 1-hour meal breaks, 15-minute breaks). The basic constraints in DIRECTOR specify parameters such as the duration and possible start times of shifts and the duration and possible start times of off-telephone activities. For example, we can specify that an "8-hour standard shift" is 8 hours long and starts between 9 AM and 1 PM. Furthermore, we can specify that this class of shift contains a lunch break, a 1-hour off-telephone activity that begins between 3 and 4 hours after the start of the shift, as well as a 15-minute break that can be scheduled at any time during the shift.

DIRECTOR builds on these building blocks with shift pattern constraints that restrict which shifts can be worked on which day. For example, we can specify that Joe can either work an 8-hour standard shift or a 4-hour special shift on Monday, must work a 4-hour special shift on Tuesday, and must not work any shifts on Sundays. The user can also specify constraints on the number of occurrences of various objects; for example, Bob must work between 3 and 4 weekend shifts a month, Alice must work no more than 80 hours every 2 weeks, and John cannot work more than 5 consecutive days in a row. Most constraints involve only a single agent. However, there are constraints that can involve more than one employee. For example, we can specify that John, Mary, and Robert must all have the same number of weekend shifts between 1/1/02 and 6/1/02.

Agents can express their preferences about their own schedules, and these preferences are treated as soft constraints by DIRECTOR. One type of preference is a rank ordering on the start times of shifts; for example, John prefers to start between 8 and 9 AM on Mondays, but if that is not possible, he prefers to start between 9 and 10 AM and would really prefer not to start shifts in the evenings. Agents can also express preferences about the set of shifts they work, for example, "I would much rather work the day shifts Monday through Friday than the night shifts."

Although most planning and scheduling systems with a highly expressive constraint system use a programming-language-like textual modeling language to specify constraints, such a modeling language would make the system excessively complex for the intended users of our system, who are not engineers. The most commonly used rules are specified using various GUI elements, and the less frequently used constraints are entered using a pseudo-natural-language "sentence builder" interface, similar to those used by some commercial rule-based systems such as the Versata LOGIC SUITE and ILOG RULES.
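DIRECTOR's internal representation is not published, but as a minimal illustration of the kind of shift-template data these constraints imply, consider the "8-hour standard shift" described above rendered as a small Python structure (field names are ours):

```python
from dataclasses import dataclass

@dataclass
class ShiftTemplate:
    name: str
    length_hours: float
    earliest_start: float   # hour of day, e.g. 9.0 for 9 AM
    latest_start: float
    breaks: list            # (earliest_offset, latest_offset, duration), in hours

# The "8-hour standard shift" from the example above, as data.
standard = ShiftTemplate(
    name="8-hour standard",
    length_hours=8.0,
    earliest_start=9.0,
    latest_start=13.0,
    breaks=[(3.0, 4.0, 1.0),    # 1-hour meal, 3-4 hours into the shift
            (0.0, 8.0, 0.25)],  # 15-minute break, any time during the shift
)

def start_time_ok(template, start_hour):
    """Hard-constraint check: does a proposed start time satisfy the template?"""
    return template.earliest_start <= start_hour <= template.latest_start

assert start_time_ok(standard, 10.5)
assert not start_time_ok(standard, 14.0)
```

A full constraint system layers many such checks (shift patterns, occurrence counts, multi-agent fairness rules) over the same template data.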
This sentence-builder interface enables most of the end users of DIRECTOR to specify complete scheduling scenarios with little, if any, assistance. The constraint system in DIRECTOR is very expressive and can express almost all constraints currently required by the contact center market (because of lack of space, we have limited this discussion to a basic subset that illustrates the capabilities of the system).

**The Scheduling Algorithm**

Once the scenario is defined, the process of schedule generation and optimization can begin. The major design goal of the DIRECTOR scheduling algorithm is to allow users to quickly generate satisfactory schedules with the absolute minimum amount of hassle. Therefore, the scheduling algorithm needs to be an extremely robust "black box" with acceptable performance. The only user-adjustable parameter that influences the scheduling algorithm's behavior is a switch that determines whether the algorithm terminates after satisfying an internal termination criterion or continues to search for better solutions until explicitly interrupted by the user (normal scheduling mode versus schedule-until-interrupted mode).

Internally, the scheduling problem is formulated as a hybrid constraint-satisfaction–global-optimization problem. There is a global objective function, which is a prioritized vector of scoring terms. For each class of constraint, there is a corresponding score term that represents the degree to which that class of constraint is being violated. The score terms corresponding to hard constraints have higher priority than the terms corresponding to soft constraints and service goals. For each agent and each day, there is a slot variable, which represents the shift (if any) that the agent is scheduled to work on that day. Instantiating a shift in a slot results in the instantiation of variables representing off-telephone activities (thus, there is a one-level abstraction hierarchy consisting of slot and off-telephone activity variables). A schedule is therefore a complete assignment of values to variables. The scheduling algorithm tries to generate a schedule with a maximal score.

The DIRECTOR scheduling algorithm is a hybrid, combining elements from standard iterative repair and heuristic global optimization algorithms. The foundation of the DIRECTOR scheduler is a library of search algorithms, including depth-first backtracking, beam search, and iterative sampling. A search algorithm takes a set of variables and returns a new set of value bindings for those variables that maximizes the value of the global objective function. The objective function is incrementally updated after each variable binding, which enables a flexible framework in which arbitrary search-pruning and backtracking control policies can be implemented in the search algorithms. We currently make heavy use of a heuristic algorithm inspired by simulated annealing. In this framework, the simplest scheduling algorithm would be

```
Instantiate a search algorithm that takes as input all the slots for
all the agents, then run the search algorithm until some termination
criterion is met.
```

Although this strategy (using the annealing algorithm as the search algorithm) actually works for small, relatively unconstrained scenarios, brute-force search is insufficient to solve large problems with difficult constraints. Therefore, the DIRECTOR algorithm is an iterative procedure that repeatedly selects some set of variables and optimizes their value bindings by applying some search algorithm to the limited search space.
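The following Python sketch illustrates the generic shape of such an iterative procedure: repeatedly propose a reassignment of a small subset of slot variables and accept or reject it with an annealing-style rule. It is an illustrative skeleton under our own simplifying assumptions (the function names, the acceptance rule, and the flat `score` callable are ours), not the proprietary DIRECTOR algorithm; in particular, a production implementation would update the objective incrementally rather than rescoring the whole schedule, as discussed next.

```
import math
import random

def iterative_repair(slots, propose, score, steps=10000,
                     temp=1.0, cooling=0.999):
    """Skeleton of an iterative-repair loop with annealing-style acceptance.

    slots:   dict mapping each slot variable to its current shift assignment.
    propose: callable returning a candidate reassignment for a small,
             randomly chosen subset of slots (the limited search space).
    score:   callable evaluating a complete assignment; higher is better.
    """
    best, best_score = dict(slots), score(slots)
    current_score = best_score
    for _ in range(steps):
        candidate = {**slots, **propose(slots)}
        delta = score(candidate) - current_score  # full rescore; see text
        # Always accept improvements; accept regressions with a probability
        # that shrinks as the temperature cools.
        if delta >= 0 or random.random() < math.exp(delta / max(temp, 1e-12)):
            slots, current_score = candidate, current_score + delta
            if current_score > best_score:
                best, best_score = dict(slots), current_score
        temp *= cooling
    return best, best_score
```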
In general, the more flexibility the algorithm has with respect to meeting the service goals, the easier the problem becomes in some sense.

Besides the scheduling algorithm itself, a great deal of effort has gone into the development of efficient data structures and algorithms that enable the incremental computation of the objective function. The major computational bottleneck in DIRECTOR is the incremental, on-demand recalculation of the service goal terms in the objective function. For example, when the start time of a shift is changed from 8 AM to 9 AM, what is the impact on the service goals? For a single-telephone-queue scenario, this computation is relatively inexpensive (but still the major bottleneck); for multiskilled scenarios with e-mail queues, the evaluation becomes far more expensive and must be alleviated using various lazy evaluation, caching, and approximation algorithms.

As we noted already, almost all hard constraints involve only one agent, meaning that, in practice, satisfying hard constraints is relatively easy for the majority of the scenarios encountered by DIRECTOR. Most of the search effort is spent optimizing the soft constraints, such as the service goals and agent preferences. Therefore, the current scheduling algorithm does not attempt to perform much constraint propagation, focusing instead on brute-force, rapid generation and evaluation of candidate schedule states. This approach contrasts with constraint-directed refinement search methods (cf. Jonsson et al. [2000]; Smith et al. [2000]), which make heavy use of constraint propagation.

In addition to the standard scheduling problem described earlier, there are a number of related scheduling problems that are addressed by DIRECTOR. We describe some of these in the following subsections.

**Event Scheduling**

In addition to scheduling agent work schedules, DIRECTOR also schedules various events attended by one or more of the agents. Examples of events are training sessions and group meetings. Traditional, manual meeting-scheduling systems such as Microsoft OUTLOOK rely on the user to find a time when all attendees are available. More advanced, agent-based systems (cf. Maes [1994]) automatically schedule a meeting and notify attendees but consider only the availability and preferences of the attendees. However, in contact centers, it is dangerous to schedule an event based only on availability or individual preferences because doing so can have a direct, negative impact on the center's service goals. When scheduling events after the agents' schedules have already been finalized, DIRECTOR takes into consideration the impact on service goals. In other words, DIRECTOR will schedule an event at a time when all attendees are available and the contact queues on which the agents are working are least understaffed (see the sketch below). In addition, if the agent schedules are not yet finalized, DIRECTOR goes one step further and simultaneously reschedules the agent schedules and the event schedules to minimize the negative impact on service goals.
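A minimal sketch of the finalized-schedules case: given each attendee's availability and the per-interval net staffing of the affected queues, pick the meeting window that leaves the queues least understaffed. The representation (a flat `net_staffing` array, a single aggregate queue, and a uniform understaffing cost) is a simplifying assumption of ours, not DIRECTOR's actual cost model.

```
def best_meeting_slot(net_staffing, attendees_free, n_attendees, duration):
    """Pick the meeting start interval that hurts service goals least.

    net_staffing[i]:   agents scheduled minus agents required in interval i.
    attendees_free[i]: True if every attendee is available in interval i.
    duration:          meeting length, in intervals.
    """
    best_start, best_cost = None, float("inf")
    for start in range(len(net_staffing) - duration + 1):
        window = range(start, start + duration)
        if not all(attendees_free[i] for i in window):
            continue  # hard constraint: all attendees must be available
        # Pulling attendees off the queues lowers net staffing by
        # n_attendees per interval; cost is the resulting understaffing.
        cost = sum(max(0, n_attendees - net_staffing[i]) for i in window)
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start  # None if no window satisfies availability
```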
**Work-Force Planning**

To date, we have assumed a version of the scheduling problem in which the task is to generate schedules for a group of existing agents. A related scheduling problem is the following: given a forecast of future contacts, a set of employee class profiles that represent typical subclasses of agents (and are linked to various constraints), and some additional constraints (for example, restrictions on the percentage of class profile instances or budget constraints), generate a schedule consisting of phantom agents (instances of the employee class profiles) that optimizes the global objective function. This work-force planning problem is important for users who need to plan future hiring of contact center agents; that is, how many agents need to be hired, and what skills should they have?

In some sense, this optimization problem is more difficult than the standard staff-scheduling problem because of the combinatorial explosion. Suppose that there are two employee class profiles. Profile 1 represents an agent who can only answer widget sales calls, costs $15 an hour, and works 40 hours a week. Profile 2 represents an agent who answers both widget sales and technical support calls, works 20 hours a week, and earns $25 an hour. There are many combinations of instances of profile 1 and profile 2, and for each combination, there is a different optimal schedule (see the sketch below). DIRECTOR solves this problem with a modified version of its standard scheduling algorithm, but work-force planning is a new application where there is clearly a need for further research.
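To see the combinatorial explosion concretely, the sketch below enumerates the head-count mixes of the two profiles from the example that fit a weekly budget; each mix would then require its own schedule optimization. The budget figure and function name are our own illustration.

```
# Weekly cost of one instance of each profile, from the example above.
PROFILE_1_COST = 15 * 40  # $15/h, 40 h/week
PROFILE_2_COST = 25 * 20  # $25/h, 20 h/week

def feasible_mixes(weekly_budget):
    """Enumerate every (n_profile1, n_profile2) head-count mix that fits
    the budget; each mix defines a separate schedule to optimize."""
    mixes = []
    for n1 in range(weekly_budget // PROFILE_1_COST + 1):
        remaining = weekly_budget - n1 * PROFILE_1_COST
        for n2 in range(remaining // PROFILE_2_COST + 1):
            mixes.append((n1, n2))
    return mixes

# Even a modest $30,000/week budget admits well over a thousand mixes:
print(len(feasible_mixes(30000)))
```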
**Multiweek Constraints and Scheduling**

Currently, DIRECTOR schedules one week at a time because a week is a natural unit, and weekly scheduling is standard contact center industry practice. Most contact centers create and publish schedules on a weekly basis, regardless of whether they use work-force management software. However, there are various constraints whose time period is other than one week; for example, Joe must work between two and three weekend shifts every four weeks. The DIRECTOR scheduling algorithm handles such multiweek constraints by assuming that the shifts can be distributed evenly among the four weeks, but it is clear that such heuristics can fail. It might seem that, if we scheduled all four weeks at a time, these multiweek constraints would not be an issue, as long as the algorithm scales up. However, aside from any algorithmic problems related to scheduling longer time periods, there is a modeling problem: the longer the time period being scheduled, the higher the probability that assumptions about the forecast and agent availability (because of unscheduled absences) become invalid (or the data required to make reasonable assumptions might be unavailable). Therefore, scheduling with multiweek constraints is another area on which we will focus further research and development efforts in the future.

**Application Deployment and Case Studies**

Blue Pumpkin DIRECTOR (including both the ENTERPRISE version and the ESSENTIAL version) is currently in use at more than 800 contact centers combined, in a wide range of industries; over 110,000 contact center agents are being scheduled by DIRECTOR. DIRECTOR ENTERPRISE (the version of DIRECTOR that is the focus of this article) is in use by approximately 400 customers, including 3M, Apple Computer, Federal Express, GE, AT&T, Kaiser Permanente, Time Warner Cable, Verizon, and Yahoo!. DIRECTOR ENTERPRISE is also widely used by major outsourced contact centers, which handle inbound calls for companies such as AOL and Canon. The typical DIRECTOR ENTERPRISE user is a large contact center with 150 to 1,000 agents. In addition, DIRECTOR ESSENTIAL (described in the System Architecture subsection) is in use by more than 400 customers, including AOL/CompuServe Europe, PeopleSoft, Airborne Express, and EDS. DIRECTOR ESSENTIAL users are typically small to mid-sized contact centers with fewer than 200 agents.

Like other enterprise-class business application software, deployment of DIRECTOR involves a team of implementation specialists and includes some end-user training. It is worth noting that in most cases, the deployment complexity lies in integrating the software with the ACD (see the System Architecture subsection) and setting up the server. In many cases, the end users create the scheduling scenarios (including all constraints) and run the scheduling algorithm by themselves, using the DIRECTOR GUI. In some cases, only several hours of training are required for a contact center manager to become proficient with DIRECTOR ESSENTIAL. For DIRECTOR ENTERPRISE, it typically takes several days before users become proficient in modeling and scheduling. For complex scenarios, Blue Pumpkin consultants assist the users with building the first models, but subsequent models are usually built by the customers themselves. We believe that this relative simplicity represents a significant step forward in the popularization of constraint and AI scheduling technology. Here, we describe several case studies of customers using DIRECTOR ENTERPRISE.

**Borders Group**

Borders Group is a leading global retailer of books, music, movies, and related items. The seasonal nature of Borders Group's business, combined with a multiskilled contact center, made optimizing its work force a formidable challenge. Borders Group plans for its staffing needs well in advance of the holiday season, when customer expectations are higher than usual. Meeting these expectations is critical because Borders Group transacts a high volume of its business during the holiday season. During this period, there is a surge of more than 35 percent in call volume, making optimization of available resources and staff essential.

After deploying DIRECTOR, Borders Group evaluated various staffing scenarios to design a work-force optimization strategy that accurately reflected all of Borders' business goals. Based on a selected schedule generated by DIRECTOR, Borders Group knew how many seasonal workers to hire, covering which hours and requiring what skills, making the hiring process much easier. In addition, by focusing on the two most needed skills instead of cross-training agents on multiple skills, Borders Group was able to get seasonal staff on the telephones 33 percent faster, allowing them to be productive in 1 week instead of 3. DIRECTOR enabled Borders Group to increase agent productivity by 53 percent, with a 33 percent reduction in expenses, by allocating agent time more effectively over operating hours. Customer-service levels of 88 percent were achieved during the holiday period, with most calls answered in under 10 seconds. Borders Group reports that "[DIRECTOR] enabled us to clearly drive down our costs and deliver a high level of customer service not experienced before at Borders Group" (Charlie Moore, director of Customer Service, Borders Group). Borders was also able to reduce turnover of nonseasonal employees from 15 percent to 10 percent. These factors contributed to a 25 percent reduction in overall recruiting and training expenses.
**SGI**

SGI recently created a virtual contact center by installing a new switch that connected its four facilities located throughout the country. In the past, SGI developed schedules manually, relying on local critical-needs assessment to develop a plan. Now it needed a more efficient and accurate method for accommodating the complexities of a work force physically located in four time zones. SGI also decided to bring all customer contact in house, increasing call volumes by 50 percent, to 2,500 to 3,000 calls a week. Budget constraints discouraged a proportional increase in staff to accommodate the influx of new calls. Thus, SGI needed to improve service metrics without increasing its budget. When call volumes rose from bringing all contacts in house, head count was a concern. However, by using DIRECTOR to generate schedules, the new volumes were handled with only an 8 percent increase in staffing. The new optimized plan resulted in a 37 percent increase in agent productivity. SGI was also able to improve customer-service levels by 40 percent and avoid millions of dollars in additional agent-related expenses. In addition, SGI increased caller-satisfaction ratings by 47 percent.

**Timberline Software**

Timberline Software Corporation is an international supplier of accounting and estimating software for construction and property-management companies. Timberline's work-force manager for client services previously spent a full 40-hour work week creating a 1-week schedule. Despite her long hours, the manually created schedule could not accommodate last-minute changes and made it difficult to predict future staffing needs. DIRECTOR enabled Timberline to reduce schedule-creation time by 80 percent. This time savings allows Timberline management to focus on other duties, such as reporting, forecasting, and analysis. Prior to deploying DIRECTOR, one of Timberline's greatest challenges was predicting future staffing needs. Using its traditional manual scheduling model, Timberline predicted that it would need to increase its staff to 138 full-time specialists in 2000 to support its call volume. However, once it performed the analysis using DIRECTOR, it discovered that it needed as few as 107 full-time specialists. This reduction in future staffing represents substantial potential savings for Timberline, totaling more than $1,000,000.

**Compaq Canada CONSUMER HELPDESK**

Compaq's Canadian CONSUMER HELPDESK had already been recognized for operational excellence as "Call Center of the Year" by industry media. Recently, by deploying DIRECTOR, it was able to optimize its work-force processes even further and saw an immediate increase in customer-service performance and, correspondingly, in financial returns. In just the first quarter after deployment, Compaq Canada experienced the following performance and productivity improvements: the call-abandonment rate decreased 65.3 percent, average hold time decreased 57.3 percent, net service levels increased 16.3 percent, operational expenses decreased 15 percent, point-of-sale revenue per agent increased 17 percent, and gross margins increased 18 percent.

**Conclusions**

Staff scheduling has always been a problem of great practical importance. The recent growth in the contact center industry has highlighted the need for effective staff-scheduling systems. With their numerous complexities, real-world staff-scheduling problems have proven to be a fruitful application for AI-based techniques.
This article described Blue Pumpkin DIRECTOR, a staff-scheduling system for contact centers. DIRECTOR represents a significant application of AI techniques to solve a critical problem for an important industry. DIRECTOR ENTERPRISE and its predecessor, DIRECTOR ESSENTIAL, have been deployed successfully at more than 800 contact centers worldwide and have provided significant, quantified benefits to their users. In addition, DIRECTOR is used daily (for scenario creation, modification, and scheduling) by call center managers with less than a week of training. This result demonstrates that powerful, expressive, constraint-based systems can successfully be used by people without an engineering or operations research background.

**Acknowledgments**

DIRECTOR represents the work of a large engineering and product marketing team at Blue Pumpkin Software. Thanks to Serdar Uckun, Rich Frainier, and Steve Chien for helpful comments and suggestions on this article.

**Notes**

2. Note that by centralized, we refer to organizational centralization. Call centers are frequently geographically distributed, with calls being routed to the most appropriate resource around the world. One of the challenges in modern call center scheduling is creating a coordinated schedule that uses resources from distributed call centers.

7. The standard period for which DIRECTOR is used to generate schedules is one week.

8. For clarity, we restrict this discussion to the simple scenario in which agents only answer telephone calls. The definition of shifts and shift activities is slightly more complex when considering that agents can partition their time among several media types (for example, we can specify that an agent only answers telephone calls during a shift, or he/she can fully "blend" his/her phone and e-mail answering activities during a shift).

9. Preferences are entered either by the call center manager, using the DIRECTOR GUI client, or by the agents themselves, using a web-based interface.

10. The underlying, structured scenario model in DIRECTOR can be manipulated as an XML document. However, it is hidden from end users.

Alex Fukunaga is currently a senior member of the technical staff in the Artificial Intelligence Group at the Jet Propulsion Laboratory (JPL), California Institute of Technology. He was previously a senior software engineer at Blue Pumpkin Software. He received his M.S. and A.B. in computer science from the University of California at Los Angeles and Harvard University, respectively. His current research interests include search, combinatorial optimization, automated planning and scheduling, and machine learning. He has published over 30 technical papers.

Edward Hamilton is a senior software engineer at Blue Pumpkin Software. Since joining Blue Pumpkin in 1997, he has been working on the STAFF SCHEDULING ENGINE. Prior to working at Blue Pumpkin, he developed a conference-scheduling system for Intel. He received a B.A. in mathematics and a B.S. in computer science from the University of California at Santa Cruz. His interests include computer chess, stock market prediction, and financial data sonification.

Jason Fama is currently a software engineer at Blue Pumpkin Software. He is working on the next iteration of the Blue Pumpkin DIRECTOR software and its scheduling algorithm. He previously worked at Rockwell Palo Alto Laboratory. He received B.A.s in computer science and economics from the University of California at Berkeley.
David Andre received his Ph.D. from the University of California at Berkeley in the fall of 2002, with a specialization in machine learning and reinforcement learning. Now at BodyMedia, Inc., he heads the Informatics Group, which is responsible for making BodyMedia's body monitor detect the wearer's context, energy expenditure, and other interesting biometric measures. Andre is a Hertz fellow, has written more than 50 technical papers, holds 6 patents, and is the author of a book on the automated synthesis of complex structures.

Ofer Matan is a cofounder of Blue Pumpkin Software. His research interests include statistical modeling, optimization, and machine learning techniques. Matan has conducted research in optical character recognition and computer learning systems at Bell Labs. He received his Ph.D. in computer science from Stanford University.

Illah R. Nourbakhsh is an assistant professor of robotics at The Robotics Institute at Carnegie Mellon University. He received his Ph.D. in computer science from Stanford University in 1996. He is cofounder of the Toy Robots Initiative at The Robotics Institute. His current research projects include electric wheelchair sensing devices, robot learning, theoretical robot architecture, believable robot personality, visual navigation, and robot locomotion. His past research has included protein structure prediction under the GENOME project, software reuse, interleaving planning and execution, and planning and scheduling algorithms. He is a cofounder and chief scientist of Blue Pumpkin Software, Inc., and Mobot, Inc.
{"Source-Url": "http://www.aaai.org/ojs/index.php/aimagazine/article/download/1667/1565", "len_cl100k_base": 8122, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 34005, "total-output-tokens": 9512, "length": "2e12", "weborganizer": {"__label__adult": 0.00044655799865722656, "__label__art_design": 0.0007042884826660156, "__label__crime_law": 0.0006346702575683594, "__label__education_jobs": 0.0078125, "__label__entertainment": 0.00027561187744140625, "__label__fashion_beauty": 0.00028824806213378906, "__label__finance_business": 0.0178070068359375, "__label__food_dining": 0.000518798828125, "__label__games": 0.001900672912597656, "__label__hardware": 0.0026683807373046875, "__label__health": 0.0007143020629882812, "__label__history": 0.0003504753112792969, "__label__home_hobbies": 0.0003559589385986328, "__label__industrial": 0.0023860931396484375, "__label__literature": 0.0003063678741455078, "__label__politics": 0.0002923011779785156, "__label__religion": 0.0003688335418701172, "__label__science_tech": 0.125, "__label__social_life": 0.00020170211791992188, "__label__software": 0.356201171875, "__label__software_dev": 0.479248046875, "__label__sports_fitness": 0.0003905296325683594, "__label__transportation": 0.0008521080017089844, "__label__travel": 0.00030517578125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 45443, 0.01362]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 45443, 0.13952]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 45443, 0.93778]], "google_gemma-3-12b-it_contains_pii": [[0, 3239, false], [3239, 8228, null], [8228, 11421, null], [11421, 12483, null], [12483, 15684, null], [15684, 20745, null], [20745, 24167, null], [24167, 29331, null], [29331, 34331, null], [34331, 38988, null], [38988, 45443, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3239, true], [3239, 8228, null], [8228, 11421, null], [11421, 12483, null], [12483, 15684, null], [15684, 20745, null], [20745, 24167, null], [24167, 29331, null], [29331, 34331, null], [34331, 38988, null], [38988, 45443, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 45443, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 45443, null]], "pdf_page_numbers": [[0, 3239, 1], [3239, 8228, 2], [8228, 11421, 3], [11421, 12483, 4], [12483, 15684, 5], [15684, 20745, 6], [20745, 24167, 7], [24167, 29331, 8], [29331, 34331, 9], [34331, 38988, 10], [38988, 45443, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 45443, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
6727268fe2fca0a08bebffb886792386dec10437
[REMOVED]
{"Source-Url": "https://courses.cs.duke.edu/fall21/compsci316/lectures/21-qo.pdf", "len_cl100k_base": 6297, "olmocr-version": "0.1.50", "pdf-total-pages": 37, "total-fallback-pages": 0, "total-input-tokens": 71252, "total-output-tokens": 7969, "length": "2e12", "weborganizer": {"__label__adult": 0.0006122589111328125, "__label__art_design": 0.0009765625, "__label__crime_law": 0.0009479522705078124, "__label__education_jobs": 0.10308837890625, "__label__entertainment": 0.00021064281463623047, "__label__fashion_beauty": 0.0004229545593261719, "__label__finance_business": 0.00165557861328125, "__label__food_dining": 0.000885009765625, "__label__games": 0.0012521743774414062, "__label__hardware": 0.0010175704956054688, "__label__health": 0.0013895034790039062, "__label__history": 0.0012645721435546875, "__label__home_hobbies": 0.0004787445068359375, "__label__industrial": 0.0012540817260742188, "__label__literature": 0.00092315673828125, "__label__politics": 0.0005340576171875, "__label__religion": 0.0008654594421386719, "__label__science_tech": 0.1263427734375, "__label__social_life": 0.0007076263427734375, "__label__software": 0.0755615234375, "__label__software_dev": 0.67724609375, "__label__sports_fitness": 0.0005512237548828125, "__label__transportation": 0.000973224639892578, "__label__travel": 0.0006680488586425781}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17606, 0.0221]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17606, 0.75121]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17606, 0.69594]], "google_gemma-3-12b-it_contains_pii": [[0, 69, false], [69, 322, null], [322, 630, null], [630, 833, null], [833, 1720, null], [1720, 2409, null], [2409, 2904, null], [2904, 3331, null], [3331, 3704, null], [3704, 4174, null], [4174, 4944, null], [4944, 5377, null], [5377, 5629, null], [5629, 5716, null], [5716, 6143, null], [6143, 6474, null], [6474, 7222, null], [7222, 7723, null], [7723, 8412, null], [8412, 8938, null], [8938, 9562, null], [9562, 10131, null], [10131, 10964, null], [10964, 11378, null], [11378, 12001, null], [12001, 12111, null], [12111, 12448, null], [12448, 12860, null], [12860, 13373, null], [13373, 14091, null], [14091, 14638, null], [14638, 15329, null], [15329, 15952, null], [15952, 16453, null], [16453, 16965, null], [16965, 17201, null], [17201, 17606, null]], "google_gemma-3-12b-it_is_public_document": [[0, 69, true], [69, 322, null], [322, 630, null], [630, 833, null], [833, 1720, null], [1720, 2409, null], [2409, 2904, null], [2904, 3331, null], [3331, 3704, null], [3704, 4174, null], [4174, 4944, null], [4944, 5377, null], [5377, 5629, null], [5629, 5716, null], [5716, 6143, null], [6143, 6474, null], [6474, 7222, null], [7222, 7723, null], [7723, 8412, null], [8412, 8938, null], [8938, 9562, null], [9562, 10131, null], [10131, 10964, null], [10964, 11378, null], [11378, 12001, null], [12001, 12111, null], [12111, 12448, null], [12448, 12860, null], [12860, 13373, null], [13373, 14091, null], [14091, 14638, null], [14638, 15329, null], [15329, 15952, null], [15952, 16453, null], [16453, 16965, null], [16965, 17201, null], [17201, 17606, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17606, null]], 
"google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17606, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17606, null]], "pdf_page_numbers": [[0, 69, 1], [69, 322, 2], [322, 630, 3], [630, 833, 4], [833, 1720, 5], [1720, 2409, 6], [2409, 2904, 7], [2904, 3331, 8], [3331, 3704, 9], [3704, 4174, 10], [4174, 4944, 11], [4944, 5377, 12], [5377, 5629, 13], [5629, 5716, 14], [5716, 6143, 15], [6143, 6474, 16], [6474, 7222, 17], [7222, 7723, 18], [7723, 8412, 19], [8412, 8938, 20], [8938, 9562, 21], [9562, 10131, 22], [10131, 10964, 23], [10964, 11378, 24], [11378, 12001, 25], [12001, 12111, 26], [12111, 12448, 27], [12448, 12860, 28], [12860, 13373, 29], [13373, 14091, 30], [14091, 14638, 31], [14638, 15329, 32], [15329, 15952, 33], [15952, 16453, 34], [16453, 16965, 35], [16965, 17201, 36], [17201, 17606, 37]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17606, 0.05645]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
c43568e93b0d9a2f74009ddebad4890de4618b47
[REMOVED]
{"Source-Url": "http://static-curis.ku.dk/portal/files/76544748/read_only_data.pdf", "len_cl100k_base": 6572, "olmocr-version": "0.1.49", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 32712, "total-output-tokens": 8854, "length": "2e12", "weborganizer": {"__label__adult": 0.0004630088806152344, "__label__art_design": 0.0004553794860839844, "__label__crime_law": 0.0004622936248779297, "__label__education_jobs": 0.000713348388671875, "__label__entertainment": 0.00011992454528808594, "__label__fashion_beauty": 0.00025653839111328125, "__label__finance_business": 0.00040602684020996094, "__label__food_dining": 0.0005769729614257812, "__label__games": 0.0006718635559082031, "__label__hardware": 0.0025882720947265625, "__label__health": 0.0011091232299804688, "__label__history": 0.00045013427734375, "__label__home_hobbies": 0.00018846988677978516, "__label__industrial": 0.0007719993591308594, "__label__literature": 0.0004184246063232422, "__label__politics": 0.00042724609375, "__label__religion": 0.0008039474487304688, "__label__science_tech": 0.1685791015625, "__label__social_life": 0.00011557340621948242, "__label__software": 0.0084381103515625, "__label__software_dev": 0.810546875, "__label__sports_fitness": 0.0004169940948486328, "__label__transportation": 0.000988006591796875, "__label__travel": 0.000286102294921875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31510, 0.03768]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31510, 0.47536]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31510, 0.85268]], "google_gemma-3-12b-it_contains_pii": [[0, 683, false], [683, 3112, null], [3112, 6520, null], [6520, 10277, null], [10277, 13402, null], [13402, 15394, null], [15394, 18837, null], [18837, 22455, null], [22455, 25166, null], [25166, 28251, null], [28251, 31510, null]], "google_gemma-3-12b-it_is_public_document": [[0, 683, true], [683, 3112, null], [3112, 6520, null], [6520, 10277, null], [10277, 13402, null], [13402, 15394, null], [15394, 18837, null], [18837, 22455, null], [22455, 25166, null], [25166, 28251, null], [28251, 31510, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31510, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31510, null]], "pdf_page_numbers": [[0, 683, 1], [683, 3112, 2], [3112, 6520, 3], [6520, 10277, 4], [10277, 13402, 5], [13402, 15394, 6], [15394, 18837, 7], [18837, 22455, 8], [22455, 25166, 9], [25166, 28251, 10], [28251, 31510, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31510, 0.04403]]}
olmocr_science_pdfs
2024-11-24
2024-11-24