# Guidelines for Describing Usability Problems
Davide Bolchini
TEC-Lab, University of Lugano
Via G. Buffi 13 – 6900 TI Lugano, Switzerland
davide.bolchini@lu.unisi.ch
Sebastiano Colazzo
Hypermedia Open Center, Politecnico di Milano
Via Ponzio 64/A – 20133 Milano, Italy
colazzo@elet.polimi.it
## Abstract
This paper presents recommendations on how to communicate usability findings and, in particular, on how to describe usability problems so that reporting to the various stakeholders (developers, designers, or the client) can be done more effectively and efficiently. On the basis of state-of-the-art research on usability, the authors illustrate, with examples, dos and don'ts in characterizing the problems that emerge during usability analysis. The examples, drawn from real usability reports, mainly concern content-intensive web applications characterized by significant complexity in the user requirements addressed (a very open set of user profiles and goals), information architecture, navigation capabilities, and presentation strategies. However, the essence of the paper may be applied independently of the usability method employed and across a variety of interactive application domains and families.
## 1 Introduction and Motivation
Discovering usability breakdowns is just half of the work in carrying out a usability evaluation. Since the outcome of a usability evaluation has to be communicated to people other than those who did the analysis, communicating usability problems effectively to the relevant stakeholders (client, designers, and development team) becomes a crucial concern: only then will the problems be taken care of and the recommendations be understood as useful input for the redesign work. The outcome of an excellent usability analysis (performed by whatever method, be it user testing or inspection) may be compromised by poor communication of the usability problems. In fact, problems should be described and communicated in such a way that stakeholders are convinced of their relevance and gather all the necessary elements to orchestrate the proper strategies for fixing them.
This paper is not about how to write a usability report (the literature on this theme is abundant); rather, it provides experience-based guidelines and principles which may help usability experts deliver an effective description of usability problems. It is not primarily a question of the communication medium used (be it a written report, a talk in front of the client, a multimedia presentation, etc.) but of a forma mentis to mature, which may improve not only the way findings are delivered but the evaluation outcome itself. Anecdotal evidence shows, in fact, that being “forced” to convince someone of a usability problem is an extremely effective way to understand whether the problem is an actual problem and whether it has been well grounded and explained.
How often do usability reports miss the point? Why are usability problems hard for stakeholders to understand (even within the same design team)? Why is it so hard to convince somebody of the actual usability issue we are trying to describe? Why are stakeholders puzzled by the huge number of issues reported, unable to find a way to interpret the material and plan a consequent action? Some of the causes of these problems concern not merely communication style or skill; they involve deeper learning about how to analyse and characterize usability issues, so as to facilitate the communication of their essence to the various stakeholders.
On the basis of real-life excerpts from usability reports and running websites, the paper presents guidelines and recommendations that help usability experts avoid the abovementioned flaws in describing usability problems. For each of the aspects described, specific suggestions are formulated, corroborated by examples to make them more vivid and easily applicable.
The remainder of the paper is as follows: in Section 2 we introduce the different aspects entailed by the activity of describing usability problems, thus setting the context in which guidelines for usability analysts (especially novice ones) are needed. Sections 3, 4, and 5 illustrate the guidelines for improving, respectively, the analysis, characterization, and communication of usability problems. Finally, Section 6 offers a synopsis of the recommendations and hints at future research.
## 2 Communicating Usability Problems
What is a usability problem? In general, we can state that a usability problem is an obstacle to the quality of the user experience. Usability problems should be the key result of any usability analysis, whether performed via inspection methods [4] or user-based methods [1]. Often, usability problems are coupled with a set of design recommendations suggesting possible strategies to cope with the problem described. However, problems and recommendations are two elements very different in nature. Whereas a problem description states a potential or actual impediment for the user or a breakdown of the application, the recommendation moves in the solution world, trying to devise what can be done to overcome the problem identified. It is therefore very important to distinguish between these two types of elements, since those who agree on the problem statement (the proper domain of the usability expert) may not agree on the solutions provided (the proper domain of the designers). To better focus the purpose of our paper, we concentrate on guidelines for describing usability problems (and not design recommendations), aware that these two aspects are often coupled together.
Describing problems “the right way” should not be a concern deferred to the so-called “reporting” phase; it should be a relevant concern as early as possible during usability evaluation. In fact, the different activities concerning the management of usability problems from the usability experts’ perspective (see Figure 1) are strongly intertwined.
Figure 1. The usability problem lifecycle.
Problems have to be discovered, meaning that usability experts should be able to identify those aspects of the user experience which do not work. Experts may use one or a combination of usability methods and techniques, or may rely on their experience, assuming that no method can replace the ability and intuition of the expert in “spotting” the real problems. This phase is highly heuristic, in the sense that it is the activity (however it is carried out) in which the raw material for subsequent analysis emerges and from which the elements needed to contextualize and understand the problems are drawn. In this phase (whose boundaries cannot be set absolutely and in a clear-cut fashion), the obstacles to the user experience are circumscribed and detected, even if not yet clearly formulated and reasoned about.
To elaborate the ideas, intuitions, and rough findings of the discovery phase, a further activity entailing accurate analysis should take place. Here, the various elements composing a usability problem should be decoupled and inspected from different perspectives so as to understand the causes and origins of the issues detected. This analysis is very important, since it often enables usability analysts to better rationalize their findings, aggregate and generalize their results, or plan to go back and discover more.
Note that these activities or phases for the proper management of usability problems are cross-method, or method-independent, in the sense that they are applicable to any structured method or technique for managing the findings of the analysis at hand.
Moving from analysis to **characterization**, usability problem management crosses the boundaries of the mind of the usability expert or team and copes with the fact that problems must be understood by external stakeholders. Characterizing a problem thus means accurately and completely describing the findings and consistently orchestrating the analysis elements that have emerged so far into coherent problem statements.
Finally, the **communication** effort is of utmost importance for sending the right message to the various stakeholders [1]. Communicating problems means deciding what to say and how to say it according to the reporting circumstances and the addressee, how to prioritize and order the presentation of the findings, what to stress and what to mitigate, what bridges to build among the different parts of the analysis, and how to provide convincing arguments to support the results.
| Guideline | Analysis | Characterization | Communication |
|---|:---:|:---:|:---:|
| Separation of concerns | X | | |
| Abstraction level | X | | |
| Extendibility | | X | |
| Authority | | X | |
| Technicality | | | X |
| Prioritization | | | X |

Table 1. Coverage of the guidelines presented in this paper across the phases of the usability problem lifecycle.
The guidelines provided in the following sections may be useful in the various activities discussed so far (especially for novice or less-experienced usability analysts), namely analysis, characterization, and communication. The discovery phase is more method-dependent, and guidelines for it are therefore already provided by the variety of usability methods available in the literature and in practice. Table 1 shows which activity each of the guiding principles illustrated in the next sections supports.
## 3 Analyzing Usability Problems
### 3.1 Separate Concerns
**Guideline:** Decouple a usability problem into the various design dimensions it is concerned with (content, navigation, semiotics, graphics, etc.) and distinguish between problems which are requirements-independent and problems which depend on the purpose of the application (e.g., the scenarios supported).
Suppose one of the results of an observation is the following: “Navigation is hard in section X”. This statement glues together different elements that should be distinguished in order to know exactly where to intervene. It is too vague, and not well explained or justified. Given such a problem statement, it is not at all clear where the usability problem actually lies. Is it a problem of link names? Of link position and order? Of link colors? Of content structure within the page and across pages? Is it a pure navigational problem? If so, which aspects of navigation are not working?
A problem may often emerge as an aggregation of different elements, which should be clearly distinguished during analysis in order to understand where the actual problem lies, to make the different usability elements emerge, and to be able to intervene more analytically.
It is known that the user experience (especially if expressed by the user herself) is highly synthetic. The work of the usability expert is to transform this synthetic experience, feeling, impression or reaction into an analytic and reasoned judgment which may isolate the various concerns of the problems.
An important separation of concerns is between problems which strictly concern the design level (also called “technical problems” [5]) and problems strictly related to the application scenarios. Technical problems (those concerning navigation, consistency in layout, in link labels, in information architecture, or technological breakdowns) are typically application-independent, in the sense that they can be detected without knowing the specific purpose and communication goals of the application (see the navigation problem in Figure 2). Other, more crucial problems are those which significantly obstruct the fulfilment or the efficiency of important user scenarios (see the content problem in Figure 3). However, when the actual application requirements are not known (strategic objectives, communication goals, specific scenarios to be supported) and the analysis of the problems strongly depends on this information, analysts should limit themselves to raising issues about the problems encountered (leaving open whether each is a real problem or not), providing the stakeholders with the elements to reason about their requirements in relation to the issue raised. In other words, as shown in Figure 3, if analysts do not know the goals of the stakeholders, they cannot assume that the museum wants to promote the exhibition and should thus support the user in gathering the details necessary to plan a visit.
Figure 2. Hermitage Museum website: Orientation clues in this guided tour (e.g. 6th step out of 10) are completely missing (navigation problem) [6].
Among the various concerns to consider when analyzing usability problems, the following should at least be taken into account (see detailed explanation of these aspects in the MILE method [4]):
- **Content**: problems concerning the main messages of the application and all the information contained (produced by the content providers).
- **Semiotics**: problems concerning the interface signs, including link names, labelling, page widgets, and micro content.
- **Information Architecture**: problems concerning the overall partition of the content in the application structure, the organization and balance of the content across sections and pages.
- **Navigation**: problems concerning the connections among pages, i.e., decisions about linking a given page, set of pages, or type of page to other, related ones.
- **Operations/Transactions**: problems concerning improper design of operations available to the user or bad design of the transaction flows.
- **Technology**: problems concerning technological breakdowns (including compatibility, plug-ins) and bad functioning of operations or transaction due to implementation errors.
Each usability problem should be analyzed and examined until each of its elements falls into one of these categories. If there are cross-concern problems, further elaboration is needed, or a novel category should be introduced and used.
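To make the decomposition concrete, the following minimal Python sketch (our own illustration; the paper prescribes no notation or tooling) records a vague finding together with its decoupled elements, each tagged with one of the concern categories above:

```python
from dataclasses import dataclass, field
from enum import Enum

class Concern(Enum):
    CONTENT = "content"
    SEMIOTICS = "semiotics"
    INFORMATION_ARCHITECTURE = "information architecture"
    NAVIGATION = "navigation"
    OPERATIONS = "operations/transactions"
    TECHNOLOGY = "technology"

@dataclass
class ProblemElement:
    description: str
    concern: Concern

@dataclass
class UsabilityProblem:
    statement: str                    # the raw finding as first recorded
    requirements_dependent: bool      # depends on application goals/scenarios?
    elements: list[ProblemElement] = field(default_factory=list)

# The vague finding "Navigation is hard in section X" decoupled into
# separately actionable elements, one per design dimension:
problem = UsabilityProblem(
    statement="Navigation is hard in section X",
    requirements_dependent=False,
    elements=[
        ProblemElement("Link names do not convey the target content", Concern.SEMIOTICS),
        ProblemElement("No orientation clues within the guided tour", Concern.NAVIGATION),
        ProblemElement("Content is unevenly distributed across pages",
                       Concern.INFORMATION_ARCHITECTURE),
    ],
)
```

Once a finding is broken down this way, each element points to a specific design dimension where one can intervene, instead of a single undifferentiated complaint.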
### 3.2 Balance Abstraction Levels
**Guideline:** Describe usability problems keeping a consistent level of granularity, moving from general to specific and without abruptly mixing details with strategic issues.
Macro-problems should not be confused or intermingled with micro-problems. The difference in granularity may be decided according to the degree of impact of the problem on the overall application. It is important to characterize in depth the high-level and very general problems before digging into the details of problems concerning specific features. Important details such as “difficulties in subscribing to the mailing list via the form” should not hide, or be intermingled with, issues at a higher level of abstraction such as “purchase service declared but not actually possible”.
The level of abstraction of a problem is a good sign of how deep the analysis is and of how well the usability experts master the results delivered. Moving from general to specific is also effective for getting stakeholders to agree first on the major issues and then discuss the details. The level of abstraction at which problems are characterized and reported should be kept consistent and balanced, for at least two reasons:
- a) if analysts start by focusing on fine-grained issues (not necessarily less important ones), they risk losing the “big picture” of the application;
- b) stakeholders find it easier to follow the reasoning if analysts move from general concerns to detailed ones.
The same usability problem may be described at different levels of abstraction. Let us consider the example illustrated in Figure 4.
The problem shown may be described at least at three different levels of abstraction:
A1. **The criteria used for the information architecture are not always clear.** See for example the redundancy in section “Service” and “Information” (Figure 4).
A2. **Some link names in the “Service” section overlap almost entirely with some link names in the “Information” section** (see Figure 4). In this situation, the user is never sure of having consumed all the content available on a given topic (e.g., museum publications).
A3. **The difference between the sections “Information” and “Services” remains unclear**, even more so because the contents of these two sections are partially overlapping and repetitive. In Figure 4, the grey labels are very similar; the white labels are quite similar. The distinction between “Shop online”, “Museum shop with E-Shop functionality”, “Shops inside the museum” and “Ordering images and photographs” is not quite clear and rather complicates the orientation instead of clarifying it. The difference between “Telephone and Fax Numbers” and “Contact Information” (consisting of a long list of telephone numbers and addresses) is also not clear [6].
All three statements are true and accurate in describing the problem in Figure 4, but not all of them are equally relevant in every communication context. A1 is a very general statement characterizing a problem at the information architecture level. By way of example, it refers to the specific section illustrated in the picture, without commenting further. This quite high level of abstraction is good for overview purposes, such as executive summaries, synopses, or conclusions, to give the essence of the usability problems and then point to more detailed descriptions.
A2 describes the problem in the specific case, interpreting the issue at the semiotic level (link labels), and provides grounding related to the impact on the user experience. This middle level of abstraction is quite focused on the details and should of course be complemented by a description of the “information architecture” concern (changing the labels is not enough to fix the problem). This statement can be considered a synthetic description of the specific problem.
A3 adds a number of details to the problem, describing precisely what is not clear in which labels and introducing a further concern, namely the degree to which the content sections overlap. This low abstraction level is entirely grounded in the details of the specific case and is good for discussing the two sections at issue in depth, typically as a comment to the table, if the problem illustrated is unique in nature.
## 4 Characterizing Usability Problems
### 4.1 Extendibility: Represent Classes of Problems
**Guideline:** In the case of complex applications, describe each problem as a representative of a class of problems, and characterize each problem by providing a general statement pointing to specific examples.
Usability experts often do not have the time and resources necessary to analyze a complex application thoroughly. It often happens that problems are recorded and described as they emerge from the analysis, wrongly assuming that the spotted issue is unique and peculiar. As a consequence, the characterization of such a problem is valid only for the incidental context in which it emerged, and the consequent recommendation for improvement addresses only the fixing of that specific situation.
Let us consider the following problem description: upon entering the category “Jewels”, the user can choose among “handmade jewels” but also “bronze”, “everyday tools”, and “wooden products”, whereas the user expects to find only jewel-related objects, or subcategories of jewels [7].
Clearly, this is an important problem to characterize and point out. However, usability analysts should wonder: is this an isolated problem? What happens in the other 15-20 categories? If the expert does not have the time to inspect all the categories and all their sub-levels thoroughly (which may take longer than expected), “sample” inspection can do the job: 3-4 categories are randomly chosen and analyzed to verify the pervasiveness of the problem.
Then, when it comes to precisely characterizing the problem, usability experts should carefully describe it as a general problem (if found in more than one category), pointing to the specific case of the “jewellery” as an example. In other words, the specific case that emerged should be described not as “the” problem to solve but as a representative of a class of problems which may well occur in other parts of the application. Even if analysts find just one case of the problem (and do not have the time for even a “sample” inspection), it is advisable to generalize the problem and quote the specific example.
Note how the same problem can be characterized according to this guideline:
**Collection categories contain “foreign objects” which do not belong to the category they are in. This may lead to a confusing and disorienting browsing of the categories offered.**
**Examples:**
- Category “Jewels”, which contains the subcategories “handmade jewels” but also “bronze”, “everyday tools”, and “wooden products”.
- Category “Painting”, which contains “Religious Pieces”, “Masks”...
“Sample” evaluation can provide insight into usability breakdowns, provided that the findings are assumed neither to be isolated nor to necessarily extend to all other parts of the application. As shown in the example, combining a statement at a middle level of abstraction with one or more examples can effectively communicate this message.
In this way, once the stakeholder understands the problem and passes it to the designer to look into, the designer cannot just fix the specific example (which may not be sufficient) but is compelled to check and fix all possible occurrences of the general problem.
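As an illustration (again our own sketch, not a format from the paper), a class-level finding could be recorded in a structure that forces a general statement plus its concrete occurrences, so that no single example is mistaken for “the” problem:

```python
from dataclasses import dataclass

@dataclass
class ProblemClass:
    statement: str        # general, class-level description
    impact: str           # effect on the user experience
    examples: list[str]   # concrete occurrences found so far
    sampled: bool         # True if only a sample of sections was inspected

finding = ProblemClass(
    statement='Collection categories contain "foreign objects" that do not belong to them',
    impact="Confusing and disorienting browsing of the categories offered",
    examples=[
        'Category "Jewels" also contains "bronze", "everyday tools", "wooden products"',
        'Category "Painting" also contains "Religious Pieces", "Masks"',
    ],
    sampled=True,  # e.g., only 3-4 of the 15-20 categories were inspected
)
```

The `sampled` flag makes explicit that the examples come from a partial inspection, signalling to designers that further occurrences should be checked.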
### 4.2 Authority: Ground Your Findings
**Guideline:** Justify your findings by drawing on elements which lend credibility, such as the experience of the analysts, the impact on the user experience, and compliance with standards and conventions.
Authority is a crucial success factor of any effective communication. In communicating usability problems, by authority we mean the *source for credibility of the findings*. Where do usability problems draw their credibility from? We can have three main sources or pillars for usability results to be credible:
a) **Experience:** the analyst is recognized as knowledgeable and experienced.
b) **Consequences:** usability problems are shown to have an impact on the actual user experience.
c) **Anomalies:** usability problems patently infringe standards, good conventions, or common practice in the domain.
None of these pillars alone can gain the needed authority for usability findings. These elements have to be properly combined, and all have to contribute, with different weights, to consolidate the credibility of the usability problems. In this way, usability problems acquire their authority and become worth noticing for the stakeholders.
Let us consider the following statement: “There is information overload of promotional messages on the product page”. Why should this be considered a usability problem? To answer this question, we should start by investigating pillar b): is the user actually distracted from her task? If yes, is there any evidence in user testing or other previous research? What if the user is happier when following a popped-up product that might better meet her need? Even though pillar a) may play an important role, authority is too often delegated solely to the experience of usability gurus, on the (wrong) assumption that their indications are “the” usability rules not to be infringed. In this situation, analysts fail to gain full authority because they do not investigate pillars b) and c).
## 5 Communicating Usability Problems
### 5.1 Prioritize and Set Importance
**Guideline:** Communicate your findings in order of importance: according to the circumstances, importance may mean gravity for the user experience or the estimated effort needed to fix the problem.
A solution that conveys both messages is to communicate the problems to the stakeholders in order of gravity for the user experience (e.g., the completion of crucial scenarios is hindered) and then to roughly estimate the effort or expertise required to fix each problem (see Table 2). The table may be ordered by gravity for the user experience or by effort to fix, giving stakeholders more elements to decide how to act. In fact, the final message to be perceived by the stakeholder is: “What should I do now? What should I focus on first, and what can I plan as future activities?”

| Problem | UX gravity | Effort to fix |
|---|:---:|---|
| Problem 1 | 10 | 0.1 person-months |
| Problem 2 | 8 | 0.4 person-months |
| Problem n | 5 | 0.2 person-months |

Table 2. Usability problems prioritized by gravity for the user experience, with a rough estimate of the effort to fix each.
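A trivial way to support both orderings (an illustrative Python sketch, not part of the original paper) is to keep the findings as records and sort on either column:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    ux_gravity: int    # e.g., 1 (cosmetic) to 10 (crucial scenarios blocked)
    effort_pm: float   # rough fix estimate, in person-months

findings = [
    Finding("Problem 1", ux_gravity=10, effort_pm=0.1),
    Finding("Problem 2", ux_gravity=8, effort_pm=0.4),
    Finding("Problem n", ux_gravity=5, effort_pm=0.2),
]

# Order by gravity for the user experience (most severe first) ...
by_gravity = sorted(findings, key=lambda f: f.ux_gravity, reverse=True)
# ... or by estimated effort, surfacing the "quick wins" first.
by_effort = sorted(findings, key=lambda f: f.effort_pm)
```

Presenting both orderings side by side lets stakeholders trade severity against cost when planning the redesign work.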
### 5.2 Technicality: Avoid “Usability” Jargon
**Guideline:** Adapt your concepts and wording to the target audience, so that stakeholders clearly understand the essence of the problems.
The main addressees of the communication of usability problems are not necessarily technical people, or people with a strong computer or usability background. Moreover, we have to abandon the idea that they should know the principles of usability or the concepts of the methods which experts use. Explaining the problem in plain concepts and wording is an important condition for usability problems to be accepted. “Usability” or “design” jargon must be avoided, and used only exceptionally, perhaps to convince the stakeholder of the breadth of knowledge of the usability analysts, and never at the expense of the key message.
A problem statement like “Technical heuristic T4 showed that the structural navigation within nodes of the entity type ‘painting’ is inconsistent and not predictable” is far from easily understood by stakeholders, with the consequence that they may start to consider the usability analysis irrelevant to them. Such a problem may be rephrased as: “Navigation among the different details of a painting is difficult and disorienting, since links sometimes disappear and their logic is not easy to understand”.
A problem like “iconic information overload on the homepage hinders proper understanding of application topic coverage” may be translated and explained as “users may be overwhelmed by the numerous images and graphics on the home page, which pose serious obstacles to quickly grasping what the site is about”. Depending on whom the problem communication is addressed to (designers, information architects, clients, developers, webmasters, project managers), the proper lexicon should be used to convey the problem’s meaning.
## 6 Conclusions and Future Work
On the basis of real-life excerpts from usability reports and running websites, this paper has presented guidelines and recommendations that may help usability experts achieve successful and effective descriptions of usability problems. The guidelines support the different activities involved in problem description, namely problem analysis, characterization, and communication. The guidelines, summarized in Table 3, are particularly relevant for web application usability analysis (especially for content-intensive and complex websites) but are also applicable to a variety of interactive applications. Independently of the specific usability evaluation method and communication medium used (a written report, an illustrated presentation, or a simple brief), analysts can smoothly integrate these guidelines into their usability practice, improving the overall communication of their findings.
| Guideline | Description |
|---|---|
| Separation of concerns | Decouple a usability problem into the various design dimensions it is concerned with (content, navigation, semiotics, graphics, …) and distinguish between problems which are requirements-independent and problems which depend on the requirements of the application (e.g., the scenarios supported). |
| Abstraction level | Describe usability problems at a consistent level of granularity, moving from general to specific and without abruptly mixing details with strategic problems. |
| Extendibility | In the case of complex applications, describe each problem as a representative of a class of problems, and characterize each problem by providing a general statement pointing to specific examples. |
| Authority | Justify your findings by drawing on elements which lend credibility, such as the experience of the analysts, the impact on the user experience, and compliance with standards and conventions. |
| Prioritization | Communicate your findings in order of importance: according to the circumstances, importance may mean gravity for the user experience or the estimated effort needed to fix the problem. |
| Technicality | Adapt your concepts and wording to the target audience, so that stakeholders clearly understand the essence of the problems. |

Table 3. Synopsis of guidelines for describing usability problems.
Future work will consolidate and expand these guidelines according to growing project experience, and will devise strategies to validate their effectiveness against situations in which problem communication does not follow these recommendations. To this end, experiments will be set up in which different groups of novice analysts use the same usability method to discover problems and are then trained differently in how to describe them (one group using the presented guidelines, the other without any guidelines). The effectiveness of the communication to the stakeholders will be verified and compared against the impressions and opinions of the stakeholders themselves.
## Acknowledgements
The authors thank all members of the TEC-Lab (www.tec-lab.ch) at the University of Lugano, in particular Luca Triacca and Marco Speroni, for their valuable research and project work on usability evaluation methodologies. We warmly thank Paolo Paolini, TEC-Lab scientific coordinator, for his insights, experience, and ideas on effectively communicating usability problems. Parts of the application examples reported and used to illustrate usability problems were elaborated on the basis of usability analyses produced by students [6][7] of the Master TEC-CH (Technology-Enhanced Communication for Cultural Heritage, www.tec-ch.unisi.ch), an innovative Master’s program of the University of Lugano.
## References
---
Experience Management Wikis for Reflective Practice in Software Capstone Projects
Eric Ras, Ralf Carbon, Björn Decker, and Jörg Rech
Abstract—Software engineering curriculum guidelines state that students should practice methods, techniques, and tools. A capstone project is one way to address this aim. A capstone project helps students increase their problem-solving competencies, improve their social skills (e.g., communication skills), and gather practical experience. A crux of such projects is that students perform “reflective” practice in order to learn from their experiences. The authors believe that experience gathering and reuse are effective techniques to stimulate reflective activities. An adapted free- and open-source Wiki-based system called the software organization platform (SOP) is used to support students in managing their observations and experiences. The system can be used for experience exchange within the team and for experience reuse in forthcoming projects. The results of a case study show that standard Wiki functions improve communication and information sharing by means of explicit observation and experience documentation. A total of 183 documented observations and experiences at the end of the project provides a measure of the amount of reflection students engaged in during the capstone project. Still, the advantages of using Wikis decrease when no technical adaptations of the Wiki to the learning objectives and to the software engineering tasks are made. Limitations of the case study, future evaluation steps, and planned developments of SOP are also provided in this paper.
Index Terms—Capstone project, experience management, knowledge-based systems, open-source software, reflection, software engineering.
I. INTRODUCTION
Capstone projects have been shown to support self-directed and experiential learning [2], where students reflect on and interpret their experiences to build abstractions (e.g., models, principles, strategies, theories), which are applied and tested in new situations and which provide the foundation for having new experiences.
The Software Engineering Curriculum Guidelines (SE2004) [3] provide learning objectives, sample courses, and key knowledge areas that should be taught to undergraduate students. Two interesting learning outcomes to be highlighted here are: “Show mastery of the software engineering knowledge and skills necessary to begin practice” and “Demonstrate skills such as interpersonal negotiation, effective work habits, leadership, and communication.” SE2004 also mandates that students undertake a capstone project to expose the students to the application domain, and that software engineering (SE) should be taught as a problem solving discipline.
However, many capstone projects risk overloading students because they get overwhelmed with so many new topics, because they have to understand the different roles and responsibilities assigned, and because they have to cope with obstacles (e.g., changing software requirements) [4]. Teachers want to provide realistic projects and conflicting situations as they happen in the real world to prepare students for their jobs [5]. In addition, students are supposed to pass through all the technical development phases and perform project and quality management. Umphress et al. analyzed 49 capstone projects at the graduate and undergraduate level and stated that most students had difficulties in balancing and estimating the workload during the project, that configuration management and defect tracking were neglected, and that team members’ responsibilities were not clear [6]. Students need systematic guidance during such a capstone project to lower the risk of these problems and to ensure that they are not distracted from their learning objectives and can perform “reflective” practice during problem solving.
A capstone project (i.e., a practicum project) conducted at the research group Software Engineering of the University of Kaiserslautern (UKL), Germany, is a project that lasts two to three months and covers all the software development phases. The students get in contact with a real industrial customer. They develop either a complete smaller software system or a new component for an existing system by applying state-of-the-art SE approaches. The goal of these projects is not to teach the students the usage of specific and complex SE tools, but to focus on SE principles, methods, and techniques. Therefore, more and more easy-to-use free- and open-source software (FOSS) has been used in the projects in recent years to keep the training period for getting familiar with the tools as short as possible. In addition, academic institutions have to cope with shrinking budgets and are not able to acquire the newest commercial tools or the latest updates [7].
In this paper, a Wiki-based system is used to support students in practicing more reflectively and in training their metacognitive skills. The approach addresses the focus of this issue by presenting a concrete success story about using a Wiki in SE education for reflective practice and by providing first evaluation results from a case study. The results provide a foundation for more focused, controlled evaluations in the future. Another reason for using a Wiki in the capstone project was to further investigate the impact of FOSS Wikis on the effectiveness and efficiency of specific SE activities.
Section II describes the potential of Wikis for higher education and SE activities. After a description of the capstone project setting, the conceptual infrastructure of the FOSS experience management system is described, and the process of how observations evolve into software experiences through reflection and abstraction is shown. Results from a recent case study show the utility of the Wiki for different SE tasks and provide evidence that a Wiki supports reflection activities through experience management.
II. FREE- AND OPEN-SOURCE WIKIS IN EDUCATION
A few years ago, higher education began to explore the potential educational value of FOSS. Many educational institutions have been running installations of FOSS learning management systems, such as Sakai or Moodle, for several years. In addition, many academic institutions use FOSS for practicum projects, or they let students participate in FOSS projects so that they can take part in big projects [7]. Using FOSS is not only a matter of easily accessing and adapting the code; FOSS also affects the developed products and how the products are developed, yet another reason why FOSS should be used in SE education [8].
Currently, the amount of social FOSS used in education is increasing constantly. FOSS supports people in connecting or collaborating through computer-mediated communication and in forming online communities [9]. Chat rooms and instant messaging are just two common examples of social software. Other social software such as Wikis, which have their origin in SE, have been recognized as beneficial knowledge management and group communication tools in the corporate world. A Wiki system, by definition, is “the simplest online database that could possibly work” [9]. Higher education is starting to investigate the potential of Wikis (and also blogs) regarding their support for learning, communication, and interaction processes [10], [11]. Wikis have been used as platforms for documentation, minutes, glossaries, or repositories for additional learning materials. Their advantages for higher education are fast installation, easy adaptation to educational purposes, no acquisition costs, and intuitive usage.
In addition, Wikis are also used in SE, and some of them have even been used in capstone projects: Trac is a Wiki written in Python that integrates an issue tracker and allows relating Wiki pages to issues, and vice versa [12]; MASE, a Wiki for agile software engineering, offers plug-ins for agile software development, in particular for iteration planning and the integration of automated measurement results [13]; SnipSnap, a Java-based Wiki, allows read-only integration of code documentation and offers support for the integration of Wiki entries into the integrated development environment Eclipse [14]; EclipseWiki is a Wiki integrated into Eclipse [15]; FitNesse is a Wiki-based test management framework that allows capturing and running test cases [16]. Recently, the software organization platform (SOP), an adapted Wiki for SE that was also used for the case study in this article, has demonstrated its usefulness for stakeholder participation in requirements engineering [17].
III. CAPSTONE PROJECT
Since 2001, the research group Software Engineering at the University of Kaiserslautern has conducted an open-source (OS) capstone project once a year in cooperation with an industrial customer and Fraunhofer Institute for Experimental Software Engineering (IESE), Kaiserslautern, Germany.
A. Background of Students
The computer science (CS) undergraduate curriculum contains practical CS courses and technical and theoretical courses, mathematics courses, and an elective field of study such as electrical engineering or economics. When the students receive their bachelor’s degree after three years, they are able to design small object-oriented systems based on the knowledge obtained in the courses Development of Software Systems I–III, and they are able to implement and test small software systems in teams of three to four people. If the students choose to focus on SE after the third semester of their bachelor studies, they have to enroll in the course Foundations of Software Engineering. In the two-year Master’s program, the students can again choose the SE option. These students can then sign up for the capstone project. From their bachelor studies, they already know and understand the processes, methods, techniques, and tools that are used to develop large and complex software systems. At the end of their Master’s studies, they receive a Master’s in CS.
B. Learning and Project Objectives
The main project goal is to fulfill the interests and requirements provided by the industrial customer and to get the students involved in a real industrial project. Students are fully responsible for eliciting the requirements, designing the prototype, and implementing and delivering the system on time. They will be faced with changing requirements, communication problems, etc. After the project, the students should be able to:
• know and understand the different roles and responsibilities in a software development project, especially the management-oriented roles such as the project manager;
• communicate and interact with a real customer;
• carry out project estimation (i.e., effort, time, quality);
• develop software in a team of 10 to 14 students;
• execute a well-defined software development process;
• judge existing OS components and understand the consequences when they are integrated into a product;
• understand the importance of software and experience documentation for future projects (i.e., to document observations and experiences during the project);
• be aware of their own thinking and decision-making processes;
• reflect about events and changes of situations that originate from performed actions.
C. Roles and Processes
A team of students works full time for two months in a laboratory environment between the summer and winter terms on the capstone project. Working between the summer and winter terms has proven successful because the students can fully concentrate on the project goals and tasks. Research staff of the working group Software Engineering at UKL and of Fraunhofer IESE coach the students during their work.
The students take over all roles in the project. The roles cover management-oriented roles like project manager, quality assurer, product manager, or experience manager and technical roles such as requirements engineer, architect, or tester. Each student must apply for one role during the kickoff meeting of the project. The capstone project follows an iterative development process with two iterations. Both iterations run through the phases requirements analysis, architecture and component design, implementation, and test.
D. Technical Environment of the Capstone Project
The technical laboratory environment, i.e., the tool environment, consists of FOSS and one commercial system:
• integrated development environment Eclipse;
• configuration management Subversion (product repository);
• experience management system SOP, based on MediaWiki;
• a commercial workflow management system.1
The usage of an integrated development environment (IDE) and a product repository is standard practice. A workflow management system was used to guide the students and ensure that they follow a prescribed development process. The role of SOP is described later.
IV. SUPPORTING REFLECTION THROUGH EXPERIENCE MANAGEMENT
According to the last four learning objectives (see Section III-B), reflective skills are key skills to be learned by the students, in addition to the technical skills of SE. The same has been stated by Socha et al. for other engineering disciplines. They mention experiential learning as an effective way to teach these skills, with students continually going through a learning cycle: “practicing, reflecting on the difficulties, discovering new models (or having them introduced by facilitators or other students), and then practicing again” [18]. Reflection is a phase of the well-known learning cycle of Kolb and Fry [19], [20], who investigate the learning process related to learning from experiences and whose research has its foundation in the work of Lewin [21], Dewey [22], and Piaget [23]. Reflection is the prerequisite for learning from experience (e.g., in order to form abstract concepts) and for improving actions and professional practice [20]. Self-regulated learning theories focus on how students could activate, change, and maintain their learning practices. In recent years these theories have concentrated more on information processing and, in particular, on the metacognitive process of self-reflection [24]. Metacognition is related to higher order thinking that involves active control over the cognitive processes engaged in learning [25]. As Anderson and Krathwohl describe in their book on educational objectives, which revises the taxonomy of Bloom [26], metacognitive skills are skills that make the learners aware of their own knowledge and their ability to understand, control, and manipulate their own cognitive processes [27]. Hence, supporting self-reflective processes in a learning environment could enhance the learning benefit of the performed activities and give opportunity to review previous actions and decisions before proceeding to a next activity. Angelo and Cross provide an overview of how teachers can promote metacognition in a classroom [28].
The value of reflection has already been proven in situated cognition theory (e.g., cognitive apprenticeship and anchored instruction). The work of Schön highlights the importance of knowledge resulting from real experiences of professionals [29], [30]. Schön distinguishes between two types of reflection that facilitate the learning and activity of professionals: reflection-in-action and reflection-on-action. Short-term reflection-in-action is performed while people act and experience. The activity is reshaped while the activity is performed. Reflection-on-action is retrospective thinking about an experience after an activity or during an interruption. Other persons could be involved. The latter provides an understanding of practice and is a way practitioners may learn from their experience.
Teaching development and reflective skills is made harder by the short, two-month duration of the projects. This limited time requires the students to start on the assigned tasks as early as possible. How can one ensure that the customer requirements are met at the end of the project, that students practice the previously mentioned skills, and that they learn efficiently from the experiences made during the project?
A FOSS Wiki was adapted for experience documentation, understanding, and sharing, to teach reflective skills and to support coaching and guidance. In addition, the strategic goal was to build up an experience base and to gather data (effort, defects, etc.) systematically to support future OS capstone projects. Sections IV-A–C explain how experience management works in SE, how experience management can support reflection, and how observations and experiences are documented by the students.
A. Experience Management in Software Engineering
The reuse of existing knowledge and experience is one of the fundamental principles in many sciences. Engineers often use existing components and apply established processes to construct complex systems. Without the reuse of well proven components, methods, or tools, the systems have to be rebuilt over and over again.
During the last 30 years, the fields of software reuse and experience management (EM) have been gaining increasing importance. The roots of EM lie in experimental SE (“experience factory (EF)”), in artificial intelligence (“case-based reasoning”), and in knowledge management. EM includes methods, techniques, and tools for identifying, collecting, documenting, packaging, storing, generalizing, reusing, adapting, and evaluating experience, and for the development, improvement, and execution of all knowledge-related processes. The EF is an infrastructure designed to support experience management (i.e., the reuse of products, processes, and experiences from projects) in software organizations [31]. The EF supports the collection, preprocessing, and dissemination of experiences. The EF separates the project and the experience organization physically, or at least logically, as shown in Fig. 1. This separation is meant to relieve the project teams of the burden of finding, adapting, and reusing knowledge from previous projects and to support them in collecting, analyzing, and packaging valuable new experiences that might be reused in later projects.

1The name of the commercial workflow management system is not mentioned because the choice is not motivated by the context of the capstone project.
For example, if software engineers begin a project (“plan project”), they can use the experience factory to search for reusable experience in the form of reference architectures, design patterns, or process models based upon the project context. In the execution phase (“execute project”), the EF is used to retrieve experience “on demand” (e.g., to support decisions or reuse source code). Furthermore, during the project and at the end of the project, the project is analyzed (e.g., using a retrospective workshop) to extract reusable observations and experiences that might be useful in other projects.
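The interplay can be summarized as a retrieval/packaging loop. The sketch below (our own simplification in Python; the EF literature [31] defines this organizationally, not as code) shows the two interactions named above: context-based search when planning or executing a project, and packaging of new experiences at project end:

```python
from dataclasses import dataclass

@dataclass
class PackagedExperience:
    name: str
    context: dict     # project characteristics the experience was drawn from
    content: str      # e.g., a reference architecture or process model

class ExperienceFactory:
    """Logically separate from the project organization (cf. Fig. 1)."""

    def __init__(self) -> None:
        self._base: list[PackagedExperience] = []

    def search(self, project_context: dict) -> list[PackagedExperience]:
        # "plan project" / "execute project": retrieve experiences whose
        # recorded context matches the requesting project's context
        return [e for e in self._base
                if all(e.context.get(k) == v for k, v in project_context.items())]

    def package(self, experience: PackagedExperience) -> None:
        # retrospective analysis: analyze, generalize, and store for reuse
        self._base.append(experience)
```

The point of the separation is visible in the interface: project teams only ever call `search` and hand over raw material for `package`; the refinement work happens on the experience organization's side.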
B. Reflection Activities in the Project
Reflection activities are supported by SOP, an adapted Wiki for information and experience management in software projects (Fig. 2). SOP provides information for guiding the students as well as documented observations and experiences, in particular during project execution. SOP serves as a means to capture observations and share all kinds of information relevant to the project [32]. Examples of SOP content are descriptions of roles and their responsibilities, process descriptions, document templates, documentation guidelines, observations and experiences on software engineering (SE) technologies, etc.
Most reflective activities refer to reflection-on-action, i.e., reflection after an activity or when an activity is interrupted (see dashed ellipses in Fig. 2). They help the students reflect on recent observations and experiences, decisions made, and remaining problems. Stand-up meetings, moderated by the coach, are conducted every morning (status of the project, discussion of the previous day’s problems, etc.); an official feedback meeting with the customer takes place after the requirements are documented in the Wiki; an internal review meeting, attended by all project members, is held to discuss a first design of the architecture; after testing, the system is presented to the customer.
The students receive valuable feedback about the developed system (focused on functionality); a goal-oriented retrospective workshop summarizes the first iteration with regard to the development process and technologies used, teamwork, roles, the technical development infrastructure, and decisions about possible improvements for the second iteration. During or after these reflective activities, the students are asked by the experience engineer to document their observations and experiences in the Wiki. The experience engineer defines processes and guidelines for gathering observations and experiences, analyzes the findings, and packages them (see Section V for more details). Appropriate templates, offered by SOP, stimulate the students to reflect on the discussions and their own observations, and to self-reflect on their own knowledge, their learning process, the applied problem-solving strategies, etc. These templates use so-called reflective questions and reflective prompts that help the students revise the details of the learning experience, move toward critical thinking, and create an action plan [33] after documenting the experience. Questions are of a more general nature, while prompts are more focused questions. Both types are related to the attributes of the templates (see Section V for details about the templates). The same reflective actions take place during the second iteration. At the end of the project, a final retrospective workshop is conducted. The purpose of both retrospectives is to gather more observations and experiences from students and to vote on the lessons learned and reported.
SOP stimulates reflection-in-action by offering definitions, examples, and detailed descriptions of processes, technologies, roles, etc. (more about the information structures in [32]). In addition to this information, SOP offers the reuse of already available observations, experiences, patterns, and laws gathered in previous projects or during the first iteration. This information not only guides the students through their tasks but also leads them to reflect about current activities and to think about how to adapt them.
C. Gathering Experiences
The observations and raw experiences from the projects are further refined by the students in the experience factory (Fig. 3). In SOP, customized templates can be designed by means of an extended Wiki functionality that uses the normal Wiki syntax and cascading style sheets (CSS). Currently, SOP offers two templates for experience management: observations (O: name, situation, problem, solution) are suitable for easy and fast documentation of experiences; semiformal experiences (E^SF: name, situation, cause, solution, known exceptions, benefits, consequences, metadata) enable the student to provide more details about the experience and its context. In general, two main processes are used for refinement: 1) the formalization of the subjective and informal elements; and 2) the generalization of experiences to more abstract and generally applicable representations. Both require the student to perform reflection. As shown in Fig. 3, formalization is used to transform observations (O) into semiformal experiences (E^SF) and finally into formal experiences (E^F). Here, the aggregate state "semiformal" refers to, for example, a structured template such as a pattern or experience template [34]. A formal experience represents a precise and clear definition, for example, based on predicate logic [35]. Generalization is used to summarize multiple project-specific experiences (E) into a pattern-aggregate of an experience (E^PA) and finally into a law-aggregate of an experience (E^LA). The core goal of this step is the decontextualization of the experience from its project, domain, (programming) language, or technology context. The aggregate state "pattern" represents, for example, a design pattern such as "abstract factory" [36], which is applicable not only in one project but in almost all object-oriented software systems. A law is a generally applicable statement, principle, or heuristic that is valid for all software systems, e.g., Brooks' Law: "Adding manpower to a late project makes it later" [37]. The focus during the project was on observations and semiformal experiences. Patterns and laws are defined together with the coaching staff of the project.
Additional metadata for describing the experiences include: type (the origin of an experience; typical values are process, product, customer, organization, people, or project); source (the source of the experience: external or internal); aggregate state (whether the element is classified as an observation, experience, pattern or antipattern, or law); and formality (the degree of structure, completeness, precision, and unambiguity: informal, semiformal, or formal).
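To make the two templates and their metadata concrete, the following sketch renders them as data types. This is an illustrative model only, not SOP's actual schema; all type and field names are assumptions derived from the attributes listed above.

```typescript
// Illustrative model of SOP's two experience-management templates.
// Type and field names are assumptions, not SOP's actual schema.

type AggregateState = "observation" | "experience" | "pattern" | "antipattern" | "law";
type Formality = "informal" | "semiformal" | "formal";

interface ExperienceMetadata {
  type: "process" | "product" | "customer" | "organization" | "people" | "project";
  source: "external" | "internal";
  aggregateState: AggregateState;
  formality: Formality;
}

// Observation template (O): lightweight, fast documentation.
interface Observation {
  name: string;
  situation: string;
  problem: string;
  solution: string;
}

// Semiformal experience template (E^SF): richer context for reuse.
interface SemiformalExperience {
  name: string;
  situation: string;
  cause: string;
  solution: string;
  knownExceptions: string;
  benefits: string;
  consequences: string;
  metadata: ExperienceMetadata;
}
```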
V. EVALUATION
The goal of the evaluation was to assess the general utility of the FOSS platform for SE and experience management purposes and to investigate whether students learn from their experiences. The evaluation impact model in Fig. 4 shows which aspects were covered by the evaluation. Communication is an essential aspect in any short capstone project where the participants do not know each other beforehand. Technically supported communication has an impact on the usage and utility of the platform itself. In addition, SOP is intended to offer an experience management infrastructure. Experience gathering (i.e., through discussions, documentation, prioritizing, abstraction, and formalization) requires that the students reflect upon their experiences. Reflection-in-action is related to reusing existing experience of SOP in the current context. By reusing experience descriptions, the students reflect about their current activity and possible changes to it. Sufficient reflection ensures that the students learn from their experiences.
Each of the participants was asked to complete an online questionnaire covering the aspects of the impact model. Two types of questions were used: most questions were answered by indicating the degree of agreement on a four-point scale (i.e., fully agree, partially agree, partially disagree, fully disagree); the second type was multiple choice, where zero or more options from a list could be selected. In addition, the contributions of the participants were analyzed using the Wikistats evaluation tool [38].
Thirteen of the 14 participants (93%) completed the questionnaire. Seventy-seven percent possessed programming experience, while 31% of all participants gained their experience in industrial organizations. All participants stated that they used the Wiki, and 69% used the Wiki regularly, which was also confirmed by the data of Wikistats.
Communication was rated by the participants as follows: The improvement of information exchange was rated both within the team and among different teams. Except for the usage of contributions originating outside the team, all aspects achieved 100% full or partial agreement. The results imply that SOP improves the exchange of information, in particular within teams. Furthermore, the participants stated that by using SOP they received more information than they would have using only verbal communication, with 23% agreeing fully. The number of edits per day obtained via Wikistats confirmed that SOP improves communication: it reached a maximum of 175 during the requirements phase and dropped to about 100 edits per day later. (Because of privacy issues, a more detailed investigation per role or per person was not possible.)
To evaluate the usage and utility of SOP, the Technology Acceptance Model (TAM) [39] was used, a thoroughly tested model for assessing technology acceptance, which is of particular importance in capstone projects: these projects have a rather short run time, and the students are expected to focus on the complexity of the project and should not be distracted by tool complexity. Ease of use was rated at a minimum of 77% (sum of the answers fully/partially agree). An exception was the last item ("sometimes SOP behaves unexpectedly"), with which 69% disagreed fully or partially. The top three ease-of-use items were learnability, mastering of SOP features, and easy interaction. Based on this data, the conclusion was drawn that SOP is easy to handle and to learn. Perceived usage was agreed with fully or partially by a minimum of 69%. The top item was access to information about the project, followed by general usage and contribution of project information. These results show that SOP can be used as a platform to exchange (capstone) project information. Concerning self-predicted future usage, more than 84% agreed fully or partially that they would use SOP for managing experiences in particular and for projects in general. Furthermore, 83% answered that they would use SOP even more if the support for document templates and the visualization of related articles were improved. The answers in the questionnaire are backed up by the data gained via Wikistats: more than 360 articles were created during the capstone project, most of them during the requirements phase between mid-August and the beginning of September. Seven participants were rated as very active (i.e., 100 or more edits within one month).
Perceived utility, the second part of the TAM, provided a utility evaluation of SOP independent of concrete SE tasks. Six of nine items had a rating of more than 50% full or partial agreement. The lowest-rated item was performance, with 38% partial agreement. The top three were general utility, quality of products, and improvement in productivity. SOP is thus, in general, perceived as useful (and usable) in capstone projects. The lower ratings might be caused by important features missing in SOP during the run time of the capstone project. An example was the missing export function from Wiki pages to office documents, which was needed several times in the capstone project for reporting to the customer.
The utility of SOP was also investigated regarding its support for SE tasks. The general utility for SE tasks was rated using multiple choice. Five of eight tasks were rated useful by over 50% of the participants (i.e., experience management, requirements, design, quality assurance, and project management). Therefore, the general utility of SOP is high. A reason for the lower rating of the other tasks might be that they are not supported by dedicated functions within the Wiki.
The perceived effort savings were captured according to the tasks in a software project (requirements, design, quality assurance, integration and test, implementation, product and configuration management, project management, and experience management). For four of these tasks, more than 50% of the participants agreed fully or partially that using SOP would save effort (Fig. 5). The top three tasks were experience management, project management, and configuration management. For experience management, there was no disagreement that SOP actually saves effort. These results show that SOP provides good support for managerial tasks but needs to support hands-on software development tasks better. Again, the low rating of the effort-saving effects might be caused by SOP not supporting these tasks, with one exception: as mentioned above, the effort during the requirements phase was increased because office documents had to be created by hand. In the current SOP version, this export can be performed automatically.
Because of the difficulty of assessing the amount of reflection-in-action, other indirect measures regarding the quantity and quality of experience reuse were used. The perceived reusability of experience was rated as follows: 85% agreed fully or partially that the captured experience will be reusable within the team. Seventy-five percent stated that the experience would also be reusable by other teams. That the experience is of no use was disagreed with fully or partially by 64%. These results show that experience reuse is accepted by the students and supposed to be helpful even across capstone projects.

Fig. 5. Perceived effort savings in SE activities.

The quality of reused experience was rated using an inverse scale, since the items asked for quality deficits. All quality issues (understandability, too abstract, not applicable, missing context description, too specific) were disagreed with fully or partially by at least 61%. The largest deficits identified were in the areas of context and abstraction level: the students had problems adapting the documented experience to their own situation. However, the results imply that the overall quality of the experience was sufficient, although improvement is necessary. The potential utility of types of experience, i.e., which types of experience would have been helpful in the project, provided the following results: all types had at least 53% full or partial agreement, and no full disagreement. The top three types were experiences with tools (100%), SE methods (92%), and processes (85%). This feedback will be used to prioritize the types of experience offered to the students in future capstone projects.
Reflection-on-action is supported by the different types of meetings and, technically, by SOP (i.e., experience gathering). The following questions refer only to SOP. Each documentation of an observation or experience requires self-reflection by the students; therefore, the number of documented items can be used as an indirect measure of reflection. During the project, 178 observations and five experiences were gathered. Wikistats showed that the articles were used throughout the capstone project and that the document structure remained stable. That experience management in general is supported by SOP was agreed with fully or partially by 92%. The top three types of gathered experience were lessons learned about tools (62%), processes (46%), and products/project management (38%). Other types of experience, such as SE techniques, SE methods, SE principles, and fellow project team members, achieved 31%.
Concerning newly acquired topics (Fig. 6), the top three items in terms of full or partial agreement were products, fellow project workers (employees), and tools. Five of eight topics were rated better than 50%. New topics about processes were acquired by 53%, and new project management topics by 46%. This data indicates that many new topics were acquired in general. The topics of particular relevance in capstone projects (processes, project management) also had a fair rating of about 50%. However, further improvement should be made.
VI. CONCLUSION AND FUTURE WORK
Wikis have several advantages for higher education, such as fast installation, easy adaptation to educational purposes, no acquisition costs, and intuitive usage. Many of these advantages were confirmed by the application of SOP. The evaluation showed that SOP is a suitable technical environment for gathering and reusing experiences, for supporting requirements engineering in particular [17], for accelerating feedback cycles, and for improving communication among students. The results of this case study help to convince other departments of the UKL to use FOSS for educational purposes. Many observations and experiences have been gathered for future projects, and in the process the students developed their reflective skills and learned from their experiences. The evaluation showed that standard Wiki functions support communication and information sharing, but a Wiki must be further adapted to the educational objectives and tasks of a capstone project. Without these extensions, no significant effort savings will be obtained in specific SE tasks. Wikis are not a replacement for continuous coaching and guidance of the students by educational staff; they are merely a means to support these activities.
One limitation of the evaluation is that no direct measures related to learning or reflection in particular were used. In addition, the evaluation did not contain a formal assessment to find out whether the learning objectives had been met. Therefore, two controlled experiments will be conducted in 2007 that focus especially on identifying factors that have an impact on the learning outcome (i.e., the response variable) when using SOP. A first fractional factorial design will serve to find the two factors that have the biggest impact on the response variable (i.e., a higher number of groups with few subjects in each group) and to build a baseline for the hypotheses of the second experiment. A second factorial experiment will use the identified factors (with a maximum of two alternatives each) to identify their impact on the response variable with statistical significance (i.e., a small number of groups with a higher number of students in each group). The results of these experiments will be published in 2008.
Currently, SOP is being extended to serve also as a learning platform that integrates knowledge management with e-learning [40]. SOP will be able to provide learning content to the students during work. In addition, special emphasis will be put on the improvement of experience reuse by means of learning spaces [41].
To support further research and development of FOSS systems such as SOP, the authors decided to provide a stable version of SOP to the public in summer 2007. Other research institutions could then further develop and evaluate SOP for their own educational purposes and interests.
Fig. 6. Newly acquired topics.
He received the B.Sc. degree in computer science with a minor in electrical science from the University of Kaiserslautern, Germany.
He is currently a Research Assistant at the Fraunhofer Institute for Experimental Software Engineering (IESE), Kaiserslautern, Germany. He is in charge of an empirical software engineering laboratory and is assigned to research and industrial projects in the competence center "Virtual Office of the Future." His research focuses on product line engineering, agility and flexibility, service-oriented computing, and open-source software. He has been involved in open-source software projects with the University of Kaiserslautern since 2002. Before joining Fraunhofer IESE in 2005, he worked in the Software Engineering Research Group, University of Kaiserslautern.
Abstract— The current generation of corporate software tools and applications was not designed to support the unique needs of industrial field service work. Business software applications such as project management and time keeping, for example, are typically designed for traditional desktop computing office environments. As such, they assume low user mobility, high network availability, and WIMP (Windows, Icons, Menus, and Pointer) user interfaces. They are also agnostic to physical environment context and are loosely integrated with one another, often requiring users to maintain duplicated information records. As a result, field service personnel such as engineers, superintendents, and craftsmen end up spending a significant amount of their work time dealing with the consequences of these inefficiencies. In this paper, we describe a distributed architecture for mobile, contextual, and integrated fieldwork software applications (or MCI) designed for mobile and wearable computing platforms. This software architecture defines a contextual and mobility-aware client-side API, a flexible integration middleware, and instrumented backend services. We show how MCI enables the construction of portable, mobile, context-aware, and integrated software applications, discussing its use in the implementation of SmartOutage, a mobile app used for automating common tasks in field engineering work.
Keywords: Distributed Software Architecture, Mobile & Contextual Computing, Application Integration, Field Engineering Automation.
I. INTRODUCTION
Field Engineers (or FEs) are personnel responsible for leading the installation and maintenance of a broad range of industrial equipment such as oil rigs, power generation turbines, drilling equipment, or airplane engines. As such, FEs' work has specific requirements that differentiate it from traditional office work. FEs' work is largely physical, often performed in inhospitable industrial settings. It is also highly manual and mobile in nature, which requires FEs to carry extra pieces of equipment such as toolboxes, spare parts, manuals, and schematics to the equipment assets being serviced. Finally, FEs' activity is highly collaborative. FEs are responsible for working with superintendents and craftsmen (or millwrights) in their repair and maintenance activities, and for interacting with company and site managers, gathering requirements, ensuring the proper execution of planned outages, and providing periodic status updates and reports.

Different software applications are an integral part of FEs' work routine. These include office suites for writing reports and producing spreadsheets, Web-based search portals for obtaining asset (or equipment) schematics and documents, time keeping and task management applications used to track their work progress, as well as corporate communication tools, including instant messaging, calendar, and e-mail. In spite of their importance for FEs' daily activities, these software applications are typically not optimized for the type of environment and the nature of FEs' work: they are generally loosely integrated with one another, are designed for desktop environments, and are generally agnostic to the task and physical context. Finally, these traditional applications are not originally designed for mobility: due to environmental conditions such as noise, poor network connectivity, and lack of power sources, field automation computing systems are usually restricted to specific areas such as field offices and trailers.

These offices are typically set up far from the FEs' main workplace. As a result, FEs need to make multiple trips between the workplace and the field office during a typical workday, whenever they need to search for documents, obtain schematics, communicate with managers and experts, produce and print reports, etc. The time spent in the field office, and not on deck supervising and instructing craftsmen, quickly adds up, leading to potential delays and increased equipment downtime costs.
Recent developments in mobile and wearable computing technology have enabled novel applications that leverage the potential of the Internet of Things (IoT) [1] and the Industrial Internet [2]. Devices such as smart phones, tablets, wearable computers, and sensors, together with new networking standards including ad-hoc and near field communication protocols [3], have been combined in support of mobile industrial applications.
In this paper, we discuss the MCI software architecture designed to support the development of mobile, contextual, and integrated software applications for field engineering. In particular, we have applied MCI in the design and implementation of the SmartOutage application, whose goal is to better integrate existing corporate tools, already part of FEs' daily work, into a single context-aware mobile experience.
In the SmartOutage project, the MCI architecture was utilized to support disconnected operation of client mobile applications and to integrate data from different corporate applications. It also utilizes logical (task-based) and physical
(sensor-based) contextual information to pre-fetch data from corporate servers and to provide the right information to FEs during their typical workday. The software architecture is based on the concept of cooperating client and server-side software agents that work together to pre-fetch and synchronize data between client and corporate services.
The benefits of this approach are: a simplified API for client application development with automatic handling of data disconnection and synchronization, improved integration of corporate information systems, and the ability to leverage physical and logical context to provide applications that deliver the right information to the right user at the right time.
II. BACKGROUND
Related work, in both industry and academia, addresses the field engineering requirements of mobility, disconnected operation, application integration, and contextual computing. This section discusses some of those approaches and their actual and potential use in field engineering.
A. Mobility and connectivity in field engineering
The lack of mobility of traditional computational systems and their need for connectivity have generally been addressed in field engineering by the use of rugged laptop computers and the setup of field offices.
The use of rugged laptops with WAN/LAN network adapters allows FEs to interact with corporate systems using cellular or WiFi networks. Those laptops, however, are relatively heavy and large to carry along, and the desktop-class software applications they run have complex UIs and generally require reliable WAN connections, which are not available in many worksites.
Hence, a popular solution is the setup of field offices. Field offices typically come in the form of acclimatized and network-enabled trailers where a fully equipped workspace, with printers, connectivity, and desktop computers, is provided. Those interim office spaces, however, are expensive to maintain and are typically located relatively far from the actual pieces of equipment being serviced.
This physical distance requires FEs to go back and forth to the field office to interact with computational systems and handle different ad-hoc matters during a typical workday: for example, to search for and print the schematics and maintenance procedure for a part that was found broken, to order new parts for a piece of equipment, to contact experts with maintenance questions, to generate handover reports, and to report to distant supervisors. In one of GE's worksites, for example, the field office was located a 15-minute walk away from the equipment being serviced.
Constant FE trips to the field office translate into less time on deck supervising craftsmen, which may lead to quality issues, task delays, and ultimately higher costs due to longer equipment downtimes.
B. Application integration
Another source of inefficiencies in field engineering is the lack of application integration. FEs have to interact with different systems, including timekeeping, project management, e-mail, document management, and office automation tools. These are loosely integrated with one another, often requiring repeated data inputs, which may also lead to information errors and duplicated work.
The need for application integration has long been recognized by the software industry and has been addressed by approaches on both the client and the server side. On the server side, different industrial initiatives exist, such as the Oracle Application Integration Architecture (AIA) (http://www.oracle.com/us/products/applications/communications/application-integration), the work of [4], and OPC-UA (www.opcfoundation.org).
Oracle AIA provides a common middleware for workflow-driven integration of Oracle products, supporting common data exchange formats and complex integration procedures involving multiple systems. Approaches such as AIA, however, are vertical and limited to vendor-specific systems.
OPC-UA is an industry-driven approach, based on open Web standards, for application integration. It defines a set of standard data exchange formats and utilizes Service-Oriented Architectures based on Web protocols to define exchange APIs. By adopting open protocols, it can be made compatible with a broader set of applications. Another example is the work of [4], where a plug-in-oriented architecture is proposed to integrate, or exchange data and control signals between, server processes.
Most approaches tend to be application-agnostic and focus on the use of standardized protocols for the horizontal integration of applications. They do not address the problem of semantic integration, for example, the combination of information from different related sources, which needs to be consistently combined and abstracted. As a consequence, this functionality is typically handled by the business logic of the application.
On the client side, different application integration approaches such as Web 2.0 mash-ups have been used [5]. Through the use of JavaScript libraries and Web standards, they support the combination of information from different sources in the development of Web applications. This approach, however, can lead to overly complex applications and lacks appropriate support for disconnected operation and contextual application development.
C. Disconnected operation
A common solution to the problem of missing or intermittent connectivity in field engineering has been the adoption of applications that support disconnected operation.
Both research and industry work has been done in the area of disconnected operation for mobile software applications. The work on CODA [6] and the work of [7] are examples of approaches for file-based caching and offline-mode operation. The work of [8] formalizes those approaches into a general language. Automatic algorithms for pre-fetching data in support of disconnected operation have also been proposed [9]. Those seminal works are based on application-agnostic operating-system abstractions such as files and protocol blocks. As such, it is not possible, for example,
to synchronize individual data records or to replay transactional operations on databases.
The works of [10] and [11] employ agent-based protocols in support of disconnected operation in the telecommunication industry. In this computational model, a client agent cooperates with a server agent to ensure the correct delivery of network streams and data packets over highly unreliable networks. Similar to the network file system approaches, these approaches operate over application-agnostic network abstractions such as packets and streams, which limits the optimizations that can be accomplished in those models.
Finally, a popular way to support application-level disconnected operation has been the use of mobile software agents, i.e., the transfer of data and processes from the server side to the client as mobile code. The most popular examples have been the use of Java applets in web browsers and the use of JavaScript embedded in Web pages [12]. Another approach to disconnected operation of thin-client applications has been proposed by [13]. By downloading and executing part of the server code on the client, applications preserve their original functions during disconnected operation. These approaches, however, are not always fit for deployment on mobile devices, which have limited power, memory, and processing capabilities. The transmission of agents through unreliable networks may not always be possible, and different security measures must be taken in order to safeguard client applications from malicious software.
D. Contextual mobile applications
Field engineering is a highly regulated domain, where people must abide by strict safety rules and must perform planned tasks according to schedule and well-known procedures. The highly structured nature of this work opens opportunities for context-driven applications that leverage knowledge about users' tasks and information needs to anticipate what users will need.
The use of contextual information in mobile computing has gained increased attention in the literature [14]. In particular, contextual information such as knowledge about tasks, user profile and physical location has been used to optimize the amount of data mobile devices should cache during disconnected operation, and to customize the user experience and information to the task at hand.
For example, the work of [15] has studied the use of context information in support of offline operation on mobile devices. User profile information and knowledge of the user's calendar appointments are used to cache data on the user's PDA. A rule-based contextual engine for performing application data reconciliation when the device is reconnected to the network is also discussed. In this work, however, the user context is mostly defined by logical information, including appointments, business contacts, and assigned tasks.
The work of [16] describes a contextual engine for mobile devices that accounts for both physical and logical context, allowing the combination of physical location and logical application data for the determination of user context. A context engine is described, based on RDF-defined domain models, that is compatible with offline mode on mobile devices.
Research has also been done on the application of mobile contextual applications to physical activities and procedures. For example, the work of [17] describes the results of a case study where medical records are provided to nurses according to their proximity to patients in hospitals. This work demonstrates the potential of contextual computing for procedure-oriented environments.
III. MCI APPROACH
The MCI architecture advances on existing research and industrial approaches by combining client- and server-side agents that handle disconnected operation, reconciliation, and application integration with logical and physical context, determined by integrating data from corporate systems (such as task and project models) and from the user environment (such as iBeacons, RFIDs, bar codes, and measurements from smart tools). It also supports multi-modal user interaction in the form of mobile and wearable devices, as illustrated in Figure 1.
Applications developed according to the MCI architecture execute on mobile phones, tablets, and wearable devices that can be easily carried by Field Engineers. These applications provide ready access to information anywhere in the field. In a typical scenario, users start their day by logging into the mobile app; the app then determines the user's current task assignment based on the corporate task and project management tool and the current timesheet of the user. Immediately after login, data is pre-fetched for the day: documents and schematics that might be required in the context of the current work assignments are downloaded from the corporate document management system; project and timesheet information is also downloaded and stored in a local cache on the mobile device. The user then leaves for the day's journey in an environment that may or may not have reliable connectivity. The mobile devices sense the environment via near field communication such as Bluetooth Low Energy (BLE), or through auxiliary devices such as wearable cameras and virtual reality glasses. Those can quickly scan
QR and bar codes attached to equipment, or even recognize pieces of equipment by their picture. The application responds to the context by suggesting task information to the user, including complex task steps, and by allowing the user to update the status and progress of current tasks; when the user approaches equipment being serviced, documents about that machine are made available. Field annotations and observations can also be recorded and published to a news feed shared by all FEs working on the same project. Feeds may include pictures and procedure observations, or EHS (Environmental Health and Safety) hazards spotted in the field. At the end of the day, events recorded in the feed are used for the automatic generation of handover reports. Individual timekeeping can also be performed and submitted. All operations are available in both connected and offline modes. Offline data is synchronized opportunistically, once network connectivity is reestablished, using application-specific synchronization strategies for conflict resolution.
Compared to existing approaches, the MCI architecture supports disconnected operation and synchronization at the API level, i.e., it leverages application semantics to resolve conflicts. For example, status updates are combined into a single update to the server, and multiple timesheets submitted during the day are collated and compared with the server-side submission. It also performs pre-fetching of data based on application and physical context. MCI's notion of context combines user profile information as defined by different corporate applications with physical location, proximity to assets (equipment in the field), and existing connectivity. In particular, we rely on environmental beacons and markers, associated with assets, to fetch information related to the user's task with that specific asset.
IV. SOFTWARE ARCHITECTURE
As briefly discussed above, and illustrated in Figure 2, the MCI architecture defines three software layers: the client-side layer, which executes on the mobile and wearable devices; the middleware layer, which executes in cloud services; and the corporate applications layer, which provides adapters to existing corporate applications distributed across different sites of the organization. Layers communicate with one another through RESTful Web Services [18], utilizing existing network technology when a connection is available.
In the architecture of Figure 2, the left-hand-side boxes represent the software components and sub-systems of our approach; whereas the right-hand side clouds and devices represent the physical hardware components where those software sub-systems execute.
A. Mobile devices layer
The mobile devices layer (shown in Figure 2) supports the development of applications for different mobile devices, including tablets, phones, and wearables. Mobile applications are developed using open Web standards such as JavaScript, CSS, and HTML5, and execute within an HTML5 container such as PhoneGap (http://phonegap.com/) or Apache Cordova (http://cordova.apache.org/). The use of Web standards on an OS-independent container provides portability for the application, supporting its migration to different devices. These applications interact with a local representation of the middleware services within the client-side API container. This container executes different agents, one for each middleware service required by the application. Interaction with environmental sensors is also important; bridges to device-specific sensors and capabilities are utilized for this purpose. Existing bridges provide interfaces to local storage and to common sensors such as GPS, cameras, and Bluetooth.
From the application developer's perspective, the interaction with middleware services and device sensors is mediated by local client-side agents. The term agent is employed here to highlight the collaborative aspect of these services. Agents initiate collaborative sessions with their middleware counterparts to synchronize and pre-fetch data and application-specific messages. In other words, agents are smart components that provide mobility transparency by detecting when the mobile device's network connection is on or off. They automatically switch between online and offline operation and perform opportunistic data synchronization using application-specific policies. The communication between client and middleware agents is performed using HTTP RESTful method calls over the available network.
An important aspect of the system is its support for disconnected operation. This support involves three main modes of operation as illustrated in Figure 3.
1) Connected with contextual pre-fetch
When the mobile device is connected to a wireless network and the communication between client and middleware can take place, the client-side API switches to connected mode. During this mode, API calls are forwarded to the corresponding middleware agents, and information flows from clients to servers using the network. Information produced and consumed by the application is also cached in the local client-side database.
In this mode, client agents also pre-fetch application data based on the application context. This pre-fetching can be done either automatically, with the help of contextual information, or manually, explicitly directed by the user. Automatic pre-fetching is performed by the system using application-level and location-based knowledge provided by the context engine. In the SmartOutage application (discussed in Section V), for example, users indicate the tasks they will be performing during the day. Tasks carry information about the assets being serviced, including the serial number of each asset. That information is used to automatically download documents and schematics to the mobile device, and to pre-populate the local project management database and timesheet.
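A minimal sketch of this task-driven pre-fetch flow is given below. All service and method names (TaskService, DocumentAgent, and so on) are hypothetical; the sketch only illustrates how task assignments drive the warming of the local cache.

```typescript
// Hypothetical sketch of context-driven pre-fetching on login.
// All service and method names are assumptions for illustration.

interface Task { id: string; assetSerial: string; }

interface TaskService { tasksForToday(userId: string): Promise<Task[]>; }
interface DocumentAgent { prefetchForAsset(serial: string): Promise<void>; }
interface ProjectAgent { cacheProjectData(taskId: string): Promise<void>; }

async function prefetchForDay(
  userId: string,
  tasks: TaskService,
  docs: DocumentAgent,
  projects: ProjectAgent
): Promise<void> {
  // Task assignments provide the logical context for pre-fetching.
  for (const task of await tasks.tasksForToday(userId)) {
    // Each task names the asset being serviced; its serial number
    // drives the download of documents and schematics.
    await docs.prefetchForAsset(task.assetSerial);
    await projects.cacheProjectData(task.id);
  }
}
```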
2) Disconnected operation
When the mobile device's connectivity becomes unavailable or unreliable, and the middleware agents cannot be reached through the normal network connection, the client-side agents automatically switch to disconnected operation. During disconnected operation, all read and write operations on service data are served from the local cache. In particular, write calls are timestamped and moved to a local queue, where they stay until the system gets reconnected, and the local data state is updated to reflect the change. This work is performed by the software agents associated with each middleware service.
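The sketch below illustrates one possible shape for such a timestamped write queue; the names are assumptions, not the actual MCI implementation.

```typescript
// Sketch of a local write queue for disconnected operation.
// The queue shape and names are illustrative assumptions.

interface QueuedWrite {
  method: string;    // client API method that was invoked offline
  payload: unknown;  // data to deliver once reconnected
  timestamp: number; // used later for conflict detection
}

class OfflineQueue {
  private pending: QueuedWrite[] = [];

  // Timestamp each write and hold it until the device reconnects.
  enqueue(method: string, payload: unknown): void {
    this.pending.push({ method, payload, timestamp: Date.now() });
  }

  // Hand over all pending writes, in order, when synchronization starts.
  drain(): QueuedWrite[] {
    const writes = this.pending;
    this.pending = [];
    return writes;
  }
}
```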
3) Opportunistic Synchronization
When the system switches from disconnected to connected operation, synchronization takes place. During this process, the inconsistencies between client and server data are detected, and appropriate reads/writes are issued in order to deliver unsent items or to update the local cache. Data obtained from the middleware agents may have precedence over data stored locally, and appropriate merging algorithms are implemented on an application-specific basis.
In order to save bandwidth and make the best use of high-latency, low-bandwidth networks, synchronization is performed in an opportunistic fashion: data is synchronized at each client API method call instead of reading/writing all inconsistent information to the middleware at once when the network connection becomes available. It works as follows: the first time the application invokes a method in the client API after a period of disconnection, all locally stored, unsubmitted information is synchronized. The synchronization process compares timestamps and data IDs with those of the middleware services, sending data whose local timestamp is higher than that of the last known good call. Note that this process can be overridden according to each application's semantics.
In order to prevent data from never-invoked methods from remaining on the client side indefinitely, a push timeout is defined, after which all queued data is flushed from the client to the corresponding server agent.
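Continuing the queue sketch above, the following fragment illustrates the timestamp comparison and the push timeout; again, all names (MiddlewareAgent, lastKnownGoodCall) are assumptions for illustration.

```typescript
// Sketch of opportunistic synchronization. Names are assumptions.

interface PendingWrite { method: string; payload: unknown; timestamp: number; }

interface MiddlewareAgent {
  push(write: PendingWrite): Promise<void>;
  lastKnownGoodCall(): Promise<number>; // timestamp of last successful sync
}

const PUSH_TIMEOUT_MS = 60_000; // flush queued data after this delay

async function synchronize(
  pending: PendingWrite[],
  agent: MiddlewareAgent
): Promise<void> {
  const lastGood = await agent.lastKnownGoodCall();
  for (const write of pending) {
    // Send only data newer than the last known good call; applications
    // may override this policy with their own merge logic.
    if (write.timestamp > lastGood) {
      await agent.push(write);
    }
  }
}

// A periodic timer prevents data of never-invoked methods from
// lingering on the client indefinitely (the push timeout).
function schedulePushTimeout(
  getPending: () => PendingWrite[],
  agent: MiddlewareAgent
): void {
  setInterval(() => { void synchronize(getPending(), agent); }, PUSH_TIMEOUT_MS);
}
```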
B. Contextual computing
Another important component of the client-side API is its contextual engine. The contextual engine continuously gathers information from the device surroundings and from
corporate information systems. Surrounding information includes beacon and tag IDs (such as QR codes) attached to the environment and machinery. Data from corporate systems includes current tasks, job assignments, assets, and documents. Based on that information, the context model, defined by a set of content-based subscriptions, is updated continuously, producing context-change trigger events that are consumed by the client-side agents.
Agents subscribe to specific types of events produced by the context server, e.g., proximity to assets, indoor location coordinates, or task progress, and receive notifications when those values change. For example, the application can respond to the proximity of an asset to be serviced by producing an audible or visual notification, or it can offer to open a schematic associated with that asset; tasks can be automatically marked as in progress, and entries can be automatically posted to the project activity feed timeline.
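The fragment below sketches what such content-based subscriptions might look like; the event shapes and the ContextEngine interface are assumptions, not the actual API.

```typescript
// Sketch of content-based subscriptions on the context engine.
// Event shapes and interface names are illustrative assumptions.

type ContextEvent =
  | { kind: "asset-proximity"; assetSerial: string; distanceMeters: number }
  | { kind: "task-progress"; taskId: string; percentComplete: number };

interface ContextEngine {
  subscribe(kind: ContextEvent["kind"], handler: (e: ContextEvent) => void): void;
}

// An agent reacts to proximity events, e.g., by offering to open the
// schematic associated with the nearby asset.
function registerProximityHandler(
  engine: ContextEngine,
  openSchematic: (assetSerial: string) => void
): void {
  engine.subscribe("asset-proximity", (e) => {
    if (e.kind === "asset-proximity" && e.distanceMeters < 5) {
      openSchematic(e.assetSerial);
    }
  });
}
```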
C. Middleware layer
The middleware layer hosts agents that support the disconnected/connected operation of the client as well as the integration, summarization, and filtering of information from different sources, performing a large part of the computation required by the application and keeping data consistent across heterogeneous applications.
Each agent, therefore, provides a service abstraction layer representing a general back-end service, e.g., a document service, a timekeeping service, or project management. This allows the client app to be ported to different backend applications without change. Middleware agents can also provide novel features on top of existing agents, e.g., news feeds and reporting services.
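As a sketch of this service abstraction, the fragment below defines an abstract timekeeping interface and one vendor-specific adapter behind it. The interface, endpoint paths, and names are assumptions for illustration; any concrete backend would be hidden behind such an adapter.

```typescript
// Sketch of a middleware service abstraction. Interface, endpoint
// paths, and names are assumptions for illustration.

interface TimesheetEntry { taskId: string; hours: number; date: string; }

// Abstract service seen by client applications.
interface TimekeepingService {
  submit(userId: string, entries: TimesheetEntry[]): Promise<void>;
  entriesFor(userId: string, date: string): Promise<TimesheetEntry[]>;
}

// A concrete agent adapts the abstract interface to one vendor's API;
// swapping vendors means writing a new adapter, not changing clients.
class VendorTimekeepingAgent implements TimekeepingService {
  constructor(private baseUrl: string) {}

  async submit(userId: string, entries: TimesheetEntry[]): Promise<void> {
    await fetch(`${this.baseUrl}/timesheets/${userId}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(entries),
    });
  }

  async entriesFor(userId: string, date: string): Promise<TimesheetEntry[]> {
    const res = await fetch(`${this.baseUrl}/timesheets/${userId}?date=${date}`);
    return (await res.json()) as TimesheetEntry[];
  }
}
```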
As middleware agents are assumed to have better network connectivity to corporate services, as well as higher bandwidth than their client counterparts, they perform most of the computationally expensive processing required by the client applications, including the validation, submission, reading, summarization, filtering, and de-duplication of data from different sources. Middleware agents cooperate with client agents during synchronization, comparing timestamps and data IDs to find differences between the data stored on the server and on the clients.
The middleware layer is dynamic and modular. Agents are implemented as bundles in an OSGi container such as Apache Karaf (karaf.apache.org), which provides automatic activation/deactivation of agents, hot deployment of new or existing agents, and dependency management. Agents can also be replicated on different servers. Agent updates are supported by OSGi hot deployment.
Security is another requirement addressed by the middleware services. Enterprise services must be protected from external attacks. The use of middleware agents that mediate access to enterprise information provides an additional protection layer where data can be checked, users can be authenticated, and access can be authorized. The agents in our architecture require prior authentication of clients. In a corporate setting, the authentication mechanism is provided by a single-sign-on entity, whereas authorization can be implemented at the agent level based on those credentials. The middleware container provides basic authorization and authentication mechanisms for the development of new middleware agents, and the communication between agents is performed via HTTPS.
D. Corporate applications layer
The corporate application layer provides points of access and adapters to existing applications and services. This layer provides raw data by means of read/write methods to corporate services and databases, exposed through RESTful services.
In the current implementation of the architecture, for example, we utilize RESTful interfaces to corporate databases behind key applications such as timekeeping, project management, document management and directory services.
V. SMARTOUTAGE APPLICATION
We have applied the described mobile contextual integrated architecture in the development of an integrated mobile application called SmartOutage, with some screenshots shown in Figure 4. SmartOutage provides a single point of access to timekeeping, project management, document management, reporting, and other corporate systems that are an integral part of FEs' work.
An outage is a planned set of maintenance tasks performed on critical equipment, such as power generation turbines, airplane engines, or oil rigs. Even though planned outages are necessary for the long-term operation of heavy machinery, they are costly and time-critical, due to the need for equipment shutdown. Therefore, optimizing the time spent during those operations and removing workflow inefficiencies can result in millions of dollars in savings a year. SmartOutage helps outage workers achieve this goal by bringing information closer to the end user, the FE, by supporting field annotations and documentation, and by streamlining tasks such as report generation and progress reporting.
Before the introduction of the SmartOutage application in mid-2014, FEs had to input their timecards, update their project statuses, print documents and schematics, and generate handover reports manually, using field notes that were later translated into printed reports, and using different corporate applications, one for each task. Work was often duplicated across corporate systems from different vendors, e.g., Oracle T&L for time keeping, Primavera for project management, office automation tools for reporting, and Web-based document management systems for asset documentation. There were many document repositories, one for each type of equipment, parts, and historical data. Heavy stacks of paper with documents and schematics were often produced and carried along each day. Moreover, in order to produce this material, FEs were often required to move back and forth between the asset deck and the field office.
SmartOutage was developed according to the MCI architecture. As illustrated in Figure 5, the SmartOutage client application provides a simplified and integrated interface to corporate services such as time keeping, project management, and documents. An activity feed is also provided allowing users to post occurrences and observations during the day.
The middleware layer agents implement an abstract API that combines, filters, and caches data from different organizational services. The time keeping agent communicates with Oracle T&L, whereas the project management agent interacts with Primavera Project Management. The document management agent is responsible for filtering, indexing, and caching documents from different proprietary GE document repositories, creating a unified point of access to this type of information. Finally, the activity feed agent stores user-provided message feeds as well as application-generated feeds (such as project progress notifications), and uses that information for the generation of handover reports that can be printed or e-mailed. Handover reports are used by FEs to communicate health and safety occurrences, activity, and project status to other FEs on different work shifts.
Before entering offline mode, the SmartOutage client app allows FEs to download required documents and cache project data. Upon login, based on the user's location and the task management system information, the system knows which projects and assets the user must service, and proactively downloads the required data and documents to the mobile device. SmartOutage also allows FEs to update their timesheets and task status in the field, and downloads all required project and task management information to support that task.
All these operations are accomplished utilizing cached data, without the need for an active network connection. At the end of the day, using a reliable wireless network connection, the system automatically synchronizes its data with the SmartOutage middleware agents, which update corporate systems accordingly.
A common issue faced by FEs was the generation of daily outage reports, which summarize the activities performed during the day. This report includes maintenance tasks as well as any health and safety issues found during the day. Reporting could take hours and required copying hand notes taken during the day into standardized forms. With the app, FEs can now report issues as they see them in the field. They can take pictures and post observations to activity feeds. At the end of the day, a report is automatically generated using that information.
The benefits of SmartOutage are many. As the software gets adopted by FEs in the field, we expect to eliminate 17% of the FEs' non-value-added time, e.g., commuting between deck and field office, transcribing notes into reports, and searching for documents in different repositories. We also expect quality gains in outage work by increasing FEs' deck time by 40%. This time will be better spent supervising craftsmen's work, solving problems on the deck, and transferring knowledge to the new workforce. Overall, we expect a 10% decrease in outage time, which translates into lower downtime and maintenance costs.
VI. CONCLUSIONS AND FUTURE WORK
In this paper, we described the MCI software architecture for mobile contextual integrated field engineering applications. This architecture integrates contextual information from the environment and other devices with information from different corporate applications in the development of mobile applications that are more portable, contextual, and integrated than their desktop counterparts.
The benefits of this approach are a simplified API for client application development, automatic handling of data disconnection and synchronization, improved integration of corporate information systems, and support for both logical and physical contextual information. We discussed the use of the architecture in the development of SmartOutage, a mobile application utilized by Field Engineers, and its benefits to the business.
We are currently extending the contextual client container to better leverage information generated by other mobile devices in the field and to respond to a broader range of events generated by beacons and markers attached to assets. To this end, we utilize additional devices such as augmented reality glasses, smart tools, and smart watches paired with existing mobile phones. We are also studying the use of mesh networks to improve the communication between devices in the field.
VII. ACKNOWLEDGEMENTS
We would like to thank all the members of the SmartOutage project for their hard work, in particular: Prasad, Amit, Venki, Chandra, Adeeth, Huawei, Casey, Soumia, Lana, Supriya and Uday.
|
How validation can help in testing business processes orchestrating web services
Damian Grela, Krzysztof Sapiecha, Joanna Strug
Department of Computer Science, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
Abstract – Validation and testing are important in developing correct and fault-free SOA-based systems. BPEL is a high-level language that makes it possible to implement business processes as an orchestration of web services. In general, testing requires many more test scenarios than validation. However, in the case of BPEL processes, which have a very simple and well-structured implementation, test scenarios limited to validation may also be efficient. The paper describes an experiment that aims at answering the question of whether validation test scenarios are also adequate for testing an implementation of BPEL processes. The experiment employs a Software Fault Injector for BPEL Processes that is able to inject faults while the test scenarios are running. The results of the experiment seem very promising. Hence, it seems that validation tests might give strong support for testing.
1 Introduction
Recently, SOA (Service Oriented Architecture) [1] has become the most promising architecture for IT systems. It offers a way of composing systems from loosely coupled and interoperable services. The services are independent business functions made accessible over a network by remote suppliers. A developer of a SOA-based system should only select the most appropriate services and coordinate them into business processes that cover the specification requirements for the system.
BPEL (Business Process Execution Language) [2] is a high-level language that makes it possible to implement business processes as an orchestration of web services. The orchestration consists in subsequently invoking the web services by a special element of the process, called its coordinator. This leads to a very simple and structured SOA where only the coordinator and the communication links between the coordinator and the services need to be tested. The correctness of the services may be assumed, as they are provided as ready-to-use components and should be tested by their developers before being shared.
Both validation and testing may be performed with the help of test scenarios. In [3, 4] a method for generating test scenarios for the validation of a BPEL process was given. Test scenarios obtained by means of the method cover all functional requirements for the process and provide high validation accuracy [4]. This paper presents a case study that aims at answering the question of to what extent such test scenarios are adequate for testing an implementation of the process. To this end an experiment employing the Software Fault Injector for BPEL Processes (SFIBP) was carried out and the fault coverage for the test scenarios was calculated.
The paper is organised as follows. Section 2 briefly describes related work. In Section 3 the problem is formulated. Section 4 characterises faults in SOA-based systems. Section 5 contains a description of the case study. The paper ends with conclusions.
2 Related work
The problem of testing SOA-based systems is not new, but most researchers have focused on test generation [5, 6, 7, 8, 9, 10, 11, 12]. Their works fall loosely into two categories: developing efficient algorithms for the selection of adequate tests [6, 7, 8, 9] and automating the selection process [10, 11, 12]. Y. Yuan and Y. Yan [6, 7] proposed graph-based approaches to handle concurrency activities of BPEL processes, in addition to basic and structured activities. Their approach was extended, combined with other techniques and implemented by several other researchers [8, 9]. M. Palomo-Duarte, A. Garcia-Dominguez, and I. Medina-Bulo based their approaches on traditional white-box testing methods [10, 11, 12] and used formal methods and hybrid approaches along with ActiveBPEL [13] and the BPELUnit [14] test library for generating tests. However, none of these works studies the adequacy of the generated tests for both validation and testing of BPEL processes.
The adequacy of tests can be measured with regard to some predefined metrics, or by injecting faults and observing whether they are detected or not [15]. Fault injection is a popular technique that has already been applied in the context of SOA-based systems [16, 17, 18, 19], and it has often been used for test generation [15]. PUPPET (Pick UP Performance Evaluation Test-bed) [16] is a tool for the automatic generation of test-beds to empirically evaluate the QoS [17] features of a web service under development. GENESIS [18] generates executable web services from a description provided by the user and provides an environment in which the services can be tested prior to deployment in a production system. Another fault injection tool, WSInject [19], is a script-driven fault injector that is able to inject interface and communication faults; it works at the SOAP level and intercepts SOAP messages.
All of these approaches concern web services or the communication between a BPEL process and web services (i.e. a fault is injected when a web service is invoked). In the case of business processes, various other types of faults (e.g. replacement of input values) may appear. Therefore, SFIBP should be easily configurable to inject a rich variety of faults appearing in this very specific operational environment.
3 Problem statement
Validation aims to determine whether or not a software system satisfies its requirements specification [20]. A requirements specification defines, in a formal way, what the system is expected to do. Test scenarios derived from such a specification may be successfully used for validation. In [3] an effective method for generating test scenarios for validating BPEL processes against specification requirements defined in SCR [21] was given. However, a specification should not contain anything that is not of interest to a user. Thus, test scenarios derived from the specification can check all specified requirements, but not necessarily implementation details that are introduced in later stages of development of the system. Therefore, the system should also be tested to detect implementation errors. As the generation of tests is usually time consuming, it is of high importance to find out to what extent the validation test scenarios are useful for testing. To this end, an experiment might be performed and the implementation error coverage for the test scenarios calculated.
In general, testing requires many more test scenarios than validation. However, in the case of BPEL processes, which have a very simple and well-structured implementation, test scenarios limited to validation may also be efficient. To measure the coverage of implementation errors by the validation test set, the Software Fault Injector [22] for BPEL Processes will be applied. Implementation errors of a BPEL process will be simulated by injecting faults while the tests are running.
4 Faults in the SOA-based systems
In SOA-based systems, faults may arise for two reasons:
1. incorrect interaction between web-services, and
2. incorrect internal logic of the system components (web-services and/or coordinator).
Interaction faults affect communication between different web services or between the coordinator and the web services. Internal logic errors are introduced by human developers or production facilities when components of the system are implemented. Eight types of interaction faults and four types of internal logic errors have been identified [23]. Three of them concern systems orchestrating web services. These are the following:
1. Misbehaving execution flow. The fault occurs when a programmer invokes an improper web-service\(^1\) (i.e. different from the specified one). Fig. 1 gives an example of an improper web-service invocation error (a) and a fault-free version of the code (b).
Fig. 1. Improper (a) and correct (b) web service invocation.
2. **Incorrect response.** The fault is caused by incorrect processing, within a coordinator, of a correct response of a web-service (other causes related to incorrect internal logic of a web-service, as defined in [23], are not considered due to the assumption of correctness of web-services). Incorrect processing means that:
- a response from a wrong output port is used (Fig. 2),
- a response is assigned to a wrong variable (Fig. 3), or
- a response is not assigned at all (Fig. 4).
\(^1\)The invoked web-service should exist and the invocation should be correct with regard to the specification of the web-service (otherwise such error will be reported by the compiler).
3. Parameter incompatibility. It occurs when a web-service receives, as input data, incorrect arguments or arguments of incorrect types. The following four errors introduced into the implementation of a coordinator cause such a fault:
- a different operation of a web-service is invoked (Fig. 5). The operation invoked must still belong to the web-service (otherwise such an error would be reported by the compiler),
- a wrong input port is used (Fig. 6). The port used must still be a valid port of the web-service (otherwise such an error would be reported by the compiler),
- a wrong output port is used (Fig. 6), or
- a wrong value is assigned to an input port (Fig. 7).
Fig. 5. Different (a) and proper (b) operations of a web-service are invoked.
Fig. 6. Wrong (a) and correct (b) input and output ports are used.
Fig. 7. Wrong (a) and correct (b) values are assigned to an input port.
The effects of the faults are visible because they make the external behaviour of the coordinator differ from the expected one. The cause-effect table is shown in Fig. 8.
Fig. 8. Implementation errors, interaction and development faults and their effects.
All other faults defined in [23] are not relevant to this work. These faults are either related to the physical layer or caused by the providers of web services (incorrectness of web services or interactions between web services).
5 Case study
The goal of the case study is to evaluate the adequacy of validation test scenarios for testing BPEL processes. The test scenarios are evaluated based on their fault coverage calculated with respect to the faults generated by the SFIBP. The SFIBP generates the following three types of faults:
1. replacing web-service output parameters (OP),
2. replacing values of web-service input parameters (IP),
3. replacing requested web-service with another one (WS).
The faults generated by SFIBP give the same observable effects as those described in Section 4, but their injection does not require the implementation of a coordinator to be changed.
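The following Python sketch illustrates how an execution-based injector of this kind can wrap a service invocation; the stand-in service functions, replacement values and probabilities are illustrative assumptions, not SFIBP's actual code.

```python
import random

# Illustrative stand-ins for two orchestrated web services; a real injector
# wraps actual remote invocations, these are plain local functions.
SERVICES = {
    "CheckA": lambda x: "Yes" if x > 0 else "No",
    "CheckB": lambda x: "OK" if x > 10 else "No",
}

def faulty_invoke(service_id, arg, p=0.33, rng=random):
    """Invoke service_id(arg), randomly applying the three fault types:
    WS (wrong service), IP (replaced input), OP (replaced output)."""
    injected = []
    if rng.random() < p:  # WS: replace the requested web-service with another
        service_id = rng.choice([s for s in SERVICES if s != service_id])
        injected.append("WS")
    if rng.random() < p:  # IP: replace the value of an input parameter
        arg = rng.choice([-1, 0, 10**6])
        injected.append("IP")
    result = SERVICES[service_id](arg)
    if rng.random() < p:  # OP: replace the output parameter
        result = "No" if result != "No" else "Yes"
        injected.append("OP")
    return result, injected
```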
The fault coverage (FC) for a set of test scenarios is expressed as the percentage of detected faults among all injected faults:

\[ FC = \frac{F_D}{F_I} \cdot 100\% \]

where \(F_D\) is the number of detected faults and \(F_I\) is the total number of injected faults.
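In code this is a one-liner; the example figures below are the IP-fault numbers reported for TicketRS in Table 3.

```python
def fault_coverage(detected: int, injected: int) -> float:
    """FC = F_D / F_I * 100%."""
    return 100.0 * detected / injected

print(round(fault_coverage(295, 304)))  # IP faults of TicketRS -> 97
```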
As the faults are artificially generated and injected, their total number is known. However, it is not possible to determine the number and the types of all errors that might be the real source of the faults. Nevertheless, this is not a shortcoming of the approach, because only the coverage has considerable meaning.
The subsequent subsections describe briefly SFIBP that was used in the experiment to generate and inject faults (Section 5.1), an example system and test scenarios generated for the system (Section 5.2), and the experiment and its results (Section 5.3).
### 5.1 Software Fault Injector for the BPEL Processes
SFIBP is an execution-based injector [15], able to inject faults into BPEL processes while test scenarios are running.
The SFIBP has been implemented as a special local service that is invoked instead of the proper web service. Such an approach helps reduce the costs of the experiment, as the faults are injected without changing the implementation of the coordinator. A configuration file produced by the SFIBP defines three parameters of the proper web-services:
- identifiers of all methods provided by the web-services (ID),
- names of the methods,
- the number and names of parameters of the methods.
It also includes predefined values of input and output parameters, values of alternative web-services IDs that are used to generate faults and the probability that a fault will be injected. Information about the injected faults is stored in a log file.
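The paper lists the contents of this file but not its concrete format; the structure below is therefore only a plausible sketch (written as a Python dictionary, using the web-services of the Football Reservation System introduced next). All field names are our own invention.

```python
# Hypothetical injector configuration; field names are illustrative.
SFIBP_CONFIG = {
    "fault_probability": 0.33,
    "services": {
        "TicketRS": {
            "methods": {"101": {"name": "checkTicket", "params": ["Date"]}},
            "replacement_outputs": ["Yes", "No"],
            "replacement_inputs": ["1900-01-01"],
            "alternative_services": ["HotelBS", "PlaneTR", "TrainTR"],
        },
        # ... analogous entries for the remaining web-services
    },
}
```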
### 5.2 Football Reservation System
Football Reservation System (FRS) is a simple system allowing its users to book tickets for football games, hotels to stay in during the games, and plane or train tickets to travel to the games.
The system was implemented as a BPEL process orchestrating five web-services. Each of the services is accessible on a different server and the whole process of reservation is coordinated through a central coordinator (Fig. 9).
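The concrete BPEL flow of the coordinator is not reproduced in the paper; the sketch below shows one plausible sequencing of the services, with plain Python callables standing in for remote invocations.

```python
def frs_coordinator(date, transport, ticket_rs, hotel_bs, train_tr, plane_tr):
    """Illustrative coordinator logic: ticket, then hotel, then transport.
    Each *_rs/_bs/_tr argument is a callable standing in for a web service."""
    if ticket_rs(date) != "Yes":
        return "no ticket available"
    if hotel_bs(date) != "OK":
        return "no hotel room available"
    answer = train_tr(date) if transport == "train" else plane_tr(date)
    if answer not in ("Success", "True"):
        return "no transport available"
    return "reservation completed"
```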
Short descriptions of the web-services and their input and output parameters are given in Table 1. The types of the parameters are placed in brackets next to the parameter names.
A set of 4 test scenarios was generated for the system, each having between two and five input/output events; the total number of events is 16. The test scenarios were generated by means of the checking path method presented in [3]. Their usage provided high validation accuracy for the system.
Fig. 9. Service orchestration for a Football Reservation process.
### Table 1

| web-service ID | description | parameters |
|---|---|---|
| Client | retrieves data from the client and sends information about the order | input: Date [String] |
| TicketRS | checks the availability of a football ticket at the given date | input: Date [String] |
| HotelBS | checks the availability of a hotel room at the given date | input: Date [String] |
| TrainTR | checks the availability of a train at the given date | input: Date [String] |
| PlaneTR | checks the availability of a plane at the given date | input: Date [String] |
5.3 The experiment
The experiment consisted in:
1. implementing a fault free BPEL process for FRS and generating validation test scenarios,
2. configuring the SFIBP,
3. starting the SFIBP and running the BPEL process with the test scenarios,
4. comparing the outputs generated by the BPEL process with the expected ones given by test scenarios,
5. saving the results,
6. calculating the fault coverage.
Steps 3, 4 and 5 were repeated 1000 times. At each iteration randomly generated faults were injected into the BPEL process.
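A compressed sketch of this loop, assuming an injector-wrapped process that reports which faults it introduced; a fault is counted as detected when the actual output of a run differs from the scenario's expected output.

```python
import random

def run_experiment(process, test_scenarios, iterations=1000):
    """Count injected and detected faults over repeated scenario runs.
    `process(inputs, rng)` returns (actual_output, injected_fault_list)."""
    injected = detected = 0
    for _ in range(iterations):
        for inputs, expected in test_scenarios:
            actual, faults = process(inputs, random)
            injected += len(faults)
            if faults and actual != expected:
                detected += len(faults)  # the scenario exposed these faults
    return injected, detected
```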
Table 2 shows the settings for all web-services of the FRS. The first row of the table shows the IDs of the web-services. The next rows show the values of the output and input parameters that are used to replace the proper ones when faults are injected, and the IDs of the web-services that are invoked instead of the proper ones. The probability that a fault would be injected was set to 33% for all fault types.
### Table 2

| web-service | TicketRS | HotelBS | TrainTR | PlaneTR |
|---|---|---|---|---|
| output parameter | "Yes", "No" | "OK", "No" | "Success", "Failure" | "True", "False" |
| alternative web-services | "HotelBS", "PlaneTR", "TrainTR" | "TicketRS", "PlaneTR", "TrainTR" | "TicketRS", "HotelBS", "PlaneTR" | "TicketRS", "HotelBS", "TrainTR" |
The outputs generated by TicketRS, HotelBS, TrainTR and PlaneTR depend on the interval between the date of reservation and the date of the football match. If the interval is equal to or longer than the assumed threshold, the respective web-service generates a positive answer; otherwise the answer is negative. The thresholds were set as follows: 15 days for TicketRS, 5 days for HotelBS, 1 day for TrainTR and 30 days for PlaneTR. These rules were introduced into the implementation of the web-services.
In the experiment the reservation date is an actual date (a day on which the process was invoked) and the date of the football match is the date that was specified by the user during the FRS invocation.
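As a minimal executable restatement of these rules (the example dates are our own):

```python
from datetime import date

# Interval thresholds (in days) and answer pairs, as set in the experiment.
THRESHOLDS = {"TicketRS": 15, "HotelBS": 5, "TrainTR": 1, "PlaneTR": 30}
ANSWERS = {"TicketRS": ("Yes", "No"), "HotelBS": ("OK", "No"),
           "TrainTR": ("Success", "Failure"), "PlaneTR": ("True", "False")}

def availability(service, match_day, today):
    """Positive answer iff the reservation interval meets the threshold."""
    positive, negative = ANSWERS[service]
    return positive if (match_day - today).days >= THRESHOLDS[service] else negative

print(availability("HotelBS", date(2015, 6, 20), date(2015, 6, 16)))  # -> "No"
```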
During the experiment the SFIBP could generate various combinations of the three types of faults listed above, or introduce no fault at all. Since each of the three fault types is either injected or not, there are 2³ = 8 different fault configurations for each of the web-services, and hence 8⁴ = 4096 (about 4000) configurations for the whole system.
At the end of the experiment its results were analysed and the fault coverage for the test scenarios was calculated. Table 3 summarises the results. It reports, for each of the web-services, the total numbers of faults injected and detected, grouped by fault type.
### Table 3

| faults | TicketRS IP | TicketRS OP | TicketRS WS | HotelBS IP |
|---|---|---|---|---|
| injected | 304 | 212 | 348 | 144 |
| detected | 295 | 208 | 348 | 140 |
| FC | 97% | 98% | 100% | 97% |
Due to the nature of the example, the majority of the injected faults are related to the first web-service (TicketRS) and the minority of them to the last web-service (PlaneTR). Almost all injected faults were detected by the test scenarios. The average fault coverage calculated from the results of the experiment was 98%.
### 6 Conclusions
The paper describes a statistical experiment carried out to evaluate test scenarios generated for the validation of BPEL processes in the context of testing those processes. Test generation is a time-consuming activity, so the possibility of having one set of test scenarios providing accurate results for both validation and testing was worth investigating.
The experiment was performed on a small example orchestrating five web services. For this system, the SFIBP was able to generate three types of faults, giving in total about 4000 different fault configurations. For more complex systems the number of different fault configurations may be much higher than for the FRS. That is why statistical rather than exhaustive testing was performed. It illustrates a general approach to the problem.
The experimental results seem very promising. The calculated fault coverage shows that almost all injected faults (98%) were detected by the test scenarios. The results confirm the earlier assumption that, in the case of BPEL processes, validation test scenarios may be adequate also when they are used for testing. Hence, it seems that validation tests might give strong support for testing. However, the experiment was carried out on only one simple system and focused on faults that merely simulate implementation errors. More experiments are needed in order to make the conclusions more general. This will be one of the main goals of our further research.
Generic Interfaces to Remote Applications in Open Systems
M. Merz and W. Lamersdorf
Department of Computer Science, University of Hamburg, Vogt-Kölln-Straße 30, D-2000 Hamburg 54, Germany; eMail: [merz|lamersd]@dbis1.informatik.uni-hamburg.de
Abstract
Future industrial production and engineering environments will profit substantially from emerging open distributed computer communication network environments. They will, in principle, be able to benefit from the high potential of services available in such environments to support individual client applications. In practice, however, free and flexible client/server cooperation is frequently hindered by the great and confusing variety of interfaces involved in accessing various heterogeneous network services.
In order to support open client/server cooperation in distributed systems, this contribution proposes a unifying description mechanism for remote services in computer networks. It describes an application-oriented generic communication service which facilitates client/server cooperation in open systems. The most important basis for such a service is a uniform service specification mechanism for open server interfaces. Correspondingly, the paper first specifies a service interface description language (SIDL). It then shows how such a service interface description can also be used for the automatic creation of server-specific local human user interfaces. In combination, a generic network interface description language (NIDL) specification, as proposed here, supports client applications in open systems by providing a common mechanism to access and utilize any service available anywhere in the network.
Keyword Codes: C.2.4, H.4, H.5.0
Keywords: Distributed Systems, Communications Applications, Information Interfaces and Presentation, General
1. INTRODUCTION
Since the integration of formerly - logically and geographically - separated and heterogeneous software systems has become an issue for research and industrial implementation, new concepts to support this kind of 'interoperability' have gradually emerged: programming languages have been extended to cover communication requirements between separated modules [1]; advanced database systems have started to support distribution schemata for the allocation of distributed objects [2]; and specific application-oriented communication standards have been developed for, e.g., accessing remote database services in open network environments [3]. As a result, the system designer in such a 'distributed application' scenario is now confronted with a large number of different kinds of interfaces to the various services offered anywhere in the network. This variety of interfaces spans from the human user, on the one hand, to a (potential) multitude of heterogeneous and different remote applications in the open systems environment, on the other. In such a scenario, the potential cooperation of different users and system components is, in practice, hindered by the multitude of different servers and interfaces generally offered in the heterogeneous open system network environment.
Consider, as a simple example, an access facility to remote database services in open systems, which involves the following (different) interfaces (see Figure 1): the user interface between the human user and a (local) front-end application software, the interface between this software and appropriate network communication services, its counterpart at the remote site and the interface between the remote application and its local resources. For each of these interfaces there are several de-jure and de-facto standards available, but in most cases these standards have been specified for the requirements of some specialized application contexts (e.g. electronic mail or document interchange standards [4]) or they concentrate on system-oriented services, like communication standards, adhering to the ISO reference model for Open System Interconnection (OSI). Only recently, the interdependence of several of such interfaces between the human user, on one hand, and remote applications in open systems, on the other hand, has become an issue for international standardization in the context of the fundamental work on a general framework for 'Open Distributed Processing' (ODP) of ISO and CCITT [5].
The situation becomes even more complex if we consider future developments of high-speed communication systems: in future distributed open systems, interconnected by high-speed networks, a vast number of services will be easily accessible at high transfer rates and a high level of distribution transparency [6]. In such environments, remote applications will play the role of dedicated servers performing specialized tasks, rather than monolithic software systems like today's host applications. Here, remote servers are accessible through specific server interfaces in a similar way to both local and external communication partners [7].
Especially in a scenario as sketched above, it would create great confusion for human users if the human interface to such a fine-grained "market of services" were reflected by a similarly fine-grained structure of different (!) server interfaces. Therefore, access to various services in open systems could be greatly supported and improved if all accessible services could be described in terms of a unified, standardized, and commonly known formal notation. Such a unifying formalism for network (service) interfaces of any kind is called a 'Network Interface Definition Language' (NIDL) and serves to provide a corresponding service description for each server that is directly or indirectly accessible to any (remote) client node [8].
This paper focuses on the joint and unified design and description of both communication and user interfaces in a heterogeneous open system scenario. The goal aimed at here is a generic system software component, which dynamically generates the required user interfaces from any specific 'Service Interface Description' (SID) expressed in terms of the standardized NIDL (see Figure 2). In such a scenario, the SID could serve both for determining the network interface of a remote server and for its (e.g. graphical) appearance at the presentation level of a local human user interface [9].
The paper is organized as follows: Section 2 gives a survey of potential problems arising when current client/server interfacing techniques are applied in an open systems context. Based on these observations, requirements for successful client/server interaction are elaborated in Section 3. Afterwards, a client/server interaction model is presented and, based on this model, a prototype implementation, including an introduction to the service interface description language SIDL (Section 4). Some conclusions are finally presented in Section 5.
2. INTERFACING PROBLEMS IN OPEN SYSTEMS
As a motivating example, a car reservation service is considered that a car rental company offers to its clients based on a distributed application service. Several questions arise within this context: how are data entry forms to be presented at a remote client's site? How is service access to be supported for a client? What does the client's software need to know about the service considered? How can the validity of the transferred data be assured? And, finally, how does the client software adjust to a possible release change of the server interface?
As a first step, the following sections give a short classification of these interfacing problems of client/server systems in open distributed environments. This classification focuses on problems arising specifically in an open systems context.
2.1 Heterogeneity Problems
There are several levels of heterogeneity in open distributed processing that emerge from the integration of different multi-vendor hardware and software components. At certain levels, heterogeneity is even desirable, as the specialization of software systems requires "non-standard" implementations. In general, however, heterogeneity hinders the desired cooperative interaction of distinct distributed applications.
At the lowest level of heterogeneity, different hardware implementations and, thus, varying physical representations of data values at each local system have to be integrated by corresponding mapping mechanisms between heterogeneous system components. In the context of the ISO OSI reference model for open system communication, this transformation task is performed by the presentation services at level 6 of the model [10].
At a higher level of abstraction, heterogeneity problems concern differences in services and resource management functions: communicating applications first have to agree on a jointly supported protocol as well as on certain communication quality attributes. Local resources, like file systems, databases or operating system services, cannot be accessed from remote systems if different and inconsistent interfaces are supplied. Finally, application semantics may vary, even though the interfaces may seem virtually identical at the syntactic level.
In general, the most promising approach to tackle heterogeneity problems is to standardize interfaces or applications as a whole. A standardization procedure, however, is a time consuming process and hinders an immediate "publication" of new services. Therefore, generic application oriented communication standards may be well suited for defining the basic communication protocols necessary for each class of similar applications (like, e.g., Remote Database Access, RDA)[11].
In our case of integrating communication and user interface description techniques, the standardization may also cover the syntax of a service description that is then transmitted as an individual 'protocol data unit' (i.e. standardized message) over the network. Thus, each individual server is able to export such a description of its services provided to any of its potential clients in a unified way.
2.2 Service Access Problems
The next important question to be addressed in such a scenario is, for example, how to initiate access to a remote application before actually interacting with it.
In the context of a Local Area Network (LAN), e.g., local 'context servers' may be accessible via a dedicated 'name server'. Using such a server, a client may send a service ID to the name server and then receive all (or some, the 'best possible', etc.) information necessary to perform a remote service invocation. Here, the name server's task is to check whether a service is registered under the given ID by, e.g., looking up a local table. In this simple case, only required and registered service IDs have to be matched, since developers of client and server applications took specific care for these IDs to conform to each other. In more complex scenarios the match between client requirements and server potentials could be provided by a specific distributed system service (a so called 'trader' or 'broker' component) based, for example, upon a more extensive formal specification of both requestor and server functionality [12].
In open service environments on a global scale, an ID-based service selection is possible but not satisfactory for client service users: IDs have to be centrally reserved for each service offered, and clients have to know in advance the IDs of all accessible servers that provide potentially useful functions. This situation could be greatly improved if services could be described and identified by potential clients semantically, i.e. not only by a more or less characteristic service name or ID, but in terms of a specification of their semantic functionality and respective properties. If such a service description is available, the service selection mechanism (which is part of the 'trader' function - see below) is no longer restricted to using IDs as the only attribute for specifying and identifying services, but can be based on matching the attribute semantics of both the client request and the respective server function [13]. However, even this simple task of attribute-based matching of client requests to registered services requires additional support by a new distributed service component which matches client requests to the 'right' server functions available anywhere in the network. In advanced distributed open system scenarios, such a component is called a 'trader' [14].
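A minimal sketch of attribute-based trading under these assumptions: servers register flat attribute lists (of the kind shown in the export description of Section 4.4) and a client request is a partial list that must match exactly. The registry entries are illustrative.

```python
# Registered export descriptions; entries and attribute values are illustrative.
REGISTRY = [
    {"SERVICE_CLASS": "CAR_RENTAL", "SERVICE_NAME": "RentACar",
     "SERVICE_FEE_CURRENCY": "USD"},
    {"SERVICE_CLASS": "HOTEL_BOOKING", "SERVICE_NAME": "SleepWell",
     "SERVICE_FEE_CURRENCY": "EUR"},
]

def trade(required):
    """Return all registered services whose attributes match the request."""
    return [s for s in REGISTRY
            if all(s.get(k) == v for k, v in required.items())]

print(trade({"SERVICE_CLASS": "CAR_RENTAL"}))  # -> the RentACar description
```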
2.3 Conformance Problems
In heterogeneous distributed systems, client calls are usually transmitted to remote server functions via an abstract communication mechanism: the Remote Procedure Call (RPC) [15]. The RPC communication mechanism aims at hiding nearly all distribution and communication problems from RPC users - even in certain (e.g. message transmission) error situations. For a local system component which uses RPC - for example a client calling a remote server - all server functions are locally represented by a so-called 'stub' function. This stub component, which is part of any distributed RPC-based communication function, performs all necessary transformations from local to communication contexts (resp. vice versa) and handles message transmission and reception.
In advanced distributed systems based on RPC, the necessary local stub can be automatically generated, based on a respective local and remote interface specification. In addition, when using such 'stub generators' to automatically create client and server interface code, client and server parameter types will implicitly match since they are derived from the same interface specification.
In open systems environments, however, the situation is somewhat more difficult: the developers of client and server applications are, in general, remote and unknown to each other, and clients cannot be expected to run specific compilers for client interface stubs. Further, they cannot rely on an interface description as imported from a remote server site, since data types, as defined at the server node, may not conform to the server's actual interface data types - or the exporting component may be faulty or even malicious.
In order to automatically support the development of interfaces between client and server components in open systems, at least two requirements have to be fulfilled: first, the protocol data unit (i.e. the 'message') that carries the request has to contain type information about the parameter values it contains and, second, a service description has to be imported by the client from the server in order to (type) check the conformance of the actual request parameters and types with those of the server as specified in its service description. To avoid using uncompiled source code stubs, the service description notation should be interpreted directly by a generic client stub.
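The following sketch shows the kind of conformance check a generic interpreting stub could perform against an imported service description. The description format and the type mapping are our assumptions, not the actual SIDL implementation; the labels follow the car rental example used later in the paper.

```python
# Hypothetical description imported at binding time.
SERVICE_DESCRIPTION = {
    "SelectCar": {"request": [("Booking Date", "STRING"), ("# Days", "INTEGER")]}
}
PYTHON_TYPES = {"STRING": str, "INTEGER": int}

def check_request(procedure, args):
    """Reject a request whose actual parameters do not match the description."""
    spec = SERVICE_DESCRIPTION[procedure]["request"]
    if len(args) != len(spec):
        raise TypeError("arity mismatch")
    for value, (label, sidl_type) in zip(args, spec):
        if not isinstance(value, PYTHON_TYPES[sidl_type]):
            raise TypeError(f"{label}: expected {sidl_type}")

check_request("SelectCar", ["2024-06-01", 3])    # passes silently
# check_request("SelectCar", ["2024-06-01", "3"])  # would raise TypeError
```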
2.4 Standardization Problems
As a consequence of what was addressed in the preceding paragraphs, an extended, formalized, and unified - i.e. standardized - service interface description technique is a necessary prerequisite for any effort - human or system supported - to match client requests and server offerings in an open systems scenario. Such a standard service description mechanism could then become the basis for a more elaborate trader service component that supports client/server matching in open system environments, where client and server functions are not only provided on distinct nodes of the network, but also independently developed and, in many cases, unrelated and unknown to each other.
The issue in this case is to commit to a reasonable scope of standardization: are only syntactical aspects to be defined, as is the case with ASN.1 [10], or are service primitives to be covered as well? To what extent is the interdependence between user interface elements, data types and service primitives a standardization matter? A standard could prescribe types and their appearance at the user interface level, or it could be more appropriate to design a generic service with the option for application-specific extensions. In the following sections we concentrate on an executable typed protocol description.
3. ELEMENTS OF AN INTERFACE DESCRIPTION LANGUAGE FOR OPEN SYSTEM SERVICES
As stated above, RPC provides an appropriate cooperation paradigm and communication mechanism for client/server applications in distributed environments. For open networking systems on a global scale, however, an additional important question has to be raised: how can an RPC interface description (as known, e.g., from Sun RPC) be conceptually extended to address at least the following problems:
• First, an abstraction layer has to be created to cover those heterogeneity problems which arise from any mismatch of the involved interfaces as shown in Figure 1.
• Second, means to express the functionality of a server to both human and software clients are to be supplied to a common trading service.
• Third, since clients and servers have no implicit knowledge about each other's interfaces, this knowledge has to be made explicit dynamically to prevent mismatching interaction.
• Finally, a sufficient level of generality is required if the previous aspects are to be covered by an integrating standard.
Applied to the car rental example, a service interface description notation should define all necessary elements that enable the car booking task to human clients: different user interfaces might be involved, invalid data values and types are to be rejected. The user may first enter a specification form of the required car and thereafter acknowledge a final order form. Therefore, several structural and behavioral constraints have to be explicated within the service description.
Consequently, we have identified the following elements of an extended interface description:
### 3.1 Type and Procedure Description
The necessary information about service functionality and interface could, in an RPC scenario, be provided based on a unified and standardized service interface description language (SIDL) for the remote procedure call interfaces of any remote service available to any client in the network.
Automatic stub generation is well applicable in local area network environments, where both client and server code are written and used by a limited group of closely related developers [16]. In open system environments, however, a user cannot be expected to compile and link client interface code. Under such conditions, a generic client interface with an *interpreting stub* is more appropriate. But with only a *generic communication interface*, the client application code still remains server specific. By replacing this code with a *generic user interface*, the client application as a whole becomes able to *adapt dynamically* (i.e. 'automatically') to any interface required by a server. Therefore, the following aspects have to be formalized as well in order to be interpretable by generic client applications:
### 3.2 Export Description
A server's location is, in general, not known to a client in the open network. If a user searches for a service to utilize in such a scenario, a communication connection cannot be established between client and server until the client has acquired a matching service description. In advanced distributed application environments, this task of selecting an appropriate service and providing the communication link to it is supported by a *trader* component or service. The trader's task is to register service descriptions, which it receives from servers anywhere in the network, and then to facilitate a client's search for a specific service according to its request. In the *attribute list* based service description technique proposed here, the server provides two alternative service description techniques: the *export description* and *natural language tags*. Correspondingly, a trader offers two alternative service acquisition techniques in our model:
1. A trading mechanism, based on *attribute lists* (the *export description*) [17].
2. An interactive *browsing* mechanism through registered services by the user based on *natural language tags*.
To support the task of service selection, the service interface description as proposed in this paper is extended by an *export description*, which characterizes the interface of a server as a whole on the basis of a server specific (formalized and standardized) *attribute list*.
### 3.3 User Interface Description
In our service interface description technique, the interface description items (such as type, procedure, state and export descriptions) are extended by a *label* and a *comment* attribute for each service description. These extensions may be used to supply natural language annotations of the item they are bound to. They serve as an additional, redundant, user-oriented tag to support an interactive analysis of service characteristics which are available and potentially useful for specific client requests in the network.
Since a generic user interface is driven by the service interface description, type-specific editors for data values can help to prevent potential type mismatches. Each type that can be defined within the service description is automatically mapped onto a specific editor structure. Further constraints, like subrange types for integers, should also be definable. These constraints are reflected by specific user interface objects, which prevent the input of non-type-conforming data values. Thus, the generic client application is able to prevent the transmission of faulty parameters to remote servers.
In order to reflect, e.g., a subrange type, several user interface objects are selectable; therefore, the service description language is additionally extended by various presentation hints for the respective data types. These descriptions can then be used to automatically create a (graphical) user interface representation of the respective remote service interface values on a local I/O device - independent of the server and its function accessed in the open network environment.
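A sketch of how such a RANGE hint could be turned into an input validator; a real generic UI would additionally pick a widget (slider, radio group, entry field), which is omitted here. The function and its behaviour are our assumptions for illustration.

```python
def make_editor(sidl_type, low=None, high=None):
    """Build a validating 'editor' for a SIDL-style (possibly ranged) type."""
    def accept(raw):
        if sidl_type == "INTEGER":
            value = int(raw)                     # rejects non-numeric input
            if low is not None and high is not None and not (low <= value <= high):
                raise ValueError(f"out of range [{low}, {high}]")
            return value
        return raw                               # STRING: accept as-is
    return accept

days_editor = make_editor("INTEGER", low=1, high=100)  # the "# Days" field
print(days_editor("14"))   # -> 14
# days_editor("500")       # would raise ValueError: out of range
```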
### 3.4 State description
In general, servers can be classified as stateless or stateful. The services of a stateless server can be invoked in any order, while in the case of stateful servers only a subset of all possible invocation sequences is allowed. A specification of this subset of allowed state transitions is part of the protocol description of the respective server. If a formal service description contains information about these 'legal' server states, remote clients can acquire that description and restrict their behaviour to what is allowed at the server interface.
Consequently, the last part of our service interface description is a formal description of a state transition automaton which specifies legal server states and transitions as initiated by server functions and service requests. By defining communication states as a part of the service description interpreted at run time, they are not "hard wired" within client and server instances. Therefore, the state description serves as an application protocol specification.
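A minimal sketch of such a run-time state guard; the transition table for the car rental server is an illustrative assumption, not the paper's actual automaton.

```python
# Hypothetical legal transitions of the car rental server.
TRANSITIONS = {
    ("idle", "SelectCar"): "reserved",
    ("reserved", "CommitBooking"): "booked",
    ("reserved", "CancelBooking"): "idle",
}

class StateGuard:
    """Traces the server state and rejects out-of-order invocations."""
    def __init__(self, state="idle"):
        self.state = state

    def call(self, procedure):
        key = (self.state, procedure)
        if key not in TRANSITIONS:
            raise RuntimeError(f"{procedure} not allowed in state {self.state}")
        self.state = TRANSITIONS[key]

guard = StateGuard()
guard.call("SelectCar")       # ok: idle -> reserved
guard.call("CommitBooking")   # ok: reserved -> booked
```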
### 3.5 Implementation Architecture Overview
In the following, the general system architecture of a prototype system environment is presented, which implements in a small example the basic components of a distributed client/server environment with formally and uniquely defined interfaces as presented above. A first prototype of the respective main system components has been implemented in the context of a locally distributed heterogeneous open system network environment.
The corresponding system model involves four kinds of components: client, client agent (CAG), server agent (SAG) and server. Clients and servers consist of application code and an interface to their respective local agents. The interaction between clients and servers can be divided into five phases: start-up of the components, binding between client and server, service invocation, unbinding, and shut-down of the components. The process of binding implies the selection of a server as well as the import of the server's service description. As shown in Figure 4, at the beginning of a binding the service description is stored at the server's site after being converted from an external representation. The next step is the transfer of the service description to both agents, where it is stored persistently. Only the internal representation of a service description needs to be standardized, since it is interchanged between heterogeneous components.
During a *service invocation* RPC parameters are transferred via both agents in order to perform the necessary conformance checks. If there is a mismatch between specified types and the parameter types transferred, it is discovered by the agent local to the sender, and an error code is returned.
Instead of involving a specific client application, parameter values are mapped directly to the user interface level. Therefore, the generic user interface supports functions for the user to select an appropriate server, to examine the service procedures offered by this server and, finally, to invoke selected procedures. Thus, the communication-oriented process of binding between client and server is reflected at the user level by this service selection process.
The actual service invocation requires the user to supply the RPC with parameter values. Therefore, the generic user interface generates a typed form for parameter entry (Figure 8). The required type description is retrieved from the local CAG. Return values are presented in the same way.
By involving this generic style of user interface to remote services, conformance between the client and server interfaces is given implicitly. The possibility of non-conformance is, however, left at the semantic level of an application. The effort of client development is therefore reduced to only one implementation per hardware and software platform.
The prototype, implemented according to the model described above, supports the integration of user interface and communication service aspects. Developing a new server application solely requires coding the service procedures on top of the server communication interface and describing these procedures by means of a SIDL service description: the formal parts as type, procedure, state and export descriptions and, optionally, the informal part of the user interface description as natural language tags.
The prototype implementation was developed on the basis of the Sun RPC library using XDR (eXternal Data Representation) [16] as the presentation service. The technical environment consists of IBM RS/6000 and Sun SPARC stations running AIX and SunOS as operating systems. In an implementation designed for heterogeneous and open system scenarios, the allocation of the involved client and server components to hardware systems is not restricted to the example configuration shown in Figure 5.
4. A SERVICE INTERFACE DESCRIPTION LANGUAGE
This section presents some technical details of the Service Interface Description Language (SIDL) as proposed in this paper. As a consequence of the service description elements as presented above, a SIDL service interface description contains four main components:
- The **type description**, defining at least one type for RPC parameters to be transferred,
- the **procedure description**, which describes remote services as a procedural interface,
- an optional **state description** in case of statefull servers, and
- an **export description**, which classifies the service exported on basis of attribute lists.
The remaining user interface description is syntactically integrated into these components.
4.1 Data Type Declarations
Any remote procedure call may require structured parameter or return values to be transmitted between client and server interfaces. Therefore, an orthogonal and complete type system is necessary to describe these types uniquely. In distributed systems, pointer or reference types are not allowed, since their values are invalid outside their local context. However, transformation functions have to be supplied to transform between tree or list structures as local representations and unique bit sequences as a general transfer representation.
Accordingly, the SIDL type system contains the following types:
**Basic types:**
INTEGER, DATE, CARDINAL, FLOAT, CHAR, STRING and TEXT
**Structured types:**
RECORD { ... }, CHOICE { ... } and SEQUENCE { ... }
**Opaque type:**
ANY
The TEXT type refers to a text file on the local workstation, which can be embedded into a RPC parameter. CHOICE specifies the variant part of a RECORD discriminated by a type tag. A SEQUENCE type denotes a repetition of identical subtypes. The opaque type ANY allows dynamic types, which are dynamically received at runtime but not checked for conformance, since their actual type can not be anticipated at binding time.
According to the SIDL syntax definition, a type declaration can be extended optionally by a list of attribute/value pairs. These may concern subrange restrictions of a type or hints for the user interface representation. The following parameter type
```plaintext
TYPE SelectCarT RECORD {
STRING, LABEL "Booking Date";
INTEGER, LABEL "Mileage", RANGE TINY 50 10000;
INTEGER, LABEL "# Days", RANGE TINY 1 100;
INTEGER, LABEL "Model", COMM "For a broader range of models consult our service at main branch",
RANGE RADIO 3 "BMW 323" "VW Golf" "Fiat UNO";
STRING, LABEL "Customer Name";
STRING, LABEL "First Name";
STRING, LABEL "Street";
STRING, LABEL "Zip Code";
STRING, LABEL "City";
CHOICE {
INTEGER LABEL "Visa";
INTEGER LABEL "Master";
INTEGER LABEL "Amex";
INTEGER LABEL "Invoice";
} LABEL "Payment";
};
```
defines a record type that contains nested structured and basic types. Some integers are constrained to a subrange, e.g. 1 to 100 in the case of the "# Days" field. Thus, range constraints can be taken into account by a generic user interface in order to reject input of data values that do not satisfy the type constraints. Extension list attributes are subject to standardization in order to be interpreted correctly at heterogeneous sites. For the automatic generation of user interfaces, however, they are treated as hints, since they may not necessarily be considered by the generator.
4.2 Service Procedure Interface Description
Service procedures may differ in parameter type or in call semantics. To define these procedures, they are supplied with an attribute list that contains at least the mandatory attributes REQUEST and RESULT, which refer to SIDL data types. Further attributes controlling the remote procedure call semantics can be supplied optionally.
The following example shows how natural language extensions are embedded into a procedure description. The standardized keyword COMM (for 'comment') is followed by an annotation that contains hints for the human user on the intended procedure semantics. This information should be accessible to a human user while searching for a suitable service at a generic trader function.
```plaintext
PROCEDURE SelectCar {
    REQUEST SelectCarT;
    RESULT ResultType;
    TIMEOUT 120, COMM "Check availability takes time";
    /* more optional attributes */
}, COMM "Claims reservation, committed by CommitBooking";
```
The interactive trading mechanism requires natural language annotations as given by the COMM extension presented above. A server developer is encouraged to use annotations within the service description as well as a client's user is encouraged to browse through a trader directory when searching for appropriate services in the network. Finally, the generic user interface should provide the possibility to retrieve this additional information about the service procedures the user is currently working with.
4.3 State Description
As mentioned above, stateless servers can perform client requests at the server site in any order. Stateful servers, however, require a client to issue server calls in a distinct order. In the car rental example, this means that a reservation can only be committed by a user when preceded by a car model selection. Such restrictions on how to use a specific server are, in general communication specifications, part of the protocol or state description of the respective server's behaviour. Usually, state-transition diagrams, resp. finite state machines, are used to model the range of valid sequences of potential service calls. Since the states, as they appear at the RPC interface level, are specific to the server application, an individual application protocol specification also has to be supplied for each service as part of the service interface description (see Figure 7).
During the client/server interaction, the server state is traced by the CAG and SAG in order to provide a state conformance check for further RPCs. State specifications may also contain an extension list with annotations for each transition, resp. for the state description as a whole.
4.4 Export Interface Description
The export interface specification is an optional part of a service interface description and provides an attribute list which characterizes the service as a whole. In contrast to, e.g., extension list keywords, export definition attributes are not standardized within the scope of SIDL; they are currently restricted to providing an option for additional informal and server-specific service descriptions (which, of course, could later on be standardized as well). For example, the following export definition describes a car rental service:
```plaintext
EXPORT {
SERVICE_CLASS CAR_RENTAL;
SERVICE_NAME "RentACar";
SERVICE_FEE_CATEGORY PER_INVOCATION;
SERVICE_FEE_CURRENCY USD;
SERVICE_FEE_CHARGE 0.1;
};
```
4.5 User Interface Description
The user interface specification of a remote service in open systems provides additional hints for a client, which may use them for an automatic graphical (window) presentation of the typed data values. Such hints have to stay at a certain level of abstraction in order to allow a wide range of potential window managers to support an implementation of a generic user interface on top. Type-specific editors of such interfaces may vary in their visual appearance; e.g., the type 'TINY integer' may be graphically represented as a slider or as an entry field. Figure 8 gives an example of how the user interface specification could be used for automatically generating a query form from the respective SIDL service interface description.
As an example, the upper right window of Figure 8 shows a service description file, where the type SelCarT is defined and used as parameter type for the SelCar service procedure. On the left side the generic client application is shown after binding to the car rental service, which supplies this procedure. The form windows in the left part of Figure 8 represent the parameter value for the procedure invocation. The actual parameter transfer is effected by pressing the "Write TDO" button: a Tagged Data Object (TDO) is generated from the current data value and sent to the server. The transfer syntax of this data object is checked for conformance by the CAG, resp. SAG component.
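The widget choice mentioned above can be sketched as a simple mapping from type descriptions to presentation hints. The following Python fragment is illustrative only; the 'TINY integer' type follows the text, while the widget vocabulary is an assumption:

```python
# Sketch: deriving a presentation widget from an abstract type
# description, as a generic UI generator might. The 'TINY integer'
# type follows the text; widget names are illustrative only.

def choose_widget(type_name: str, has_range: bool) -> str:
    # A bounded small integer can be rendered as a slider, since the
    # value range is known; unbounded values fall back to entry fields.
    if type_name == "TINY integer" and has_range:
        return "slider"
    if type_name.endswith("integer"):
        return "entry_field"
    if type_name == "string":
        return "text_field"
    return "generic_editor"

print(choose_widget("TINY integer", has_range=True))   # slider
print(choose_widget("LONG integer", has_range=False))  # entry_field
```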
5. CONCLUDING REMARKS
This contribution aimed at improved system support for matching specific client requests from application programs with appropriate generic remote server interface functions provided at dedicated server nodes in modern distributed and heterogeneous computer network environments. Such open systems environments typically contain a multitude of heterogeneous and autonomous client and server components which occasionally cooperate in performing specific distributed application tasks.
In order to support application development for open client/server environments, the paper addressed the important problem of describing the multitude of different (user and server) interfaces in a uniform, standardized and machine-readable way. Such a description is a basic prerequisite for systematic computer support of distributed client/server applications, whose cooperation is in practice often hindered by the lack of adequate (formal) interface description mechanisms. The paper therefore proposed a concept, described a language and briefly presented a corresponding prototype implementation of a unifying network/service interface description technique.
As a result, the proposed service interface description language (SIDL) considerably reduces both the complexity of accessing heterogeneous services in open systems and the implementation effort required for realizing open distributed applications, by providing the necessary system support for uniquely specifying all involved user, client and server communication interfaces. Finally, we have demonstrated how such interface specifications can also be used for the automatic creation of a local human user interface to any remote server with a corresponding formal service description, as proposed in this paper.
Future work in this area concentrates, e.g., on relaxing the interaction restrictions between user-oriented and communication-oriented components. For example, the human user interface should be freed from acting merely as a visual parameter entry stub for RPC invocation, while keeping the concept of generality via service descriptions loaded at binding time. Therefore, a deeper examination of corresponding User Interface Management System (UIMS) technologies and their relationship to respective communication-oriented services seems advisable, especially for future large-scale distributed information services in open systems.
Knowledge Integration for Domain Modeling
Armands Slihte, Janis Osis and Uldis Donins
Faculty of Computer Science and Information Technology
Institute of Applied Computer Systems, Riga Technical University, Riga, Latvia
Abstract. This research integrates artificial intelligence (AI) and system analysis by exploiting ontology, natural language processing (NLP), business use cases and model-driven architecture (MDA) for knowledge engineering and domain modeling. We describe an approach for compounding declarative and procedural knowledge in a way that corresponds to AI and system analysis standards, and is compliant for acquiring a domain model corresponding to MDA standards. We are recognizing the possibility of automatically transforming this knowledge to a Computation Independent Model (CIM) for MDA.
1 Introduction
Computer science has come a long way in understanding knowledge and developing means to manage it. Knowledge engineering is mostly associated with artificial intelligence (AI), but many aspects of system analysis also deal with it. There have been significant results and applications in both AI and system analysis. On the other hand, the integration between these two domains and the benefits it can offer has not yet been fully recognized. A few approaches have moved in this direction; we analyze these in the related work section. Nevertheless, none of them suggests a solution for acquiring the domain model automatically from the corresponding domain knowledge, which should be the case. There is no reason why we could not automatically generate a model for a domain for which we have all the corresponding knowledge explicitly defined.
The approach proposed in this paper provides a formal way to facilitate ontology for software engineering, more specifically for acquiring a Computation Independent Model (CIM) within Model Driven Architecture (MDA). It does not suggest a novel methodology for ontology development, but instead builds on existing methodologies. The approach suggests that ontology be directly used as an input for domain modeling by exploiting business use cases and natural language processing (NLP). We are using the Web Ontology Language (OWL) and the Protégé 4.1 tool for ontology development, and Attempto Controlled English (ACE) for natural language processing. We are using the Topological Functioning Model (TFM) as the CIM within MDA, acquiring a mathematically formal and thus transformable CIM.
1 This work has been supported by the European Social Fund within the project "Support for the implementation of doctoral studies at Riga Technical University".
This is discussed in more detail in Section 6 – Integration with MDA. In this paper we also describe the approach using an example of a library business system, to show how it can be applied. Moreover, a long-term goal of this research is to provide a corresponding toolset to support this approach, so that the automation of domain modeling can be complete.
This paper is organized as follows. Section 2 considers related work on combining knowledge engineering and system analysis. Section 3 distinguishes between declarative and procedural knowledge. Section 4 analyzes knowledge representation possibilities and focuses on controlled natural language, ontology and business use cases. Section 5 explains the approach for integrating knowledge for domain modeling and provides an example of a library business system. Section 6 describes how the suggested approach integrates with MDA.
2 Related Work
This work continues research on computation-independent modeling and specifically on the TFM for MDA started in [1], [2], [3] and [4]. As stated in [4], an informal description of the system in textual form can be produced as a result of system analysis. That approach proposes to transform a system's informal description into a TFM of the system. In this paper we show how to go even further and use formally defined knowledge as input for generating the TFM.
Other authors have been investigating how to combine AI and system analysis for the benefit of domain modeling, incorporating ontologies with MDA. Ontologies, as formal representations of domain knowledge, enable knowledge sharing between different knowledge-based applications. Diverse techniques originating from the field of artificial intelligence are aimed at facilitating ontology development. However, these techniques, although well known to AI experts, are typically unknown to a large population of software engineers [5].
In order to overcome the gap between the knowledge of software engineering practitioners and AI techniques, a few proposals have been made suggesting the use of well-known software engineering techniques, such as UML, for ontology development. An approach proposed in [6] is dealing with generating Resource Description Framework (RDF) from a UML model. RDF is a W3C XML-based standard for sharing ontologies on the Semantic Web. Another approach [7] proposes a transformation to semantic extraction of ontologies from UML models. Their initial presumption is that UML and ontologies complement each other. That is to say, UML is designed for building models by human experts, while OWL is designed to be used at run time by intelligent processing methods.
The two approaches mentioned earlier are useful if you already have the design model (UML) and want to acquire the ontology. From the perspective of MDA the order of the acquired artifacts is incorrect, because the design model, or PIM/PSM, should be derived from the CIM, which includes the declarative knowledge provided by an ontology. This means that the ontology has to come first in order to construct an accurate PIM/PSM. In this paper we insist on starting with the knowledge and not the design.
An independent ontology metamodel using the MOF has been developed in [8]; it is named the Unified Ontology Language (UOL). This is important from the perspective of MDA. OWL is a well-known standard for ontology development, but the transformation between OWL and a model defined according to MOF would not correspond to MDA standards. On the other hand, this newly introduced UOL still needs an ad-hoc transformation mechanism from OWL. Nevertheless, UOL could be considered in further stages of this research as a format for ontology.
3 Declarative and Procedural Knowledge
The traditional Artificial Intelligence (AI) techniques most frequently used to represent knowledge in practical intelligent systems include object-attribute-value triplets, uncertain facts, fuzzy facts, rules, semantic networks, and frames. Ontologies have acquired major importance in knowledge representation as well [5]. On the other hand, system analysis researchers have developed means to manage knowledge about business systems and processes, e.g. Business Process Modeling Notation (BPMN) and Model Driven Architecture (MDA).
Knowledge means understanding of a subject area. It includes concepts and facts about that subject area, as well as relations among them and mechanisms for how to combine them to solve problems in that area [5]. The term knowledge can be used, first, to refer to a state of knowing facts, methods, principles, techniques and so on; this common usage corresponds to what is often referred to as "know about". The second usage of the term is when it refers to understanding facts, methods, principles and techniques sufficiently to apply them in the course of making things happen; this corresponds to "know how". Cognitive psychologists sort knowledge into two categories: declarative and procedural [9]. From the perspective of a student who is learning: 1) declarative knowledge is what the student knows or understands (e.g. Riga is the capital of Latvia, a book catalogue has entries of books); 2) procedural knowledge is what the student is able to do (e.g. buy an airplane ticket to Riga, get a book at the library).
This distinction between declarative and procedural knowledge may seem obvious, but it has not often been recognized. To properly describe a business system in its environment, it is necessary to have both declarative and procedural knowledge. Approaches like BPMN and MDA are very strong at describing procedural knowledge, but lack AI's strength in dealing with declarative knowledge. In this paper we propose an approach which integrates declarative and procedural knowledge, providing a common approach for system analysis with the perspective of integrating with MDA.
4 Knowledge Representation
Before any system analysis process can start, it is necessary to acquire knowledge about the business system and its environment. Most of this knowledge is usually defined in different documents in the form of natural language. It is necessary to store this knowledge in a way that makes it understandable by a computer.
Attempto Controlled English (ACE) is a controlled natural language, in other words a subset of English with a restricted syntax and a restricted semantics described by a small set of construction and interpretation rules. It is a formal language and can automatically and unambiguously be translated into first-order logic. Although ACE may appear perfectly natural, it can be read and understood by both human and machine. One could say that ACE is a first-order logic language with the syntax of a subset of English. ACE can be used as a knowledge representation, specification and query language [10]. ACE was originally intended to specify software, but has since been used as a general knowledge representation language in several application domains. With the Attempto Parsing Engine (APE) it is possible to derive a syntax tree from ACE texts, which is crucial for the Topological Functioning Model (TFM) approach.
To someone who wants to discuss topics in a domain D using a language L, ontology provides a catalogue of the types of things assumed to exist in D; the types in the ontology are represented in terms of the concepts, relations, and predicates of L [11]. Some reasons for developing an ontology are: 1) To share common understanding of the structure of information among people or software agents; 2) To enable reuse of domain knowledge; 3) To make domain assumptions explicit; 4) To separate domain knowledge from operational knowledge; 5) To analyze domain knowledge [12]. Ontologies are used for different purposes, but this research focuses on ontologies developed for a business domain, describing business terms and their relationships.
Ontology is a perfect candidate for representing declarative knowledge about a business system and its environment. Ontology defines the terms used to describe and represent an area of knowledge. Ontologies include computer-usable definitions of basic concepts in the domain and the relationships among them. OWL [13] is a common standard for defining ontologies and will be considered for further knowledge integration for domain modeling.
Business use cases are not normalized or standardized by any consortium, unlike the UML use case diagram, which is defined by the Object Management Group. Business use cases should not be mistaken for UML use case diagrams. Moreover, there are many different use case templates, and the structure of a use case can be adjusted depending on the situation and the development team [14]. These textual business use cases are considered for representing the procedural knowledge. The following structure of a use case is considered: 1) use case title, 2) actors, 3) pre-conditions, 4) main scenario, 5) extensions, and 6) sub-variations. ACE is used for defining the steps of the business use cases [10]. Business use cases provide a formal data structure that can be used to represent the procedural knowledge about a business system; by using ACE we enable this knowledge to be processed by a computer.
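The six-part structure can be sketched as a small data type. The following Python fragment is illustrative only; field names follow the template above, and the sample steps are taken from the library example used later:

```python
# Sketch of the six-part business use case structure described above.
# Steps are plain strings here; in the approach they would be ACE
# sentences that APE can parse.

from dataclasses import dataclass, field

@dataclass
class BusinessUseCase:
    title: str
    actors: list[str]
    pre_conditions: list[str]
    main_scenario: list[str]          # ordered ACE steps
    extensions: list[str] = field(default_factory=list)
    sub_variations: list[str] = field(default_factory=list)

request_book = BusinessUseCase(
    title="Request a book",
    actors=["Client", "Librarian"],
    pre_conditions=["A client has a reader card."],
    main_scenario=[
        "A client searches for a book in a book catalogue.",
        "A client fills a request form.",
        "A librarian checks out the book from a book fund.",
        "A librarian hands the book to the client.",
    ],
)
```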
5 Integrating Knowledge for Domain Modeling
The steps of the business use cases are defined using ACE. This solves some of the natural language problems, but not all. ACE texts still do not rule out ambiguity: ACE does not restrict the usage of nouns, and it is possible to express the same meaning using different words. Another possible problem is the inconsistency of business use cases: there might be steps defined that do not make sense in the given business system or its environment. If there were a predefined lexicon for the specific domain, it would be possible to deal with these problems. Ontology can be used as this lexicon. In this section we propose an approach for integrating declarative and procedural knowledge. By exploiting ontology we prevent the ambiguity and inconsistency of business use cases.
5.1 Exploiting Ontology
The ontology is an extremely important part of the knowledge about any domain. Moreover, the ontology is the fundamental part of the knowledge, and all other knowledge should rely on it and refer to it.
**Fig. 1.** The class hierarchy of an ontology for a library. It includes the main classes that a library business system needs to function. The main actors are the Client and the Librarian, both sub-classes of Person. The class hierarchy also includes important concepts for a library business system such as Library, Book, Book Catalogue, Reader Card and Request Form.
This work does not propose a methodology for developing ontologies, but a methodology to use an already developed ontology for further knowledge engineering and system analysis. If no ontology has been defined for the business system before, it is necessary to build one by analyzing the business system and its environment. Available documents and expert knowledge are the main input for this development. Particularly in the case of information extraction almost every text introduces new terms, so we cannot assume that all terms encountered in the text we process will already be included in the ontology. The ability to add new terms to an existing ontology is crucial even when using an ontology whose structure has been formally defined [15]. This means that even if we initially have a defined ontology, it might lack some classes and properties for the business system or its environment under consideration.
To show how our approach uses ontology for integrating declarative and procedural knowledge, and then uses this knowledge for domain modeling, we consider a library business system as an example. The following figures show the ontology considered.
Fig. 2. The description of the Librarian class, showing some of the properties and the relationships between properties and classes. For example, one of the properties is checking out a book from a book fund, which is done by the librarian.
Developing an ontology includes: 1) defining classes in the ontology; 2) arranging the classes in a taxonomic hierarchy; 3) defining properties and describing the relationships with classes; 4) defining the individuals. Creating the class hierarchy is the first and second step (Fig. 1 shows an example). This ontology for an abstract library business system was developed by the authors of this paper to show an example. The third step is defining the properties and relationships between classes and properties (Fig. 2 shows an example). It is important that all the business system’s concepts and actions that can be associated with these concepts are defined. Recognizing a satisfactory scope for the domain is not easy. It will not always be possible to capture everything on the first try, but as mentioned before this has to be an iterative process. The ability to modify the ontology is crucial, because the scope of the domain can also change.
5.2 Developing Business Use Cases
Ontologies provide logical statements that describe what terms are, how they are related to each other, and how they can or cannot be related to each other. Business use cases, on the other hand, provide a formal way to define the procedural knowledge, showing step by step how a process is executed, what the variations are and which actors are involved. The problem with business use cases is that their steps are sentences in natural language. We restrict them by applying ACE, which guarantees that we can analyze each sentence by syntax and obtain a parse tree.
Fig. 3. A business use case for requesting a book in a library business system. First a client searches for a book in the catalogue and then fills a request form to get the book. The librarian's responsibility is to handle the request form, check out the book from the book fund and hand the book to the client. There are also variations.
Nevertheless, it does not guarantee that the terms used in sentences will be unambiguous. For example, there could be steps "Client fills a request form" and "Librarian denies a form". These steps are correct from a syntax perspective, but they are inconsistent, because in the first sentence the form that is meant for requesting a book is called a "Request form", while in the second sentence it is called a "Form". This would not be a problem if there were a predefined vocabulary which determines that these terms mean the same thing in this domain.
Controlled natural language also does not guarantee that a step will make sense for the given domain. For example, there could be a step "Librarian shows a reader card". This step is perfectly correct from a syntax perspective and may seem to make sense, because a librarian can also be a reader in a library; but in the given domain "Librarian" denotes the person who works for the library and is fulfilling this role at this moment in time. So this sentence actually does not make sense from the perspective of the domain.
We cannot put this much responsibility on the system analyst who develops these business use cases. This kind of ambiguity and inconsistency should be discovered and eliminated automatically. The approach suggested in this paper uses ontology to solve both problems – the possible ambiguity and inconsistency of the sentences. Consider the business use case shown in Fig. 3.
For the first problem, ambiguity, let us look at the first step, "Client searches for a book in a catalogue". We could also rephrase this as "Client searches for a book in a book catalogue". Notice that the second sentence specifies that it is a book catalogue and not just any catalogue. In this specific domain both sentences refer to the same object, and it is important that the correct objects are considered when the steps are analyzed. If we look at the library ontology's class hierarchy (Fig. 1), it is clearly defined that "Catalogue" is a super-class of "Book catalogue". This solves the problem of ambiguity in this case, because we know both sentences refer to the same object. Another problem appears if we rephrase the step as "Client searches for literature in a catalogue". The concept "Literature" is not defined in our ontology, so this sentence should be marked as invalid until someone defines the concept in the ontology. The same applies to the properties: if the sentence is "Client looks for a book in a catalogue" and the property "look" is not defined, this step should be marked as invalid.
Fig. 4. Parse tree of the business use case step "Librarian hands a book to a client".
For the second problem, inconsistency, let us consider the example mentioned earlier, "Librarian shows a reader card". From our ontology's property and class relationships (Fig. 2) we see that the relationship between "Librarian" and "Reader card" is defined by the property "check" and not "show". This implies that the sentence is invalid and should be corrected, or the ontology has to be modified. By checking the correspondence between properties and classes it is possible to deal with this problem.
To implement the solution for these problems technically, we analyze the parse trees of the sentences. Fig. 4 shows the parse tree of the business use case step "Librarian hands a book to a client". This sentence can be broken down into a verb phrase, a noun phrase and a prepositional phrase, and then further into verbs and nouns. The approach for knowledge integration requires that the nouns correspond to classes and the verbs correspond to properties. If they do not, an error should be raised and either the ontology or the business use case needs to be modified. For this example we see that "Librarian", "Book", "Client" and "hand" are defined by the library ontology, so the sentence is valid from the perspective of ambiguity – all terms are defined and understandable. From the perspective of consistency, "Librarian" and "Book" are associated with "hand". This association can be confirmed by the ontology, because a librarian hands a book.
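A simplified sketch of this check in Python, assuming the nouns and verbs have already been extracted from the parse tree (the ontology content loosely mirrors Figs. 1 and 2; the real approach operates on full APE parse trees):

```python
# Sketch of the ambiguity/consistency check described above, on
# pre-extracted nouns and verbs rather than full parse trees.
# Ontology content loosely mirrors Figs. 1 and 2.

CLASSES = {"Librarian", "Client", "Book", "Book catalogue", "Reader card"}
# (subject class, property, object class) triples
RELATIONS = {
    ("Librarian", "hand", "Book"),
    ("Librarian", "check", "Reader card"),
    ("Client", "search", "Book"),
}

def validate_step(subject: str, verb: str, obj: str) -> list[str]:
    errors = []
    # Ambiguity check: every noun must denote a known ontology class.
    for noun in (subject, obj):
        if noun not in CLASSES:
            errors.append(f"unknown concept: {noun}")
    # Consistency check: the verb must be a property relating the classes.
    if not errors and (subject, verb, obj) not in RELATIONS:
        errors.append(f"'{subject} {verb} {obj}' not supported by ontology")
    return errors

print(validate_step("Librarian", "hand", "Book"))         # [] -> valid
print(validate_step("Librarian", "show", "Reader card"))  # inconsistent
print(validate_step("Client", "search", "Literature"))    # unknown concept
```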
6 Integration with Model Driven Architecture
In previous work [1], [2], [3] and [4] the authors introduce an algorithm to automatically derive the TFM from textual use cases of a business system. The same business use case structure is used to define the procedural knowledge. The algorithm utilizes a statistical parser to analyze the syntax of use case sentences and identify functional features for the TFM. The problem there is the potential ambiguity and inconsistency of the business use case steps, which the authors solve by applying ontology in this paper.
The TFM offers a formal way to define a system by describing both the system's functional and topological features. The TFM is represented in the form of a topological space (X, Θ), where X is a finite set of functional features of the system under consideration, and Θ is the topology that satisfies the axioms of topological structures and is represented in the form of a directed graph [4]. The TFM represents the system in its business environment and shows how the system functions, without details about how the system is constructed. This research considers the TFM to be the CIM within MDA, thus acquiring a mathematically formal and transformable CIM.
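As a minimal illustration of the (X, Θ) structure, the following sketch represents a TFM as a directed graph over hypothetical functional features of the library example; the feature names are assumptions, not taken from the cited work:

```python
# Sketch: a TFM as a topological space (X, Theta), with X a finite set
# of functional features and Theta a directed graph of cause-effect
# relations. Feature names are hypothetical, based on the library example.

X = {"search_book", "fill_request_form", "check_out_book", "hand_book"}

# Theta: directed edges from a feature to the features it enables.
THETA = {
    "search_book": {"fill_request_form"},
    "fill_request_form": {"check_out_book"},
    "check_out_book": {"hand_book"},
    "hand_book": set(),
}

# Basic well-formedness check: every edge must stay inside X.
assert all(src in X and dst in X
           for src, dsts in THETA.items() for dst in dsts)
```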
The integration with MDA is already defined with the algorithm for deriving the TFM from business use cases [3]. Another branch of this research suggests a TopUML profile, which incorporates the topological nature of the TFM into UML. This provides unique benefits for MDA, because it makes it possible to acquire cause-effect relationships between methods for the PIM/PSM from the CIM [16].
7 Conclusions
This paper describes a novel approach for integrating AI and system analysis by bringing together ontology, natural language processing, business use cases and MDA. The approach provides a way to acquire the domain model automatically from the corresponding domain knowledge. It provides a formal way to use ontology for system analysis and suggests that ontology be directly used as an input for domain modeling by exploiting business use cases and natural language processing. This knowledge can be used for generating the CIM according to previous research on the TFM.
Future research includes: 1) developing guidelines for identifying the scope of the domain; 2) developing an algorithm for checking business use case steps for ambiguity and inconsistency against a given ontology; 3) developing guidelines for ontology development or defining the supported ontology development methodologies; 4) implementing a tool for business use case development which takes OWL as input; 5) integrating this business use case development tool with the TFM generation and TopUML tools.
This approach provides a new perspective for domain modeling, allowing the domain model to be generated from formally defined knowledge; thus exploiting the power of knowledge engineering, leaving less space for interpretation and enhancing MDA with a formal CIM.
References
How to report and handle Linux kernel regressions
Thorsten Leemhuis
regressions shall be fixed
"no regressions"
"no regressions"
aka "we don't break user-space"
even important fixes are reverted, if it turns out they cause regressions
looks like quite a few reported regressions are never addressed :-(
some, because the report was bad
some, because it was a bug and not a regression
some, because nobody located the change causing it
some, because developers didn't handle things appropriately
I'll show you how to get your regression fixed as a non-developer
I'll show you how to handle regressions appropriately as a developer
[ 1. what exactly is a regression ]
regression == a kernel update breaks something
It’s a regression if something running fine with one Linux kernel works worse or not at all with a newer version that's compiled using a similar configuration.
say your distro updated from v6.1.y to v6.2.y and your beloved software from 20 years ago then stops working
yes, that's a regression, as long as it's not caused by an **optional** new feature
this allows progress: introducing new features like security hardening techniques, even if they break ancient apps
you have to explicitly enable such features at runtime or build-time
luckily, new features known to cause regressions are few
a deliberate config change by your distro might have broken your beloved app from 20 years ago
recheck with self-built kernels
[ 2. someone must locate which change causes the problem ]
say you pay someone to upgrade your laptop and then something unrelated is broken
the person or company you paid has to fix things
the developer who caused a regression needs to fix it (or their superior)
just report your regression – maybe somebody knows what's causing it, because the root cause is already known or because someone can point in the direction of a likely culprit
this often works :-D
and often it does not :-( – somebody else then has to locate the culprit
in the end you get what you paid for: nothing
that's why it's your job as reporter to find the culprit – which often is needed anyway
culprit can often be found by compiling ~15 kernels
once culprit is known, it's clear who's responsible: author or committer
once culprit is known, it's often possible to resolve things quickly: fix or revert
perform a bisection! it gives you the lever to get most regressions quickly fixed
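a sketch of how those ~15 builds can be automated with `git bisect run` and a test script that exits 0 for good and 1 for bad (125 tells git to skip an unbuildable revision); the build and boot commands below are placeholders for your own setup:

```python
#!/usr/bin/env python3
# Hypothetical test script for `git bisect run`, used e.g. like this:
#   git bisect start <bad-tag> <good-tag>
#   git bisect run ./bisect_test.py
# Exit 0 = this kernel is good, 1 = bad, 125 = skip this revision.
# Build/boot/check commands are placeholders for your own environment.

import subprocess
import sys

def sh(cmd: str) -> int:
    return subprocess.call(cmd, shell=True)

# Build the kernel at the revision git bisect checked out.
if sh("make -j$(nproc) olddefconfig bzImage modules") != 0:
    sys.exit(125)   # tell git bisect to skip unbuildable revisions

# Boot it and run the reproducer; how you do this (test VM, serial
# console, ...) is specific to your setup.
if sh("./boot-and-check-wifi.sh") != 0:
    sys.exit(1)     # regression reproduced: mark revision as bad

sys.exit(0)         # works: mark revision as good
```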
[ 3. report the problem appropriately ]
improperly reported regressions might not be fixed
Documentation/admin-guide/reporting-issues.rst is your guide
Step-by-step guide how to report issues to the kernel maintainers
The above TL;DR outlines roughly how to report issues to the Linux kernel developers. It might be all that’s needed for people already familiar with reporting issues to Free/Libre & Open Source Software (FLOSS) projects. For everyone else there is this section. It is more detailed and uses a step-by-step approach. It still tries to be brief for readability and leaves out a lot of details; those are described below the step-by-step guide in a reference section, which explains each of the steps in more detail.
Note: this section covers a few more aspects than the TL;DR and does things in a slightly different order. That’s in your interest, to make sure you notice early if an issue that looks like a Linux kernel problem is actually caused by something else. These steps thus help to ensure the time you invest in this process won’t feel wasted in the end:
- Are you facing an issue with a Linux kernel a hardware or software vendor provided? Then in almost all cases you are better off to stop reading this document and reporting the issue to your vendor instead, unless you are willing to install the latest Linux version yourself. Be aware the latter will often be needed anyway to hunt down and fix issues.
- Perform a rough search for existing reports with your favorite internet search engine; additionally, check the archives of the Linux Kernel Mailing List (LKML). If you find matching reports, join the discussion instead of sending a new one.
- See if the issue you are dealing with qualifies as regression, security issue, or a really severe problem: those are ‘issues of high priority’ that need special handling in some steps that are about to follow.
- Make sure it’s not the kernel’s surroundings that are causing the issue you face.
- Create a fresh backup and put system repair and restore tools at hand.
- Ensure your system does not enhance its kernels by building additional kernel modules on-the-fly, which solutions like DKMS might be doing locally without your knowledge.
- Check if your kernel was ‘tainted’ when the issue occurred, as the event that made the kernel set this flag might be causing the issue you face.
- Write down coarsely how to reproduce the issue. If you deal with multiple issues at once, create separate notes for each of...
LF Live Mentorship Series: Make Linux Developers Fix Your Kernel Bug
Mentor: Thorsten Leemhuis, Linux Kernel Regression Tracker
Recorded December 7, 2022 | 07:00 AM
https://www.linuxfoundation.org/webinars/make-linux-developers-fix-your-kernel-bug
1. ensure your kernel is vanilla – both the working and the broken one, as "working" or "broken" might be due to distro modifications
2. base your report on a fresh kernel
3. ensure your kernel's and system's integrity.
4. submit your report to the right place
5. depict the problem adequately
Hi! Since updating from 5.19.5 to latest mainline (6.0-rc4, vanilla, untainted) my laptop (Lenovo T14s AMD Gen2 with Fedora 36) doesn't show any WiFi devices anymore. I noticed these error msgs in dmesg:
```
[ 2.065312] iwlwifi 0000:00:14.3: enabling device (0000 -> 0002)
[ 2.199881] iwlwifi: probe of 0000:00:14.3 failed with error -110
```
Does anyone have an idea what might be wrong here? Or is somebody maybe even working on a fix already? If not I'd be willing to perform a bisection to get down to the root of the problem.
Dmesg: https://example.org/myfiles/dmesg.txt
Kernel-Config: https://example.org/myfiles/config.txt [based on Fedora's]
Ciao, Thorsten
Documentation/admin-guide/reporting-issues.rst covers everything crucial for regressions
Documentation/admin-guide/reporting-regressions.rst holds additional details
Reporting regressions
“We don’t cause regressions” is the first rule of Linux kernel development; Linux founder and lead developer Linus Torvalds established it himself and ensures it’s obeyed.
This document describes what the rule means for users and how the Linux kernel's development model ensures that all reported regressions are addressed; aspects relevant for kernel developers are left to Handling regressions.
The important bits (aka “TL;DR”)
1. It’s a regression if something running fine with one Linux kernel works worse or not at all with a newer version. Note, the newer kernel has to be compiled using a similar configuration; the detailed explanations below describe this and other fine print in more detail.
2. Report your issue as outlined in Reporting issues; it already covers all aspects important for regressions, which are repeated below for convenience. Two of them are important: start your report's subject with "[REGRESSION]" and CC or forward it to the regression mailing list.
1. start your report’s subject with "[REGRESSION]"
2. CC or forward it to the regression mailing list <regressions@lists.linux.dev>
3. [optional] include a paragraph like this:
#regzbot introduced v5.13..v5.14-rc1
Is it a regression, if the issue can be avoided by updating some software?
Almost always: yes. If a developer tells you otherwise, ask the regression tracker for advice as outlined above.
Is it a regression, if a newer kernel works slower or consumes more energy?
Yes, but the difference has to be significant. A five percent slow-down in a micro-benchmark thus is unlikely to qualify as regression, unless it also influences the results of a broad benchmark by more than one percent. If in doubt, ask for advice.
Is it a regression, if an external kernel module breaks when updating Linux?
No, as the “no regression” rule is about interfaces and services the Linux kernel provides to the userland. It thus does not cover building or running externally developed kernel modules, as they run in kernel-space and hook into the kernel using internal interfaces occasionally changed.
further questions the documentation answers:
Is it a regression, if some test scripts find an API or ABI change?
Does the "no regression" rule apply if I seem to be the only person affected?
Does the "no regression" rule apply for code in the staging tree as well?
What happens if fixing a regression is impossible without causing another?
Is it a regression, if some feature I relied on was removed months ago?
reporting-issues.rst is informative for developers, too; but there is a dedicated doc for them as well
Documentation/process/handling-regressions.rst
Handling regressions
*We don’t cause regressions* – this document describes what this “first rule of Linux kernel development” means in practice for developers. It complements *Reporting regressions*, which covers the topic from a user’s point of view; if you never read that text, go and at least skim over it before continuing here.
The important bits (aka “The TL;DR”)
1. Ensure subscribers of the [regression mailing list](https://lists.linux.dev/archives) (regressions@lists.linux.dev) quickly become aware of any new regression report:
- When receiving a mailed report that did not CC the list, bring it into the loop by immediately sending at least a brief “Reply-all” with the list CCed.
- Forward or bounce any reports submitted in bug trackers to the list.
2. Make the Linux kernel regression tracking bot “regzbot” track the issue (this is optional, but recommended):
1. CC regressions list
2. [optional] tell regzbot
3. when fixing, point to the report using a Link: tag
e.g. like this:
Reported-by: Some Human <shuman@example.com>
Link: https://lore.kernel.org/r/123-msgid-456@example.com/
4. fix regressions quickly
fixes for most bisected regressions should be mainlined within 2 weeks
many regressions should be mainlined within one week
quite a few should be mainlined within two or three days
yes, those are ambitious targets, but there are reasons for them
see Documentation/process/handling-regressions.rst
that document also contains many other insightful things:
quotes from Linus on handling regressions – lots of them
how to deal with changes where a regression risk is known
whether regzbot should be involved for each and every regression
and how to interact with regzbot
[ 4. regression tracking ]
helps Linus et al. do a better job
Hi Linus. Below you'll find regzbot's report about regressions from this cycle that I or someone else told the bot about.
Let me highlight three issues you might want to know about:
* There is a long and ongoing discussion about problems with the NFS client where there is some disagreement whether this qualifies as a regression, as it's an old problem that is more likely to happen now. Not sure where this is heading; a proper fix seems unlikely to emerge quickly. At least it currently looks like this doesn't affect a lot of users.
ensures the "no regression" rule is no hollow promise
Linux kernel regression status
[next] [mainline] [stable/longterm] • [new] • [all] • [resolved] [inconclusive]
current cycle (v6.2.. aka v6.3-rc), culprit identified
- 02852c01f654
(v6.3-rc1)
- Build error in drivers/media/i2c/imx290.c if PM support is disabled by Guenter Roeck
Earliest & latest activity: 6 days ago.
- 1ec49744ba83
(v6.3-rc1)
- Build failures for sparc64:allmodconfig and parisc:allmodconfig with gcc 11.x+ by Guenter Roeck
Earliest & latest activity: 12 & 9 days ago. Noteworthy: [patch].
current cycle (v6.2.. aka v6.3-rc), unknown culprit
none known by regzbot
previous cycle (v6.1..v6.2), culprit identified, with activity in the past three months
- 4444bc2116ae
(v6.2-rc5)
- net: wireless: rt2800usb: wifi performance issues and connection drops by Thomas Mann
Earliest & latest activity: 2 & 0 days ago. Noteworthy: [patch].
- c408b3d1d9bb
(v6.2-rc5)
- thermal: cached max_state breaks ACPI processor cooling device by Zhang, Rui
Earliest & latest activity: 9 & 2 days ago. Noteworthy: [1], [2], [3], [4], [patch (SOB)].
- 63a7cb130718
(v6.2-rc5)
- btrfs: DISCARD storm towards NVME device be it idle or not by Sergei Trofimovich
Earliest & latest activity: 4 & 3 days ago.
classic bug trackers don't fit well into the Linux kernel's mail-based development workflow
regzbot in the ideal case just requires *one* additional task
when reporting a regression, add a paragraph like this to the mailed report:
#regzbot introduced v6.2..v6.3-rc1
or, with the culprit commit instead of a version range:
#regzbot introduced 1f2e3d4c5b6b
regzbot then watches out for replies
regzbot then looks out for patches posted to fix tracked regressions
regzbot considers a regression resolved once the fix lands
the connection is made through Link: tags pointing to the report [that's why they are important!]
to specify a fix manually, reply to report with a paragraph like this:
#regzbot fix 1f2e3d4c5b6a
to make regzbot track someone else's report, reply with a para like:
#regzbot ^introduced v6.2..v6.3-rc1
Get started with regzbot
- Why and how to make regzbot track a Linux kernel regression
- How to let regzbot know you are fixing a Linux kernel regression it tracks
- More regzbot features relevant for both reporters and developers
- Important basics: How to interact with regzbot
- Make regzbot track an existing report
- Update properties of a tracked regression
- change the range or commit that introduced the regression
- Update the report’s title
- Point regzbot to other places with further details about a regression
- Link and monitor a related discussion
- Point to a place with further details, like a bug-tracker
- Resolve a regression
- Mark a regression as fixed
- Duplicates
- Mark a regression as resolved
- Mark a regression as inconclusive
Why and how to make regzbot track a Linux kernel regression
When reporting a Linux kernel regression it is in your interest to make regzbot aware of the issue, as that ensures the report won't accidentally fall through the cracks; it also makes sure leading developers see the issue via the tracked regression website [or the weekly reports, which are not sent yet, but soon will be].
To get these benefits there is just one thing you need to do when reporting the regression by mail: include a line starting with `#regzbot introduced foo`, where
Reference documentation for regzbot, the Linux kernel regression tracking bot
- Basic concept
- What regzbot does once it's aware of a regression
- What regzbot does with the gathered data
- Interacting with regzbot
- Commands to be sent as a reply to the report
- commands to make regzbot track a regression
- commands to update properties of a tracked regression
- commands to point to related discussion, reports and webpages
- commands to resolve a regzbot entry
- commands users and developers normally shouldn’t use
- Commands regzbot accepts everywhere it looks
- backlinks
- tags users and developers normally shouldn't use
Note: this document explains the regzbot concept and all options; if you want something easier and quicker to consume, head over to 'getting started with regzbot'
Basic concept
Regzbot is a bot watching mailing lists and Git trees to track Linux kernel regressions from report to elimination, to ensure none fall through the cracks unnoticed. It tries to impose as little overhead as possible on reporters and developers, but needs two things to do everything automatically:
- someone needs to tell regzbot when a mail contains a regression report
- the fix and other related discussions need to link to the mail with the report
https://gitlab.com/knurd42/regzbot/-/blob/main/docs/reference.md
The regzbot project started with funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871528.
https://linux-regtracking.leemhuis.info/about/
thx for sponsoring my current efforts:
I keep an eye on things with regzbot — far from perfect, but a lot better than nothing!
is regression tracking worth it?
• Linus seems to like it
• many others said it's great that I do it
• helped getting quite a few fixes in at the last minute
• regularly brings unfixed regressions back to developers' attention
I'd say regression tracking is definitely worth it
[but I'm obviously biased]
[and there is always something to improve]
[ finally() ]
regressions shall be fixed
my regression tracking tries to ensure that really happens
hence let the regressions list know about regressions! [and ideally regzbot, too]
contact me if you need help or advice with anything wrt regressions
developers,
take regression reports seriously
handling-regressions.rst is your guide
if you break something, you need to fix it
users,
this gives you a pretty long lever [but better ensure you use it properly]
reporting-regressions.rst is your guide
bisected the regression
bisected the regression with vanilla kernels
finally()
bisected the regression with vanilla kernels using a similar configuration
finally()
in initial report, is totally fine to just offer bisecting!
finally()
remember: there is no "us versus them" here
finally()
no "users" vs. "developers"
finally()
we are in this together!
Thorsten Leemhuis
mail: linux@leemhuis.info
GPG Key: 0x72B6E6EF4C583D2D
#fediverse: @kernellogger@fosstodon.org (en),
@knurd42@social.linux.pizza (en)
#EOF
Why Study Assembler?
Useful to know assembly language because ...
- sometimes you are required to use it (e.g. device handlers)
- improves your understanding of how compiled programs execute
- very helpful when debugging
- understand performance issues better
- performance tweaking (squeezing out last pico-s)
- re-write that performance critical code in assembler
CPU Architecture
A typical modern CPU has
- a set of data registers
- a set of control registers (incl PC)
- an arithmetic-logic unit (ALU)
- access to memory (RAM)
- a set of simple instructions
- transfer data between memory and registers
- push values through the ALU to compute results
- make tests and transfer control of execution
Different types of processors have different configurations of the above
Fetch-Execute Cycle
All CPUs have program execution logic like:
```c
uint32_t pc = STARTING_ADDRESS;         // program counter

while (1) {
    uint32_t instruction = memory[pc];  // fetch next instruction
    pc++;                               // move to next instr
    if (instruction == HALT) {
        break;
    } else {
        execute(instruction);           // decode and execute
    }
}
```
pc = Program Counter, a CPU register which tracks execution
Note that some instructions may modify pc (e.g. JUMP)
**Fetch-Execute Cycle**
Executing an instruction involves:
- determine what the operator is
- determine which registers, if any, are involved
- determine which memory location, if any, is involved
- carry out the operation with the relevant operands
- store result, if any, in appropriate register
Example instruction encodings (not from a real machine):
<table>
<thead>
<tr>
<th>Instruction</th>
<th>R1</th>
<th>R2</th>
<th>R3</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADD</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>LOAD</td>
<td>R4</td>
<td>0x10004</td>
<td></td>
</tr>
</tbody>
</table>
**Assembly Language**
Instructions are simply bit patterns within a 32-bit bit-string.
Could specify machine code as a sequence of hex digits, e.g.
<table>
<thead>
<tr>
<th>Address</th>
<th>Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>0x100000</td>
<td>0x3c041001</td>
</tr>
<tr>
<td>0x100004</td>
<td>0x34020004</td>
</tr>
<tr>
<td>0x100008</td>
<td>0x0000000c</td>
</tr>
<tr>
<td>0x10000C</td>
<td>0x03e00008</td>
</tr>
</tbody>
</table>
Assembly language is a symbolic way of specifying machine code:
- write instructions using mnemonics rather than hex codes
- refer to registers using either numbers or names
- can associate names to memory addresses
**MIPS Architecture**
MIPS is a well-known and simple architecture:
- historically used everywhere from supercomputers to PlayStations, ...
- still popular in some embedded fields e.g. modems, TVs
- but being out-competed by ARM (in phones, ...)
We consider the MIPS32 version of the MIPS family:
- qtspim ... provides a GUI front-end, useful for debugging
- spim ... command-line based version, useful for testing
- xspim ... GUI front-end, useful for debugging, only in CSE labs
Source code for browsing under `/home/cs1521/spim`
**MIPS Instructions**
MIPS has several classes of instructions:
- load and store ... transfer data between registers and memory
- computational ... perform arithmetic/logical operations
- jump and branch ... transfer control of program execution
- coprocessor ... standard interface to various co-processors
- special ... miscellaneous tasks (e.g. syscall)
And several addressing modes for each instruction:
- between memory and register (direct, indirect)
- constant to register (immediate)
- register + register + destination register
MIPS Instructions
MIPS instructions are 32-bits long, and specify ...
• an operation (e.g. load, store, add, branch, ...)
• one or more operands (e.g. registers, memory addresses, constants)
Some possible instruction formats
<table>
<thead>
<tr>
<th>OPCODE</th>
<th>R1</th>
<th>R2</th>
<th>R3</th>
<th>unused</th>
</tr>
</thead>
<tbody>
<tr>
<td>6 bits</td>
<td>5 bits</td>
<td>5 bits</td>
<td>5 bits</td>
<td>11 bits</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>OPCODE</th>
<th>R1</th>
<th>Memory Address or Constant Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>6 bits</td>
<td>5 bits</td>
<td>21 bits</td>
</tr>
</tbody>
</table>
Examples MIPS Assembler
lw $t1, address # reg[t1] = memory[address]
sw $t3, address # memory[address] = reg[t3]
la $t1, address # reg[t1] = address
lui $t2, const # reg[t2] = const << 16
and $t0, $t1, $t2 # reg[t0] = reg[t1] & reg[t2]
add $t0, $t1, $t2 # reg[t0] = reg[t1] + reg[t2]
# add signed 2's complement ints
addi $t2, $t3, 5 # reg[t2] = reg[t3] + 5
# add immediate, no sub immediate
mult $t3, $t4 # (Hi,Lo) = reg[t3] * reg[t4]
seq $t7, $t1, $t2 # reg[t7] = (reg[t1] == reg[t2])
j label # PC = label
beq $t1, $t2, label # PC = label if reg[t1] == reg[t2]
nop # do nothing
MIPS Architecture
MIPS CPU has
• 32 general purpose registers (32-bit)
• 16/32 floating-point registers (for float/double)
• PC ... 32-bit register (always aligned on 4-byte boundary)
• HI,LO ... for storing results of multiplication and division
Registers can be referred to as $0..31 or by symbolic names
Some registers have special uses e.g.
• register $0 always has value 0, cannot be written
• registers $1, $26, $27 reserved for use by system
More details on following slides ...
MIPS Architecture - Integer Registers
<table>
<thead>
<tr>
<th>Number</th>
<th>Names</th>
<th>Usage</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>$zero</td>
<td>Constant 0</td>
</tr>
<tr>
<td>1</td>
<td>$at</td>
<td>Reserved for assembler</td>
</tr>
<tr>
<td>2,3</td>
<td>$v0,$v1</td>
<td>Expression evaluation and results of a function</td>
</tr>
<tr>
<td>4..7</td>
<td>$a0..$a3</td>
<td>Arguments 1-4</td>
</tr>
<tr>
<td>8..15</td>
<td>$t0..$t7</td>
<td>Temporary (not preserved across function calls)</td>
</tr>
<tr>
<td>16..23</td>
<td>$s0..$s7</td>
<td>Saved temporary (preserved across function calls)</td>
</tr>
<tr>
<td>24,25</td>
<td>$t8,$t9</td>
<td>Temporary (not preserved across function calls)</td>
</tr>
<tr>
<td>26,27</td>
<td>$k0,$k1</td>
<td>Reserved for OS kernel</td>
</tr>
<tr>
<td>28</td>
<td>$gp</td>
<td>Pointer to global area</td>
</tr>
<tr>
<td>29</td>
<td>$sp</td>
<td>Stack pointer</td>
</tr>
<tr>
<td>30</td>
<td>$fp</td>
<td>Frame pointer</td>
</tr>
<tr>
<td>31</td>
<td>$ra</td>
<td>Return address (used by function call instruction)</td>
</tr>
</tbody>
</table>
• Except for registers 0 and 31, these uses are only conventions.
• Conventions allow compiled code from different sources to be combined (linked).
• Most of these conventions are irrelevant when you are writing small MIPS assembly code programs.
• But use registers 8..23 for holding values.
• Definitely do not use 0,1 and 31.
MIPS Architecture - floating point registers
<table>
<thead>
<tr>
<th>Reg</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>$f0..$f2</td>
<td>hold return value of functions which return floating-point results</td>
</tr>
<tr>
<td>$f4..$f10</td>
<td>temporary registers; not preserved across function calls</td>
</tr>
<tr>
<td>$f12..$f14</td>
<td>used for first two double-precision function arguments</td>
</tr>
<tr>
<td>$f16..$f18</td>
<td>temporary registers; used for expression evaluation</td>
</tr>
<tr>
<td>$f20..$f30</td>
<td>saved registers; value is preserved across function calls</td>
</tr>
</tbody>
</table>
Notes:
- floating point registers can be used as 32 32-bit registers or 16 64-bit registers
- for 64-bit use even numbered registers
Data and Addresses
All operations refer to data, either
- in a register
- in memory
- literally (i.e. constant)
Computation operations refer to registers or constants. Only load/store instructions refer to memory.
To access registers, you can also use $name:
- e.g. $zero == $0, $t0 == $8, $fp == $30, ...
To refer to literals, use C-like constants:
- 1 3 -1 -2 12345 0x1 0xFFFFFFFF
- "a string" 'a' 'b' '1' '\n' '\0'
Memory Addressing Modes
Ways of specifying memory addresses:
<table>
<thead>
<tr>
<th>Format</th>
<th>Address referred to</th>
</tr>
</thead>
<tbody>
<tr>
<td>(reg)</td>
<td>contents of register</td>
</tr>
<tr>
<td>imm</td>
<td>immediate (= constant)</td>
</tr>
<tr>
<td>imm(reg)</td>
<td>immediate + contents of register</td>
</tr>
<tr>
<td>sym</td>
<td>address of symbol (= name)</td>
</tr>
<tr>
<td>sym(reg)</td>
<td>address of symbol + reg contents</td>
</tr>
<tr>
<td>sym +/- imm</td>
<td>address of symbol +/- immediate</td>
</tr>
<tr>
<td>sym +/- imm(reg)</td>
<td>sym address +/- (imm + reg contents)</td>
</tr>
</tbody>
</table>
Describing MIPS Assembler Operations
- An address refers to a memory cell.
- Mem[addr] = contents of cell at address addr.
- &name = location of memory cell for name.
- Registers are denoted:
- $R_d$ destination register (where result goes)
- $R_s$ source register #1 (where data comes from)
- $R_t$ source register #2 (where data comes from)
- Reg[R] = contents of register R.
- Data transfer is denoted by <-.
```mips
lw $7, buffer # Reg[7] <- Mem[buffer]
```
### Setting Register
- **li** Rd, imm ... load immediate: Reg[Rd] <- imm
- **la** Rd, addr ... load address: Reg[Rd] <- addr
- **move** Rd, Rs ... move data reg-to-reg: Reg[Rd] <- Reg[Rs]
#### Setting Register
- The **li** (load immediate) instruction is used to set a register to a constant value, e.g.
  li Rd, 42      # Rd = 42
  li Rd, 0x2a    # Rd = 42
  li Rd, '*'     # Rd = 42 (ASCII code of '*')
- The **move** instruction is used to set a register to the same value as another register, e.g.
  move Rd, Rs    # assign to Rd the value in Rs
- Note the destination is the first register.
### Setting A Register to An Address
- The **la** (load address) instruction is used to set a register to a labelled memory address, e.g.
  la Rd, start
- The memory address will be fixed before the program is run, so this differs only syntactically from the **li** instruction.
- For example, if vec is the label for memory address 0x10000100 then these two instructions are equivalent:
  la Rd, vec
  li Rd, 0x10000100
- In both cases the constant is encoded as part of the instruction.
- Neither **la** nor **li** accesses memory; they are very different from the **lw** instruction.
### Pseudo Instructions
- Both **la** and **li** are pseudo-instructions provided by the assembler for user convenience.
- The assembler translates these pseudo-instructions into instructions actually implemented by the processor.
- For example, li Rd, 15 might be translated to addi Rd, $0, 15.
- If the constant is large, the assembler will need to translate a li/la instruction into two actual instructions.
Accessing Memory
These instructions move data between memory and the CPU. Quantities of 1, 2 and 4 bytes (8, 16 and 32 bits) can be moved. There are two operands: the register that supplies/receives the value, and the memory address. For the 1- and 2-byte operations, the low (least significant) bits of the register are used.
lw Rd, addr        load word (32 bits)
    Reg[Rd] <- Mem[addr..addr+3]
sw Rs, addr        store word (32 bits)
    Mem[addr..addr+3] <- Reg[Rs]
lh Rd, addr        load half-word (16 bits)
    Reg[Rd] <- Mem[addr..addr+1]
sh Rs, addr        store half-word (16 bits)
    Mem[addr..addr+1] <- Reg[Rs]
lb Rd, addr        load byte (8 bits)
    Reg[Rd] <- Mem[addr]
sb Rs, addr        store byte (8 bits)
    Mem[addr] <- Reg[Rs]
Addressing Modes
Examples of load/store and addressing:
```
main:
la $t0, vec # reg[t0] = &vec[0]
li $t1, 5 # reg[t1] = 5
sw $t1, ($t0) # vec[0] = reg[t1]
li $t1, 13 # reg[t1] = 13
sw $t1, 4($t0) # vec[1] = reg[t1]
li $t1, -7 # reg[t1] = -7
sw $t1, 8($t0) # vec[2] = reg[t1]
li $t2, 12 # reg[t2] = 12
li $t1, 42 # reg[t1] = 42
sw $t1, vec($t2) # vec[3] = reg[t1]
jr $ra # return
.data
# 16 bytes of storage
vec: .space 16 # int vec[4];
```
Operand Sizes
MIPS instructions can manipulate different-sized operands
- single bytes, two bytes ("halfword"), four bytes ("word")
Many instructions also have variants for signed and unsigned
Leads to many opcodes for a (conceptually) single operation, e.g.
- LB ... load one byte from specified address
- LBU ... load unsigned byte from specified address
- LH ... load two bytes from specified address
- LHU ... load unsigned 2-bytes from specified address
- LW ... load four bytes (one word) from specified address
- LA ... load the specified address
All of the above specify a destination register
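As a quick illustration of the signed/unsigned distinction, here is a minimal sketch (label and value invented for this example): loading the byte value 255 (0xff) with lb sign-extends it, while lbu zero-extends it.

```mips
main:
    lb   $t0, b1       # reg[t0] = 0xffffffff (-1, sign-extended)
    lbu  $t1, b1       # reg[t1] = 0x000000ff (255, zero-extended)
    jr   $ra           # return
.data
b1: .byte 255          # a single byte in memory, i.e. 0xff
```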
Arithmetic Instructions
```
add Rd, Rs, Rt     add
    Reg[Rd] <- Reg[Rs] + Reg[Rt]
add Rd, Rs, imm    add
    Reg[Rd] <- Reg[Rs] + imm
sub Rd, Rs, Rt     subtract
    Reg[Rd] <- Reg[Rs] - Reg[Rt]
mul Rd, Rs, Rt     multiply
    Reg[Rd] <- Reg[Rs] * Reg[Rt]
div Rd, Rs, Rt     divide
    Reg[Rd] <- Reg[Rs] / Reg[Rt]
rem Rd, Rs, Rt     remainder
    Reg[Rd] <- Reg[Rs] % Reg[Rt]
neg Rd, Rs         negate
    Reg[Rd] <- -Reg[Rs]
```
All arithmetic is signed (2’s-complement).
The second operand (Rt) can be replaced by a constant in all the above instructions.
Unsigned versions of instructions are available
e.g. addu, subu, mulu, divu, ...
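For example, a small sketch (values and register choices arbitrary) computing (a + b) * c with the instructions above:

```mips
main:
    li   $t0, 6          # reg[t0] = a = 6
    li   $t1, 7          # reg[t1] = b = 7
    li   $t2, 2          # reg[t2] = c = 2
    add  $t3, $t0, $t1   # reg[t3] = a + b = 13
    mul  $t3, $t3, $t2   # reg[t3] = (a + b) * c = 26
    jr   $ra             # return with result in $t3
```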
Logic Instructions
and Rd, Rs, Rt     logical and
    Reg[Rd] <- Reg[Rs] & Reg[Rt]
and Rd, Rs, imm    logical and
    Reg[Rd] <- Reg[Rs] & imm
or Rd, Rs, Rt      logical or
    Reg[Rd] <- Reg[Rs] | Reg[Rt]
not Rd, Rs         logical not
    Reg[Rd] <- ~Reg[Rs]
xor Rd, Rs, Rt     logical xor
    Reg[Rd] <- Reg[Rs] ^ Reg[Rt]
All of these instructions can use imm instead of Rt.
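A brief sketch (mask values arbitrary) of the typical bit-masking use of these instructions:

```mips
main:
    li   $t0, 0x1234abcd   # reg[t0] = some word
    and  $t1, $t0, 0xff    # reg[t1] = low byte only (0xcd)
    or   $t2, $t0, 0x1     # reg[t2] = word with bit 0 set
    xor  $t3, $t0, $t0     # reg[t3] = 0, since x ^ x == 0
    jr   $ra
```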
Bit Manipulation Instructions
sll Rd, Rs, Rt     shift left logical
    Reg[Rd] <- Reg[Rs] << Reg[Rt]
sll Rd, Rs, imm    shift left logical
    Reg[Rd] <- Reg[Rs] << imm
srl Rd, Rs, imm    shift right logical
    Reg[Rd] <- Reg[Rs] >> imm (zero fill)
sra Rd, Rs, imm    shift right arithmetic
    Reg[Rd] <- Reg[Rs] >> imm (sign extend)
rol Rd, Rs, imm    rotate left
    Reg[Rd] <- Reg[Rs] rotated left by imm
ror Rd, Rs, imm    rotate right
    Reg[Rd] <- Reg[Rs] rotated right by imm
All of these instructions can use Rt instead of imm.
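Shifts give cheap multiplication and division by powers of two; a small sketch (values arbitrary):

```mips
main:
    li   $t0, 20           # reg[t0] = 20
    sll  $t1, $t0, 3       # reg[t1] = 20 << 3 = 160 (i.e. 20 * 8)
    li   $t2, -20          # reg[t2] = -20
    sra  $t3, $t2, 2       # reg[t3] = -20 >> 2 = -5 (sign preserved)
    srl  $t4, $t0, 2       # reg[t4] = 20 >> 2 = 5 (zero fill)
    jr   $ra
```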
Jump Instructions
Jumps control flow of program execution.
j label            jump to location
    PC <- &label
jal label          jump and link
    Reg[ra] <- PC ; PC <- &label
jr Rs              jump via register
    PC <- Reg[Rs]
jalr Rs            jump and link via reg
    Reg[31] <- PC ; PC <- Reg[Rs]
Branch Instructions
Branches combine testing and jumping.
beq Rs, Rt, label   branch on equal
    if (Reg[Rs] == Reg[Rt]) PC <- label
bne Rs, Rt, label   branch on not equal
    if (Reg[Rs] != Reg[Rt]) PC <- label
blt Rs, Rt, label   branch on less than
    if (Reg[Rs] < Reg[Rt]) PC <- label
ble Rs, Rt, label   branch on less or equal
    if (Reg[Rs] <= Reg[Rt]) PC <- label
bgt Rs, Rt, label   branch on greater than
    if (Reg[Rs] > Reg[Rt]) PC <- label
bge Rs, Rt, label   branch on greater or equal
    if (Reg[Rs] >= Reg[Rt]) PC <- label
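Branches plus jumps are how loops are built; a sketch (register choices arbitrary) of `for (i = 1; i <= 10; i++) sum += i;`:

```mips
main:
    li   $t0, 0            # reg[t0] = sum = 0
    li   $t1, 1            # reg[t1] = i = 1
    li   $t2, 10           # reg[t2] = loop limit
loop:
    bgt  $t1, $t2, done    # if (i > 10) exit the loop
    add  $t0, $t0, $t1     # sum += i
    addi $t1, $t1, 1       # i++
    j    loop              # next iteration
done:
    jr   $ra               # return with sum (55) in $t0
```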
MIPS Instruction Set
Implementation of pseudo-instructions:
<table>
<thead>
<tr>
<th>What you write</th>
<th>Machine code produced</th>
</tr>
</thead>
<tbody>
<tr>
<td>li $t5, const</td>
<td>ori $t5, $0, const</td>
</tr>
<tr>
<td>la $t3, label</td>
<td>lui $at, label[31..16]</td>
</tr>
<tr>
<td></td>
<td>ori $t3, $at, label[15..0]</td>
</tr>
<tr>
<td>bge $t1, $t2, label</td>
<td>slt $at, $t1, $t2</td>
</tr>
<tr>
<td></td>
<td>beq $at, $0, label</td>
</tr>
<tr>
<td>blt $t1, $t2, label</td>
<td>slt $at, $t1, $t2</td>
</tr>
<tr>
<td></td>
<td>bne $at, $0, label</td>
</tr>
</tbody>
</table>
Note: use of $at register for intermediate results
MIPS vs SPIM
MIPS is a machine architecture, including instruction set
SPIM is an emulator for the MIPS instruction set
• reads text files containing instruction + directives
• converts to machine code and loads into "memory"
• provides debugging capabilities
▶ single-step, breakpoints, view registers/memory, ...
• provides mechanism to interact with operating system (syscall)
Also provides extra instructions, mapped to MIPS core set
• provide convenient/mnemonic ways to do common operations
• e.g. move $s0,$v0 rather than addu $s0,$0,$v0
Using SPIM
Three ways to execute MIPS code with SPIM
• spim ... command line tool
▶ load programs using -file option
▶ interact using stdin/stdout via login terminal
• qtspim ... GUI environment
▶ load programs via a load button
▶ interact via a pop-up stdin/stdout terminal
• xspim ... GUI environment
▶ similar to qtspim, but not as pretty
▶ requires X-windows server
[screenshots: the spim command-line tool and the qtspim GUI]
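For example, from the command line one might run `spim -file hello.s`, which loads the program, executes it, and prints its output to the terminal (using the -file option mentioned above).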
System Calls
The SPIM interpreter provides I/O and memory allocation via the syscall instruction.
<table>
<thead>
<tr>
<th>Service</th>
<th>n</th>
<th>Arguments</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>printf("%d")</td>
<td>1</td>
<td>int in $a0</td>
<td></td>
</tr>
<tr>
<td>printf("%f")</td>
<td>2</td>
<td>float in $f12</td>
<td></td>
</tr>
<tr>
<td>printf("%lf")</td>
<td>3</td>
<td>double in $f12</td>
<td></td>
</tr>
<tr>
<td>printf("%s")</td>
<td>4</td>
<td>$a0 = string</td>
<td></td>
</tr>
<tr>
<td>scanf("%d")</td>
<td>5</td>
<td>-</td>
<td>int in $v0</td>
</tr>
<tr>
<td>scanf("%f")</td>
<td>6</td>
<td>-</td>
<td>float in $f0</td>
</tr>
<tr>
<td>scanf("%lf")</td>
<td>7</td>
<td>-</td>
<td>double in $f0</td>
</tr>
<tr>
<td>fgets</td>
<td>8</td>
<td>buffer address in $a0, length in $a1</td>
<td>address in $v0</td>
</tr>
<tr>
<td>sbrk</td>
<td>9</td>
<td>nbytes in $a0</td>
<td>address in $v0</td>
</tr>
<tr>
<td>exit</td>
<td>10</td>
<td>-</td>
<td></td>
</tr>
</tbody>
</table>
All input/output via stdin/stdout ... no files.
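A minimal sketch using the table above: print an integer with service 1, then terminate with service 10.

```mips
main:
    li   $a0, 42           # argument: the int to print
    li   $v0, 1            # service 1 = print integer
    syscall                # prints "42"
    li   $v0, 10           # service 10 = exit
    syscall                # terminate the program
```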
MIPS (SPIM) memory layout
<table>
<thead>
<tr>
<th>Region</th>
<th>Address</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>text</td>
<td>0x00400000</td>
<td>instructions only; read-only; cannot expand</td>
</tr>
<tr>
<td>data</td>
<td>0x10000000</td>
<td>data objects; read/write; can be expanded</td>
</tr>
<tr>
<td>stack</td>
<td>0x7fffffff</td>
<td>grows down from that address; read/write</td>
</tr>
<tr>
<td>k_text</td>
<td>0x80000000</td>
<td>kernel code; read-only</td>
</tr>
<tr>
<td>k_data</td>
<td>0x90000000</td>
<td>only accessible in kernel mode</td>
</tr>
</tbody>
</table>
MIPS Assembly Language
MIPS assembly language programs contain
- comments ... introduced by #
- labels ... appended with :
- directives ... symbol beginning with .
- assembly language instructions
Programmers need to specify
- data objects that live in the data region
- functions (instruction sequences) that live in the code/text region
Each instruction or directive appears on its own line
Example MIPS assembler program
```mips
# hello.s ... print "Hello, MIPS"
main:
la $a0, msg # load the argument string
li $v0, 4 # load the system call (print)
syscall # print the string
jr $ra # return to caller (__start)
.data # the data segment
msg: .asciiz "Hello, MIPS\n"
```
Structure of Simple MIPS programs
```mips
# Prog.s ... comment giving description of function
# Author ...
main: # indicates start of code
# (i.e. first user instruction to execute)
# ...
.data # variable declarations follow this line
# ...
# End of program; leave a blank line to make SPIM happy
```
Assembler Directives
Directives (instructions to assembler, not MIPS instructions)
```mips
.text              # following instructions placed in text
.data              # following objects placed in data
.globl             # make symbol available globally
a:  .space 18      # uchar a[18]; or uint a[4];
    .align 2       # align next object on 4-byte (2^2) boundary
i:  .word 2        # unsigned int i = 2;
v:  .word 1,3,5    # unsigned int v[3] = {1,3,5};
h:  .half 2,4,6    # unsigned short h[3] = {2,4,6};
b:  .byte 1,2,3    # unsigned char b[3] = {1,2,3};
f:  .float 3.14    # float f = 3.14;
s:  .asciiz "abc"  # char s[4] = {'a','b','c','\0'};
t:  .ascii "abc"   # char t[3] = {'a','b','c'};
```
AN INTERACTIVE PLANNING SYSTEM
Technical Note 245
July 1, 1981
By: David E. Wilkins
Computer Scientist
Ann E. Robinson
Senior Computer Scientist
Artificial Intelligence Center
The research reported here is supported by Air Force Office of Scientific Research Contract F49620-79-C-0188 (8871) and by Office of Naval Research Contract N00014-80-C-0300 (1349).
ABSTRACT
A principal goal of our planning and plan execution research is to develop a computer system that interacts with a person planning some activity. The system, designed to be independent of the problem area in which the planning takes place, will allow the person to (1) represent the problem area and the actions that may be performed in it; (2) explore alternative plans for performing the activity; (3) monitor the execution of a plan so produced, and (4) modify the plan as needed during its execution. The system currently being tested allows a person to produce a plan interactively, suggesting alternative actions, showing the effects of actions on the situation, checking for problems in the plan, and (occasionally) suggesting corrections for such problems. The plan is represented as a hierarchy of actions linked together in a network, generally called a "procedural network".
Current areas of investigation include the following: (1) development of representations for encoding information about a given problem area, stressing the representation of actions that may be performed in it; (2) development of computational methods for identifying difficulties in a plan, such as the overallocation of a resource or the possible effect of one action on the successful performance of subsequent actions; (3) development of strategies for deciding which actions and action sequences should be included in a plan; (4) development of effective communication with the user, including determining which and how much information should be communicated, and how best to present it.
1. Introduction
A principal goal of our planning and plan execution research is to develop a computer system that interacts with a person planning some activity and monitoring the execution of a plan thus produced. The system is designed to be independent of the particular problem area or the activity being planned.¹ Our research builds directly on previous planning research [2] [6] [7] [8] and on research in natural-language dialogs about tasks [3] [4]. The basic approach to planning is to work within the hierarchical-planning paradigm, representing plans in procedural networks, as has been done in NOAH [6] and other systems.
In developing a planning system our approach has been to view it as an evolutionary process in which the computer plays an increasingly larger role in decision-making activities. The first tasks to be automated are those that have been the more traditional duties of computers: storing and giving back information. For a planning system such as described here, these include encoding information about what actions can be performed, when they are allowed, and what their effects are, and keeping track of the actions planned and their effects. The next tasks to be automated include adding procedures to identify the decisions to be made, making the simpler ones, and asking the user to make the more difficult ones. As computational techniques for more sophisticated decision-making are developed and refined, more complex decision-making can be automated in the planning system, which can then make decisions such as which steps to perform, what order to perform them in, and how to accommodate unanticipated changes or failures. The system described here is in the middle stage of development.
This evolutionary approach has several advantages. From the planning point of view it allows us to address larger, "real-world" problems, which may initially be beyond the capabilities of fully automatic planning techniques, but which could provide interesting
¹The research reported here is supported by Air Force Office of Scientific Research Contract F49620-79-C-0188 and by Office of Naval Research Contract N00014-80-C-0300.
research problems. Development of an interactive planner also encourages us to deal with the issue of representing the planning problem in terms that can be easily communicated to a user. Clearly, ease of communication is crucial for an interactive system in which the person and computer must cooperate closely. It is also important for a system in which planning is automatic, because the machine must still be able to communicate about the planning that it has performed, and this requires that the machine have a representation of the problem in terms that are familiar to a person. An interactive system such as described here also provides an opportunity to further explore issues in human-machine interaction, such as deciding what information to communicate, how best to present it, and what medium to use (text, graphics, speech, etc.). In this note, we emphasize planning issues, particularly those that are important from the perspective of building an interactive system.
We have designed and implemented a system, SIPE (System for Interactive Planning and Execution monitoring), that supports interactive planning. Unlike its predecessors, SIPE is designed to allow interaction with users throughout the planning and plan execution processes. The user is able to watch and, when desired, guide and/or control the planning process. Our long-range goal is to fully automate the processes of planning and execution monitoring.

Development of the basic planning system has led to several extensions of previous systems. These include the development of a perspicuous formalism for encoding descriptions of actions, the use of constraints to partially describe objects, the creation of mechanisms that permit concurrent exploration of alternative plans, the incorporation of heuristics for reasoning about resources, and mechanisms that make it possible to perform simple deductions. Section 2 describes each capability in more detail, while Sections 3, 4, and 5 present examples from the SIPE program to illustrate how problems are solved.
The execution-monitoring part of SIPE has not been fully implemented, although work on it is in progress. During execution of a plan, some person or computer system monitoring the execution can specify what actions have been performed and what changes have occurred in the domain being modeled. Based on this, the plan can be updated interactively to cope with unanticipated occurrences. Planning and plan execution can be intermixed by producing a plan for part of an activity and then executing some or all of that plan before elaborating on the remaining portion.
2. Extensions of Previous Research
We are extending planning research toward several major objectives; among them are the following:
- The development of more flexible and uniform representations in which partial descriptions of objects and their properties can be specified and the objects treated as resources.
- The ability to explore several alternatives in parallel.
- The introduction of deductive operators for making deductions about the system's representation of the state of the problem domain.
- The ability to interact with the user, communicating both in terms of text and graphical representations of the plans.
2.1 Representation of Actions and Objects
We have developed a formalism for representing the actions that can take place both in the domain and during the planning process. (See [5] for more details.) Action descriptions (often referred to as operators), procedural networks, and information about objects in the domain and their interrelationships are represented in the same formalism: a hierarchy of nodes with attributes. This uniform representation makes it possible to encode partial descriptions of unspecified objects and of objects in the domain model. Thus, operators that refer to abstract (unbound) objects can be represented in the same formalism as procedural network nodes that refer to specific objects in the domain model.
2.1.1 Formalism for Describing Actions
Operators representing actions contain information about the objects that participate in the action (represented as resources and arguments of the action), what the action is attempting to achieve (its goal), the action's effects when it is performed, and the conditions necessary before the action can be performed (its preconditions). The action's effects, preconditions, and goal are all encoded as first-order predicates on variables and objects in the domain. Negated predicates that occur in the effects of a plan essentially remove from the model a fact that was true before but is no longer true. The current system makes the closed-world assumption: any negated predicate is true unless the unnegated form of the predicate is explicitly given in the model or in the effects of an action that has been performed. This is not critical; the system could be changed to assume that a predicate's truth-value is unknown unless an explicit mention of the predicate is found in either negated or unnegated form.

Operators also contain a plot that specifies how the action is to be performed. When used by the planning system, the plot can be viewed as instructions for expanding a node in the procedural network to a greater level of detail. The plot of an operator can be described either in terms of goals to be achieved (i.e., a predicate to make true), or in terms of processes to be invoked (i.e., an action to perform). (Most previous systems have represented a process as a goal with only a single choice of action.) Encoding a step as a process implies that only that action can be taken at that point, while encoding a step as a goal implies that any action can be taken that will achieve the goal. Another less explicit difference between encoding a step as a goal or as a process is whether the emphasis is on the situation to be achieved or the actual action being performed.
During planning, an operator is applied to an already existing GOAL or PROCESS node in the procedural network to produce another procedural network at the next level of detail. In Figure 1, the SECURE PUMPBOLTS operator is applied to the GOAL node shown to produce the three-node expansion. (Figure 1 shows only part of the information at each node.) Operators may specify preconditions that must obtain in the world state before the operator can be applied. (The operator in Figure 1 has no precondition.) Operators contain lists of resources and arguments to be matched with the resources and arguments of the node being expanded. In Figure 1, PUMP1 in the operator is matched with the PUMP in the goal node. The plot of the operator is not shown in the figure, but it contains a template for generating the three PROCESS nodes and their effects.
We have already encoded domain operators describing actions that can be performed in several problem areas; eventually planning operators will also be encoded. Domain operators provide the planning system with information required to produce a plan for some activity. Planning operators provide the planning system with information so it can reason about its own planning process (metaplanning). They also furnish a major portion of the interface between the planning system and the user, who will be able to direct the planning process by invoking various planning operators.

Uniformity of representation for domain information, specific plans of action, and all operators is expected to facilitate both the user's ability to interact with and control the planning system, and the system's ability to incorporate (learn) new operators from plans it has already produced. Examples of domain operators are given and discussed in Section 3; Section 4 discusses the advantages of the representation.
2.1.2 Partially Described Objects
Partial descriptions of unspecified objects can be viewed as setting constraints on the possible values of a variable representing an object. Partial descriptions include properties the object must have as well as relationships that must exist between that object and other objects (i.e., predicates that must be satisfied in a certain world state). SIPE provides a general language for expressing these constraints on variable bindings so they can be encoded as part of the operator. During planning, the system also generates constraints based on information in the operators, propagates them to variables in related parts of the network, and finds variable bindings that satisfy all constraints. Section 3 contains examples of constraints in operators; these are then discussed in Section 4.
Stefik [7] has used constraints to partially describe objects during planning in the domain of molecular genetics. Our system extends Stefik's approach in two ways. (1) Constraints on variables can be evaluated before the variables are fully instantiated. For example, a set can be created that can be constrained to be only bolts, then to be longer than one inch and shorter than two inches, and then to have hex heads. This set can be used in planning before its members are identified in the domain. (2) Partial descriptions can vary with the context, thus permitting simultaneous consideration of alternative plans involving the same unidentified objects.
2.1.3 Objects Identified as Resources
Objects associated with an action can be characterized as resources that are to be used during a particular action and then released, e.g., a saw used during a cutting action. Since this is a common phenomenon and since it is often difficult or awkward to keep track of resources in current planning systems, we have included in the formalism a means of specifying the objects that serve as resources for an action. Declaration of a resource associated with an action connotes that one precondition of the action is that the resource be available. Mechanisms in the planning system, as they allocate and deallocate resources, automatically ensure that these preconditions will be satisfied. This form of reasoning about resources is a useful heuristic for quickly identifying and correcting problems in plans. Section 4 contains an example of the use of this heuristic and describes it in more detail.
2.1.4 Purposes
In the procedural networks that represent plans, PROCESS and GOAL nodes represent an action to be performed or a goal to be achieved. Associated with these nodes are predicates stating the expected effects of performing the action or achieving the goal. When a node is planned to a greater level of detail by applying an operator, the expansion may consist of many nodes. Which node in the expansion is the main purpose of that sequence of steps must be determined, to ascertain when the effects of the higher-level node become true in the more detailed expansion. (This may be the last in a series of nodes that, acting together, achieve the tacit "purpose" of the expansion.)
For example, let us consider the GOAL node in Figure 1 in which the effect is (SECURED PUMP). The SECURE PUMPBOLTS operator expands this node into three nodes at the next level of detail, as shown in the figure. The first node might be called a preparatory action, and the last a cleanup action. Somewhere must be encoded the fact that (SECURED PUMP) becomes true after the second node in the expansion. This is needed, for example, in answering user questions or determining the correct state at the "put wrench away" node (perhaps some operators for putting the wrench away may be affected by or depend upon the state of the pump).
In the example, the "tighten bolts" node accomplishes the repair of the pump, while the "put wrench away" node is a cleanup action that leaves objects in their normative states. Operators in SIPE have a PURPOSE attribute that specifies the main purpose of their expansion. Thus, the PURPOSE of the SECURE PUMPBOLTS operator is (TIGHT PUMPBOLTS). This is included in the effects of the "tighten bolts" node produced in the plan, so the (SECURED PUMP) effect is copied down to this node. In NOAH the assumption was that the last node of an expansion achieved the main purpose, and so the effects were copied down to that node. In the example above, NOAH would incorrectly attach (SECURED PUMP) to the "put wrench away" node. SIPE allows flexibility in specifying purposes, so that situations like the one described above can be represented accurately.
2.2 Exploring Alternatives in Parallel
A context mechanism has been developed to allow constraints on a variable's value to be established relative to specific plan steps. Constraints on a variable's value, as well as the binding of a variable to a particular instance (possibly determined during the solution of a general constraint-satisfaction problem), can be retrieved only relative to a particular context. This permits the user to shift focus back and forth easily between alternatives.
SIPE accomplishes this in a hierarchical procedural-network paradigm by introducing CHOICE nodes in the procedural networks at each place an alternative can occur. Attributes of nodes and their values are stored relative to choice points. Thus, the constraints on a variable at a given point in a plan can be accessed by specifying the path of choices in the plan that is to be followed to reach that point. Different constraints can be retrieved by specifying a different plan (path of choices). This shifting of focus between alternatives cannot be done in systems using a backtracking algorithm, in which descriptions built up during expansion of one alternative are removed during the backtracking process before another alternative is investigated. Most other planning systems either do not allow alternatives (e.g., NOAH), or use a backtracking algorithm (e.g., MOLGEN [7], NONLIN [8]). An exception is the system described by Hayes-Roth et al. [2], in which a blackboard model is used to allow shifting focus between alternatives.
2.3 Deductive Operators
In addition to operators describing actions, SIPE allows specification of deductive operators that deduce facts from the current world state. As more complex domains are represented, it becomes increasingly important to deduce effects of actions from axioms about the world rather than explicitly representing these effects in operators. Deductive operators in SIPE may include both existential and universal quantifiers, and so provide a rich formalism for deducing (possibly conditional) effects of an action.
Deductive operators are written in the same formalism as other operators in SIPE (see Section 3), permitting the system to control deduction with the same mechanisms it uses to control the application of operators. Deductive operators have no instructions for expanding a node to a greater level of detail. Instead, if the precondition of a deductive operator holds, its effects can be added to the world model (in the same context in which the precondition matched) without changing the existing plan. This may "achieve" some goal in the plan (by deducing that it has already been achieved), and avoid the need to plan actions to achieve it.
The ability to perform deductions is important in many domains. Consider the "blocks world" (described in [6]) that has been used as a test domain for many planners. In this world only one block may be on top of another, so that whenever a block is moved, the operator for the move action can be written to explicitly state the effect that the block underneath will be clear. In the more general case in which one block might have many blocks on top of it, there may or may not be another block on the underneath block so the effects of the action must be conditional on this. Since systems like NOAH and NONLIN must mention effects explicitly (universally or existentially quantified variables are not allowed in the description of effects), they cannot represent this more general case with a single move operator.
In SIPE, deductive operators may be used to deduce all the clearing and unclearing effects that occur, so the operators themselves do not need to represent these effects. As the domain grows to include many operators, this aspect becomes very convenient. Furthermore, in SIPE, existentially quantified variables can occur in any predicate; in particular, they can occur in both preconditions and effects of operators. Thus, by using existential variables in the precondition of a deductive operator, SIPE can handle the general case of recognizing a clear block with one deductive operator.
2.4 Interaction With the User
A major goal has been to design and build a planning system that supports interactive planning. Among other features, the user must be able to invoke planning operations easily, but s/he must not be required to make tedious choices that could be performed automatically. In SIPE the user can either direct fairly low-level and specific planning operations or invoke higher-level operations that combine these lower-level ones. Some choices the system presently makes may not be optimal, so the user may want to make several of them. As the system's heuristics for selection improve, the user will presumably leave more choices to the system.
Planning operations the user can now invoke include: (1) determining which actions can be used to plan a given step in more detail; (2) testing to verify whether a given action can be used for further planning of a specific step; (3) testing the availability of resources for a given plan or subplan; (4) planning a step in greater detail (either with a user-supplied action or one chosen by the system); (5) indicating which object to use for an action; (6) instructing the system to select objects for an action; (7) instructing the system to find problems or conflicts in a plan or subplan; and (8) rearranging the order of certain plan steps when a conflict would arise from their parallel execution. In addition to supporting breadth-first and depth-first planning, these operations allow islands to be constructed in a plan (to arbitrary levels of detail), and then linked together later. The example in Section 5 shows a planning sequence in which some of the capabilities mentioned are invoked.
3. Examples of Operators
Several domains have been encoded in SIPE and problems solved in each of them; one is the blocks world described in [6]. Many domain-independent planning systems (e.g., NONLIN and NOAH) have presented their solutions to the same problems in this domain. To facilitate comparison with other systems, operators for the blocks world in SIPE are shown in Figure 2.
Figure 2: SIPE's Blocks World Operators
On the basis of their names, the arguments and resources in an operator are automatically constrained by the system to be objects of a certain type, i.e., BLOCK1 must be a block while OBJECT1 may be any object. The MAKECLEAR operator has the precondition (ON BLOCK1 OBJECT1), which means that it will not be applied unless there is a block on OBJECT1. Matching this precondition will constrain BLOCK1 to be one of the blocks on OBJECT1. In this encoding of the blocks world there will be only one block on OBJECT1, so this matching will instantiate BLOCK1. The more general case in which many blocks can be on one block is also handled. This case would result in a constraint that would limit possible instantiations for BLOCK1 without instantiating BLOCK1. When MAKECLEAR is applied to expand a plan node, its plot generates only one node, a PROCESS node. This new node represents the plan for putting BLOCK1 on OBJECT2, where OBJECT2 is constrained to be something other than OBJECT1. OBJECT2 is not instantiated, merely constrained, and can later be chosen to be a table, another block, or any other object in the domain (unless it is the instantiation of OBJECT1).
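As a rough illustration of what Figure 2 encodes, the two operators are paraphrased below as Python data structures. The field names and the `plot` layout are assumptions of this sketch, not SIPE's syntax.

```python
# Illustrative paraphrase of Figure 2's operators (invented representation).
PUTON = {
    "name": "PUTON",
    "arguments": {"BLOCK1": "block", "OBJECT1": "object"},
    "purpose": ("ON", "BLOCK1", "OBJECT1"),
    "plot": [
        # two parallel CLEARTOP goals, then the primitive action
        {"parallel": [("goal", ("CLEARTOP", "BLOCK1")),
                      ("goal", ("CLEARTOP", "OBJECT1"))]},
        {"process": "PUTON.PRIMITIVE",
         "resources": ["BLOCK1"],              # BLOCK1 is physically moved
         "effects": [("ON", "BLOCK1", "OBJECT1")]},
    ],
}

MAKECLEAR = {
    "name": "MAKECLEAR",
    "arguments": {"OBJECT1": "object"},
    "precondition": ("ON", "BLOCK1", "OBJECT1"),  # constrains/binds BLOCK1
    "purpose": ("CLEARTOP", "OBJECT1"),
    "plot": [
        # put BLOCK1 somewhere else; OBJECT2 is constrained, not instantiated
        {"process": "PUTON",
         "arguments": {"BLOCK1": "BLOCK1", "OBJECT1": "OBJECT2"},
         "constraints": [("not-same", "OBJECT2", "OBJECT1")]},
    ],
}
```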
The plot of the PUTON operator represents the plan of first clearing BLOCK1 and OBJECT1 in parallel, then of putting BLOCK1 on OBJECT1 with a primitive action. The effects of the PUTON.PRIMITIVE process node in the plot include a predicate that matches the predicate stating the purpose of the PUTON operator. The PUTON.PRIMITIVE node in the plan accomplishes the purpose of the PUTON operator and therefore inherits higher-level effects attached to the node being expanded. SIPE assumes that the two CLEARTOP goals produced in the plan must be kept true until the PUTON.PRIMITIVE process is executed, since it accomplishes the purpose of the PUTON operator. (SIPE allows for overriding this in the specification of operators.) If one of the CLEARTOP goals is achieved and then later becomes untrue before the PUTON.PRIMITIVE process is performed, the system will recognize the problem and attempt to correct it. The system will not complain if the CLEARTOP goal becomes false after the PUTON.PRIMITIVE operation.
The effect encoded as part of the PUTON.PRIMITIVE process is only one predicate, because the system deduces all the other effects with deductive operators. For example, suppose that BLOCK1 was on BLOCK2 when the PUTON operator was applied to put BLOCK1 on OBJECT1. The blocks world deductive operators would be triggered when (ON BLOCK1 OBJECT1) is posted as an effect, and they would (in a typical case) deduce (CLEARTOP BLOCK2), (NOT (ON BLOCK1 BLOCK2)), and (NOT (CLEARTOP OBJECT1)). These effects therefore do not have to be listed as effects in the PUTON operator.
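A toy rendering of this deduction step (assumed logic written for this write-up, not SIPE's formalism) is shown below; it reproduces the three deduced effects from the example above.

```python
# Toy blocks-world deduction: expand a posted (ON x y) effect into its
# consequences, given the prior world state as a set of ground predicates.
def deduce_effects(effect, world):
    deduced = []
    if effect[0] == "ON":
        _, block, new_support = effect
        # the block is no longer where it was, and its old support is clear
        for fact in list(world):
            if fact[0] == "ON" and fact[1] == block:
                old_support = fact[2]
                deduced.append(("NOT", ("ON", block, old_support)))
                deduced.append(("CLEARTOP", old_support))
        deduced.append(("NOT", ("CLEARTOP", new_support)))
    return deduced

world = {("ON", "BLOCK1", "BLOCK2"),
         ("CLEARTOP", "BLOCK1"),
         ("CLEARTOP", "OBJECT1")}
print(deduce_effects(("ON", "BLOCK1", "OBJECT1"), world))
# -> [('NOT', ('ON', 'BLOCK1', 'BLOCK2')), ('CLEARTOP', 'BLOCK2'),
#     ('NOT', ('CLEARTOP', 'OBJECT1'))]
```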
4. The Resource Heuristic
The blocks world problem used most frequently in the literature (SIPE's operators for it appear in Figure 2) is getting block A on block B on block C. In a procedural network, this goal is given as two parallel GOAL nodes – one for (ON A B) and one for (ON B C). Let us suppose that initially A, B, and C are all on the table. (Other popular configurations include having A on either B or C initially, but in both cases A must be moved to the table first so that B can be moved onto C. Thus, having all blocks on the table is a central subproblem in all these configurations.)
In NOAH, NONLIN, and SIPE, both original GOAL nodes are expanded with the PUTON operator or its equivalent. The central problem is to notice that B must be put on C before A is put on B (otherwise B will not be clear when it is to be moved onto C). NOAH and NONLIN both build up a table of multiple effects (TOME) that tabulates every predicate instance asserted and denied in the parallel expansions of the two GOAL nodes. Using this table, the programs detect that (CLEARTOP B) is asserted in the expansion of (ON B C), but is denied in the (ON A B) expansion. Both programs then solve this problem by doing (ON B C) first.
SIPE uses its resource heuristic to detect this problem and propose the solution without having to generate a TOME. (SIPE will eventually generate TOMEs to detect interactions that do not fit into the resource-reasoning paradigm.) When some object is listed in a plan node as a resource, the resource heuristic prevents that particular object from being mentioned as either a resource or an argument in any parallel plan node. In the example above, BLOCK1 is listed as a resource in PUTON.PRIMITIVE since it is being physically moved. Therefore, nothing in a parallel branch should try to move it, or even be dependent on its current location. Thus, as soon as the expansion of (ON B C) with the PUTON operator is accomplished, even before the expansion of (ON A B), SIPE recognizes that the plan is not valid, because B (matched to BLOCK1 in PUTON) is a resource in the PUTON.PRIMITIVE process and an argument in (ON A B).
Not being able to refer to a resource in another branch is sometimes too strong a restriction. SIPE also permits specification of shared resources, which will eventually allow the same object to be a shared resource or an argument in a parallel branch if certain conditions for the sharing are met (the sharing conditions have not been implemented yet).
SIPE's heuristic for solving a resource-argument conflict (as distinct from a resource-resource conflict) is to put the branch using the object as a resource before the parallel branch using the object as an argument. (This can be prevented by the user interacting with the system.) In this way SIPE decides that (ON B C) must come before (ON A B) without generating a TOME and without expanding both nodes. The assumption is that an object used as a resource will have its state or location changed by such a use, so the associated action must be done first to ensure that it will be "in place" when later actions occur that use it as an argument.
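A minimal sketch of that ordering decision follows (our own paraphrase of the heuristic; the branch representation is an assumption of this sketch).

```python
# Sketch of the resource heuristic over parallel branches.
def resource_conflicts(branches):
    """branches: list of (name, resources, arguments) tuples that would
    execute in parallel. Returns conflict reports and ordering decisions
    (the resource-using branch goes first)."""
    decisions = []
    for i, (n1, res1, _) in enumerate(branches):
        for j, (n2, res2, args2) in enumerate(branches):
            if i == j:
                continue
            for obj in res1:
                if obj in res2 and i < j:
                    decisions.append(("unresolved-resource-resource", n1, n2, obj))
                elif obj in args2:
                    # resource-argument conflict: do the resource user first
                    decisions.append(("order-before", n1, n2, obj))
    return decisions

branches = [
    ("(ON B C) expansion", ["B"], ["C"]),   # B is physically moved: a resource
    ("(ON A B) goal",      [],    ["A", "B"]),
]
print(resource_conflicts(branches))
# -> [('order-before', '(ON B C) expansion', '(ON A B) goal', 'B')]
```

Note that the conflict is reported from the goal statements alone: the second branch never has to be expanded.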
Resources are a useful heuristic, but they must be used properly by the person writing operators. They enable the system to identify incorrect plans more quickly. The PUTON operator could have listed BLOCK1 as a resource of the operator as a whole. In this case, SIPE would detect that a plan with a conflict would be produced as soon as the decision was made to expand (ON B C) with PUTON (before the expansion was actually done). Such early identification of conflicts can help prune undesirable plans from the search space even before they are generated. We have found resources generally helpful in encoding several domains in SIPE. We also find reasoning about resources to be natural (especially in scheduling tasks), and easy to communicate to the user.
Figure 3: First Level of Cooking Plan
5. An Example of User Interaction
An early version of the planner that supports graphical interaction with a user was implemented on the Dolphin, a research computer developed at Xerox-PARC [1]. The Dolphin has a high-resolution black-and-white bitmap display terminal that is used for both graphics and text. More recently, a color-graphics terminal connected to a DEC-2060 has been used for interacting with the user.
Both the procedural networks (plans) produced and, for certain domains, the domain configuration (principally the location of objects) are presented graphically. As new plans or partial plans are developed, they are also displayed on the terminal. The user can also choose to view different portions of the plan, at different levels of detail, and can look at any alternative plans. Steps in the plan (nodes in the network) can be referred to either by name or by pointing to them.
The user can also see a graphic representation of either the actual domain configuration or the one that will exist after some sequence of planned steps. The configuration is generally shown together with a plan or partial plan, and corresponds to the expected state following execution of that plan. The user, however, can vary this.
The planning choices available to the user appear in a menu. The user selects one of these actions by pointing to the relevant box. Current choices include the following:
- Planning the details of an action.
- Testing the resource requirements of a plan or plan-segment.
- Testing resource requirements and automatically fixing any problems, if possible.
- Identifying an object to be used in an action.
- Linearizing a parallel plan-segment.
- Viewing other parts of the plan, other plans, etc.
- Changing to another plan.
- Labeling a plan or plan-segment for future reference.
In addition, some plan-execution steps are available including:
- Record that a step has been executed.
- Record that several steps have been executed in parallel.
- Record that a previously unknown situation now holds.
The first two update the system's model of the situation; the third will also cause the system to determine whether the reported situation affects any planned actions and, if so, will inform the user so that the plan can be modified accordingly.
Whenever a node or domain object is to be specified, the user can either point to it or type its name. Currently, the user may make most of the planning choices, although we have developed mechanisms that automate more decisions, such as the allocation of resources and the selection of objects to be used in an action.
Two problem domains in addition to the blocks world have been encoded and are being used to test the planner. They are launching planes from the deck of an aircraft carrier (including the prelaunch movement of planes) and preparing a meal. We will use the meal-preparation domain to illustrate the system's interaction with a user.
Figure 3 shows a preliminary plan for preparing three dishes—a vegetable, a meat, and a potato dish. These are represented by the three GOAL nodes in the plan. At this level of abstraction, they are assumed to be performed in parallel, as indicated.
Figure 4 shows the plan at a later stage of development. Each goal node has been expanded to one more level of detail. Here the choice was made to prepare fried meat, steamed broccoli, and fried potatoes—so far still in parallel. The plan has been elaborated to the point that the meat and potatoes are in pans. A display of the expected domain configuration at this point would show the meat and potatoes in the pans.
Since burners and pots are critical resources, the user now decides to select the objects to be used and to test for possible resource conflicts. An automatic resource allocation mechanism could be invoked, or, as in the example, the user can make the choice by typing the names of objects or by pointing to them, and then execute a resources-testing action to test for conflicts. Each cooking step in this plan requires a burner; since there are only two, the system, using the resource heuristic described in Section 4, detects a conflict. The conflict has many solutions; one is linearizing the parallel branches, another is choosing an alternative expansion for one of the branches. The choice here (made by the user) was to linearize – frying the potato before frying the meat. The resulting plan is shown in Figure 5.
If an alternative action were chosen to resolve the conflict, a CHOICE node would be created in the plan, and the decisions made in the context of this new choice would be stored relative to the choice. Each alternative plan so created is named, either by the user if s/he desires or by the system, and at any time the user can elect to view and/or extend any of the plans by identifying the desired plan. Other parts of a plan can be labeled and/or displayed.
Planning will continue in this manner until each node in the network has been expanded to primitive actions and no resource or other conflicts exist. The user could have elected to start execution of the first part of the plan before fully elaborating it, using some of the execution-monitoring mechanisms currently being developed.
6. Summary of the Advantages of SIPE
A major difference between SIPE and previous planners is that SIPE is interactive. Its interactive capabilities help the user guide and direct the planning process, allowing alternative plans to be explored concurrently by means of the context mechanism. Thus the user can shift focus at will without being required to understand the program's search strategy or backtracking algorithm.
Enabling graceful interaction is one reason the operator description language (illustrated in Section 3) was designed to be straightforward and easy to understand. SIPE's deductive operators also contribute to this. They allow quantified information to be encoded and therefore can be used to make fairly sophisticated deductions, thus eliminating the need to express effects in operators when they can be deduced. Perspicuous operators are of primary importance in facilitating interaction with nonexperts who plan tasks using operators written by an expert. In addition, it is hoped that nonexperts will also be able to encode operators for their own domains without undue effort. (It is interesting to compare the readability of the operators in this paper with the SOUP code operators in NOAH for the same problem.)
SIPE incorporates a resource heuristic (described in Section 4) that recognizes conflicts more effectively, and earlier in the planning process, than did previous planning systems employing a table of multiple effects. It has been beneficial in user interaction to have reasoning about resources as a central part of the system, because resources seem to be a natural and intuitive way to think about objects in many domains.
Purposes in SIPE operators are used to coordinate higher-level effects with lower-level plans. Being able to mention purposes explicitly provides the flexibility needed to represent many domains. In some domains (such as the meal-preparation domain encoded in SIPE operators), operators have cleanup actions to perform after accomplishing the main purpose. SIPE has the flexibility to represent this, whereas in NOAH, higher-level effects would improperly be attached to the last cleanup action. SIPE also permits deductive operators, which provide even more flexibility for representing new domains.
One of the most important features of SIPE is the ability to constrain the possible values of variables with partial descriptions. In addition to the constraints described in PUTON and MAKECLEAR, an operator could require certain values for certain attributes of an object (e.g., that a block be colored red, or be bigger than 5 inches on a side). It is well known that this allows more efficient planning, since choices can be delayed until information has been accumulated.
Other advantages of constraints, however, are also critical. A key consideration is that constraints allow expression of a much wider range of problems. For example, SIPE can be given the problem of placing one red block on top of one blue block without being told specifically which blocks to use (assuming that there are many red and blue blocks in the world). Because they do not permit constraints, most domain-independent planners (e.g., NOAH and NONLIN) cannot express such a problem. The ability to express such problems is vital to many real-world applications. For example, consider the problem of scheduling airline flights in San Francisco. For the SFO-JFK flight, the scheduler wants to plan for the availability of a wide-bodied jet (without caring which one it is) and a pilot qualified to fly that type of aircraft (again, without caring which pilot, as long as the selected pilot meets this constraint). A planning system that required the user to state exactly which plane and pilot to make available before allowing the formulation of plans for their availability would not be acceptable.
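The following small sketch illustrates the underlying idea of variables as partial descriptions: candidates are filtered by constraints, and the final choice can be deferred. The representation is invented for illustration, not taken from SIPE.

```python
# Planning variables as partial descriptions (illustrative only).
def check(obj, constraint):
    key, value = constraint
    return obj.get(key) == value

def candidates(objects, var_constraints):
    """Return the objects satisfying every constraint on a variable."""
    return [o for o in objects
            if all(check(o, c) for c in var_constraints)]

world = [
    {"name": "B1", "type": "block", "color": "red"},
    {"name": "B2", "type": "block", "color": "blue"},
    {"name": "B3", "type": "block", "color": "red"},
]

top = candidates(world, [("type", "block"), ("color", "red")])
bottom = candidates(world, [("type", "block"), ("color", "blue")])
print([o["name"] for o in top], [o["name"] for o in bottom])
# -> ['B1', 'B3'] ['B2']  (the choice among candidates can be deferred)
```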
REFERENCES
Computer Support for Schema Integration
C. A. Ewald
DSTO-TN-0072
APPROVED FOR PUBLIC RELEASE
© Commonwealth of Australia
DEPARTMENT OF DEFENCE
DEFENCE SCIENCE AND TECHNOLOGY ORGANISATION
Information Technology Division
Electronics and Surveillance Research Laboratory
ABSTRACT
Currently, as part of an organisation-wide move for greater interoperability and reduction of duplication, Defence is examining a number of existing information systems in order to make them work more closely together. Schema integration is the merging of different database design specifications which have commonality. In this report, we examine support for schema integration provided by commercial off the shelf (COTS) software products, in particular computer aided software engineering (CASE) tools, and suggest desirable features which current products do not support. We examine which of the two products tested provides cost-effective support for schema evolution. The two products chosen for examination are InfoModeler version 2 (chosen for its support of the rich object-role modelling methodology, and natural language interface) and ERwin Version 2.6.1 (the market leader both in Australia and the U.S.A.).
It appears that no existing product provides a full schema merge capability, and this would not be expected, as schema integration needs to involve human creativity. However, some tasks within the integration process lend themselves to automation, and it is worthwhile examining tools to perform these tasks.
RELEASE LIMITATION
Approved for public release
Executive Summary
Currently, as part of an organisation-wide move for greater interoperability and reduction of duplication, Defence is examining a number of existing information systems in order to make them work more closely together. Schema integration is the merging of different database design specifications which have commonality. Commercial off-the-shelf (COTS) tools for schema integration are often expensive, as are custom-developed solutions. Like database design, schema integration is a process which requires creative human input, so full automation of the process is not expected. While no current COTS software products provide all the features considered desirable in a product to support the integration process, a lot can be done with state-of-the-art tools. In this report, we examine two such tools, showing that at least one has many features which provide significant benefit to the integrator. The tools chosen for this evaluation were InfoModeler Version 2 and ERwin Version 2.6.1. InfoModeler was selected for its support of the very expressive object role modelling language FORML, and for its natural language interface. ERwin is the market leader, both in Australia and the USA.
We considered features that are helpful in schema integration but are used in a different fashion from that expected in traditional schema design. These were schema comparison, reverse engineering, and integrity checking. Overall, InfoModeler was found to be more suitable for schema integration.
A number of features useful to integration were missing from both tools. In particular, relational integrity checking was not performed. We suggest that the integrated schema be checked for cycles and redundant functional dependencies. Simple and fast algorithms are available for these processes. A method of generating sample populations which illustrate a given set of constraints, and can be used to highlight a normally “hard to find” class of conflicts is also suggested.
Author
Cathy Ewald
Information Technology Division
Cathy Ewald was awarded a PhD in Computer Science from The University of Queensland, Brisbane in 1996. She has conducted research into object-relational and nested relational databases, and into schema integration at the University of South Australia and the University of Queensland. She joined DSTO in January 1997, and is continuing to address research issues related to making distinct databases interoperate.
Contents
1. Introduction
2. Schema Comparison
3. Reverse Engineering
4. Integrity Checking
5. Other Factors
6. Features Not Included in Either Tool
7. Conclusions
8. Acknowledgment
9. References
1. Introduction
Currently, as part of an organisation-wide move for greater interoperability and reduction of duplication, Defence is examining a number of existing information systems in order to make them work more closely together. Schema integration is the merging of different database design specifications which have commonality. A computer-based tool for schema integration should not interfere with the human creativity essential to such work, but should provide as much support as possible for the humans involved in the process. In this report, we examine two COTS database design tools, and compare their strengths and weaknesses for the schema evolution problem. We also describe additional features which neither tool supports, and which we suggest would be beneficial. The tools chosen for this experiment were ERwin Version 2.6.1 and InfoModeler 2. The tools were tested by altering sample schemata which came with the tool, and integrating the altered and new versions. Thus, the schemata were structurally very similar, with constraint and relationship differences.
The tools were compared with respect to features used in integration. The main features considered were schema comparison (that is, the identification of similarity or difference), reverse engineering (extraction of a schema from a working database or from data definitions in SQL or similar), and integrity checking (the detection of constraint conflicts and other problems introduced by integration). For each of these features, a description of the performance of each product is given, then the support of the products for the feature is rated on a scale of one to ten. Both technical performance and suitability for the Defence environment are taken into account. For example, InfoModeler performs reverse engineering well, but receives a lower score than ERwin for that task because it requires a connection to the on-line database. This raises issues of security and access rights in the Defence environment. Some other factors are also briefly examined. We then suggest some integrity-checking algorithms based on relational database theory which would also be beneficial, though they are not included in either tool.
2. Schema Comparison
By schema comparison, we mean all those features which aid the user in finding similarities and differences between entities, relationships and constraints on the schema.
We review the way in which each product supports this operation.
The standard edition of ERwin, on which our testing was carried out, has no custom facilities for comparisons of schemata. However, the add-on product Model Mart provides a “Complete Compare” function which detects and reports on differences in entity and relationship definition, and specification of a limited variety of constraints (participation, and keys). Complete Compare is primarily designed to reconcile different versions or views of the same schema, but would be helpful in identification of constraint conflict, and overlap between schemata, provided that they were substantially similar. It assumes that all naming conflicts have been resolved. That is, items with different names are taken to be different items. Model Mart has a merge feature which actually combines schemata, but once again these need to be substantially similar. In addition, Model Mart provides a sophisticated tool for configuration management. This includes good support for version management and tracking, but these facilities would have limited application to the schema integration problem. It should also be noted that Model Mart is an optional add-on, which must be purchased at extra cost.
InfoModeler does not require add-on software to support integration, with the configuration management and comparison features being a part of the standard product. Schemata to be integrated or related in some way are explicitly gathered by the user into a project. The tool attempts to reconcile schemata within a project, highlighting cases where any item is multiply defined. One can then mark some version(s) of the items in question as "external", meaning that they are allowed variations from the official definition, or rename them using a sophisticated name space maintenance feature. Once again, this is not an integration or merging tool, but a function similar to Complete Compare. As with Complete Compare, the tool cannot cope automatically with naming conflicts. Objects which have the same name are highlighted by the tool. However, similar objects with different names are ignored. Models within a project are still regarded as independent models. However, creation of a "dictionary" merges them into a single new schema. The old schemata are left unchanged, though linked with the dictionary so that changes in a source schema can be propagated to the dictionary and vice versa.
While this is not a "virtual table", and does not create the attendant technical problems, it is a similar concept when seen from a conceptual viewpoint. Human intervention is required in this process.
Comparison rating for ERwin: 4/10
Comparison rating for InfoModeler: 8/10
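To illustrate the kind of name-based comparison both tools perform, consider schemata as maps from item names to definitions. This is a simplified sketch of the general idea, not how Complete Compare or InfoModeler are implemented.

```python
# Name-based schema comparison (illustrative sketch).
def compare_schemata(schema_a, schema_b):
    """Each schema maps item name -> definition (here, a dict of properties).
    Returns (conflicts, only_in_a, only_in_b)."""
    conflicts = {name: (schema_a[name], schema_b[name])
                 for name in schema_a.keys() & schema_b.keys()
                 if schema_a[name] != schema_b[name]}
    return (conflicts,
            schema_a.keys() - schema_b.keys(),
            schema_b.keys() - schema_a.keys())

a = {"Employee": {"key": ["emp_id"], "mandatory": ["name"]},
     "Department": {"key": ["dept_id"]}}
b = {"Employee": {"key": ["emp_id", "dept_id"], "mandatory": ["name"]},
     "Project": {"key": ["proj_id"]}}
print(compare_schemata(a, b))
# The conflicting 'Employee' keys are reported; 'Department' and 'Project'
# are silently treated as distinct items, which is precisely the
# naming-conflict blind spot noted above.
```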
3. Reverse Engineering
Reverse engineering is the process of defining a schema from a database implementation.
ERwin was found to give satisfactory performance in reverse engineering, from either a data definition language script or an active database. Non-relational legacy systems such as A/REV and COBOL cannot be re-engineered using ERwin, nor can object-based or object relational systems.
In InfoModeler, a reverse-engineering facility exists, but requires an active database connection. This is a disadvantage for some Defence work in which users are prepared to give access to metadata, but not grant connection rights to databases. "Dummy" databases can be constructed from metadata, but when operating in a heterogeneous environment, this requires access to a large number of different database management systems. The issue in our environment is that users of working, on-line systems are understandably reluctant to allow metadata managers to access, and run tools on, their working system. It is regarded as much safer to simply allow access to an SQL "build file" which describes the structure of the database. This avoids the need for these people to have power over working systems, and also reduces the expense of purchasing a number of different database management programs. Once again, only relational and common desktop databases such as MS Access systems can be re-engineered. However, a new release of InfoModeler is now available which supports the full range of SQL 3 constructs, thus allowing most extended data types to be understood by the tool. Users even have the facility to define their own types.
Reverse engineering rating for ERwin: 8/10
Reverse engineering rating for InfoModeler: 6/10
4. Integrity Checking
Integrity checking is the process of detecting semantic conflicts due to integration.
Neither tool provides a great degree of support for this process. The main features for constraint conflict detection have been discussed in the previous section. Recall that Model Mart detects some conflicts in its Complete Compare, and InfoModeler does so when a schema is added to a project or explicitly checked by the user. Model Mart was not tested due to cost considerations.
While the tools do not have specifically tailored features for this task, InfoModeler was found to have generic features which could be adapted to support this process. Firstly, in addition to the entity-relationship (ER) modelling language supported by ERwin and other tools, InfoModeler gives access to a more expressive Object-Role Modelling (ORM) language, which allows a wider range of constraints to be expressed in the model. ER is the most widely used data modelling language in Defence, as well as in Australian and American industry. Constraints in ORM which are not supported by ER include reflexive, transitive and symmetric constraints on circular relationships, domain range constraints, and subset, equality and exclusion constraints between sets of entities playing roles related in some sense. Thus, by definition, the constraint report of InfoModeler is potentially more powerful. InfoModeler can additionally identify some cycles, a large number of mistakes or redundancies related to subtyping, many common data modelling mistakes, lack of a unique identifier for an entity, and some types of constraint conflict. Relationship errors, such as a relationship accidentally reversed, are detected when checking is done, and the tool does not allow relationships to be included "in the wrong direction". A subtype that is not defined in terms of its supertype will be reported. It also detects cases where any data item has more than one definition or more than one different set of constraints.
None of the checks performed by either of these tools requires access to populations (instances). However, InfoModeler allows the user to input a sample population for every "fact". Certain constraints, namely those related to keys (unique identifiers), can also be inferred from these user-supplied information examples. Thus, if a constraint is not consistent with the examples, the tool detects this. The tool has the capability to suggest constraints of this type based on a set of examples. This facility is restricted to uniqueness (key) constraints, which can be inferred from data using algorithmic techniques. It is not intended to check the quality of data, but rather to serve as a "reality check" on the design. That is, it helps users to decide if the constraints they have specified exclude data which they might wish to include, or fail to exclude situations forbidden in the real world. It is equally applicable to the design of a new schema as to integration. The user is offered the chance to alter uniqueness constraints which are suggested by the system.
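The key-suggestion facility can be approximated by an algorithm of the following shape (our own sketch; InfoModeler's actual technique is not documented here): a column set is suggested as a uniqueness constraint only if no two sample rows agree on it, and supersets of an already-suggested key are skipped.

```python
# Infer candidate uniqueness constraints from a sample population (sketch).
from itertools import combinations

def suggest_keys(columns, rows):
    """Return minimal column subsets whose sample values are all distinct."""
    suggestions = []
    for size in range(1, len(columns) + 1):
        for subset in combinations(columns, size):
            if any(set(subset) >= set(s) for s in suggestions):
                continue  # a smaller suggested key already covers this subset
            values = [tuple(row[c] for c in subset) for row in rows]
            if len(values) == len(set(values)):
                suggestions.append(subset)
    return suggestions

rows = [
    {"emp_id": 1, "name": "Lee",  "dept": "ITD"},
    {"emp_id": 2, "name": "Kaur", "dept": "ITD"},
    {"emp_id": 3, "name": "Lee",  "dept": "C3I"},
]
print(suggest_keys(["emp_id", "name", "dept"], rows))
# -> [('emp_id',), ('name', 'dept')]
# A "reality check" on the design, not a data-quality validation.
```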
ERwin performs some checking related to design errors, for example identification of some cycles on the schema. Only structural checks such as a check for cycles involving entities and subtypes are performed.
Checking rating for ERwin: 4/10
Checking rating for InfoModeler: 8/10
5. Other Factors
In this section, the general features of the tools are examined. Clearly, users want a tool that they can use easily, in addition to having features for schema integration. The general features of the packages when used for ER modelling are comparable, to the extent that the choice of product for conventional ER modelling is largely a matter of taste. Both have good user interfaces, the ability to "browse" a schema for a particular construct, searching tools, the ability to define complex colour schemes, and some checking facilities. InfoModeler, as mentioned earlier, has more ability to analyse and check a schema and comes with advanced reporting capabilities. It also provides a user interface which encourages the users to properly document the model. When using FORML, the user can perform modelling almost entirely in natural language. ERwin runs on a wide variety of hardware platforms and operating systems, while InfoModeler only runs on Windows platforms. Data can be imported into InfoModeler from ERwin when a .dll file to perform data conversion is installed. This file is supplied with the current release, and is freely available. ERwin cannot directly read InfoModeler files. Reports and schemata from InfoModeler can be exported as RTF files, for inclusion in written reports. InfoModeler can generate, but not read, a wide range of SQL and PC-based database script files. As mentioned earlier, reverse engineering can be performed if connected to the database. ERwin can generate and read most SQL and PC database script files, providing a way of interchanging data. ERwin also provides a wide range of report formats, as does InfoModeler. ERwin reports may be exported as text files delimited by commas or tabs, to allow input into other programs such as word processors. RTF format is not provided.
Both stand-alone products cost less than $6000, with InfoModeler's estimated recommended retail price per seat being $4600, and ERwin's $5499. A cheaper desktop version of InfoModeler ($800) is also available. This version has the full range of modelling features, but can only reverse-engineer from, and generate code for, desktop PC databases. Five copies of ERwin, with Universal Directory and Model Mart, cost around $40 000.
6. Features not Included in Either Tool
As has been mentioned earlier, neither tool provides a full integration function. As mentioned in the previous section, current COTS products provide only limited support for integrity checking in the context of schema integration. We suggest that some algorithmic checking procedures based on relational database theory be implemented. Since schema diagrams identify keys, functional dependency (FD) constraints can be extracted from any schema and mapped to a graphical representation. In conceptual data models, most FDs are key constraints, that is, constraints based on the unique identifier of an entity. Functional dependency graphs are defined and explained in [Yang86]. In [Ewald96] and associated papers (E093, E094, E095), simple graph search and comparison operations are then used to detect conflict, redundancy and inconsistency on evolving schemata. The classical synthesis algorithm [Maier 83] can also be applied to remove redundancy or to produce a well-designed relational database from a conceptual specification.
Cycles on a conceptual schema may result from addition of a redundant dependency and from conflict between existing and new dependencies. A redundant dependency is one which can be derived from the others on the schema using a set of formal axioms. For this reason, it is often beneficial to have a human examine all cycles on a schema to ensure that none result from such schema problems. A "loop" on the conceptual schema may result in the same information being stored within a local schema in multiple ways. Addition of new information to a schema may also create such a loop. Conflicting dependencies may be added during integration, resulting in a graph which contains two representations of the dependency relationship between a given pair of attribute sets.
All these situations can be detected by performing simple graphical tests on the FD graphs created from the schemata under consideration. Comparison of current and previous graphs is used to detect conflict, while depth first search identifies cycles. These quick screening tests report any potential problems to a human designer, who can then use the synthesis algorithm or conceptual re-design techniques to eliminate problems.
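A sketch of such a screening test follows, assuming FDs have already been extracted into a graph over single attributes (real FD graphs use attribute sets, but the search is the same).

```python
# Build an FD graph and report cycles via depth-first search (sketch).
def find_cycles(fd_edges):
    """fd_edges: dict node -> set of nodes it determines.
    Returns one attribute cycle per detected back edge."""
    cycles, state, stack = [], {}, []   # state: None / "open" / "done"

    def dfs(node):
        state[node] = "open"
        stack.append(node)
        for succ in fd_edges.get(node, ()):
            if state.get(succ) == "open":          # back edge: a cycle
                cycles.append(stack[stack.index(succ):] + [succ])
            elif state.get(succ) is None:
                dfs(succ)
        stack.pop()
        state[node] = "done"

    for node in fd_edges:
        if state.get(node) is None:
            dfs(node)
    return cycles

# emp_id -> dept, dept -> manager, plus a (possibly redundant or conflicting)
# manager -> emp_id dependency added during integration closes a loop:
fds = {"emp_id": {"dept"}, "dept": {"manager"}, "manager": {"emp_id"}}
print(find_cycles(fds))   # -> [['emp_id', 'dept', 'manager', 'emp_id']]
```

Each reported cycle is handed to the human designer for review, as recommended above.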
The above techniques still detect only a limited set of constraint conflicts, namely those involving functional dependency type constraints. These are the constraints which can be checked without reference to sample schema populations. A further level of checking which considers a larger range of constraints, and examines the potentially complex interactions between constraints is recommended. This is not done by either of the tools examined above. From a functional dependency graph, we can generate a list of the dependencies which are known not to hold. These are known as potential violations, and a formal theory is presented in [Ewald96]. Based on these, algorithms have been developed to detect constraint conflict. This is done by constructing constraint patterns, using an algorithm which halts if a pattern cannot be found. These algorithms also generate small, meaningful sets of information examples which illustrate the features of the set of constraints under consideration. These are particularly useful with the ORM modelling techniques, which encourage interaction with users by means of information examples. The InfoModeler tool allows the designer to enter examples of data elements at design time, and can infer some constraints based on such examples. Conflict between functional, set inclusion, set exclusion and mandatory constraints may be detected by these algorithms. Real information examples from a repository or a relational database may be retrieved to provide output in a form with which the users of the system are familiar.
7. Conclusions
Current COTS products provide useful support for schema integration. In this report, the support of ERwin 2.6.1 and InfoModeler 2 for the basic activities of the integrator is assessed.
The table below summarises the results of the investigations (the ratings are those given in Sections 2 to 4), considering a version of ERwin without Model Mart extensions. Note that Model Mart supports relevant features, but at a cost.

Feature                 ERwin   InfoModeler
Schema comparison       4/10    8/10
Reverse engineering     8/10    6/10
Integrity checking      4/10    8/10

Graphical user interface, and support for ER modelling, are similar in the two products. InfoModeler's additional ORM features complement the ER modelling methodology, and provide a more detailed view of the data model. It is possible to completely convert from ER to ORM; however, a recommended approach is to use both methods, presenting the ER view as an abstraction when a fully detailed view is not needed. This also allows communication with domain experts in natural language, and with experienced modellers using whichever graphical notation they prefer. The InfoModeler product supports conversion of models between ER and ORM forms and detailed integrity checking before conversion.
As can be seen, existing software can help with schema integration, but neither product examined could be considered a full schema integration tool. Some easy-to-implement features which would be beneficial in such a tool are suggested. In particular, checking for cyclic key constraints, removal of redundancy, and generation of suitable information examples are beneficial and computationally cost-effective. The examples generated are artificial, but an alternative is to use algorithms to select a small amount of relevant data from a larger repository of examples (either real data or examples given by users). That is, the algorithms as originally developed construct symbolic "constraint patterns" using semantically meaningless symbols. However, these can then be replaced by matching data from a repository, which is more meaningful, especially to domain experts from areas other than information technology.
8. Acknowledgment
The author would like to thank the DSTO reviewer, Mr Conn Copas, for his helpful comments on a draft version of this report.
9. References
Data Augmentation Using GANs
Project 236754, Dima Birenbaum
Supervisors: Yaron Honen, Gary Mataev
Main Goal
Our main goal is to generate faces with specific emotions. This generated data will serve as an external data source that helps improve a classifier.
Part A
Synthetic Data Generation by emotion transition using Generative Adversarial Networks
The Data
For the intermediate stage, the FER2013 dataset was chosen.
This dataset contains 48x48-pixel images labelled with 7 emotion expressions: Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral.
The class distribution in the FER2013 dataset is:
<table>
<thead>
<tr>
<th>Emotion</th>
<th>Amount</th>
</tr>
</thead>
<tbody>
<tr>
<td>Angry</td>
<td>4593</td>
</tr>
<tr>
<td>*Disgust</td>
<td>547</td>
</tr>
<tr>
<td>Fear</td>
<td>5121</td>
</tr>
<tr>
<td>Happy</td>
<td>8989</td>
</tr>
<tr>
<td>Sad</td>
<td>6077</td>
</tr>
<tr>
<td>Surprise</td>
<td>4002</td>
</tr>
<tr>
<td>Neutral</td>
<td>6198</td>
</tr>
</tbody>
</table>
* - the Disgust class is discussed further below
The project uses the CycleGAN architecture for image-to-image style transfer.
CycleGAN is a two-way GAN consisting of 2 Discriminators and 2 Generators.
The idea is to transfer an input from one domain to the other and back.
Theory Background
Given domains $A$ and $B$, mapping functions $G:A \rightarrow B$ and $F:B \rightarrow A$, and associated adversarial discriminators $D_A$, $D_B$: $D_B$ encourages $G$ to translate $A$ into outputs indistinguishable from domain $B$, and vice versa for $D_A$ and $F$.
To further regularize the mappings, a *cycle consistency loss* is used. The main intuition is that when translating from one domain to the other and back again, the model should arrive back where it started. The cycle consistency loss consists of:
(a) Forward cycle-consistency: $a \rightarrow G(a) \rightarrow F(G(a)) \approx a$
(b) Backward cycle-consistency: $b \rightarrow F(b) \rightarrow G(F(b)) \approx b$
Target and loss functions
- Adversarial loss:
\[
\mathcal{L}_{GAN}(G, D_B, A, B) = \mathbb{E}_{b \sim p_{data}(b)}[(D_B(b) - 1)^2] + \mathbb{E}_{a \sim p_{data}(a)}[\left(D_B(G(a))\right)^2]
\]
- Cycle consistency loss:
\[
\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{a \sim p_{data}(a)}[\|F(G(a)) - a\|_1] + \mathbb{E}_{b \sim p_{data}(b)}[\|G(F(b)) - b\|_1]
\]
- Full objective:
\[
\mathcal{L}(G, F, D_A, D_B) = \mathcal{L}_{GAN}(G, D_B, A, B) + \mathcal{L}_{GAN}(F, D_A, B, A) + \lambda \mathcal{L}_{cyc}(G, F)
\]
- Target function:
\[
\hat{G}, \hat{F} = \arg\min_{G, F} \max_{D_A, D_B} \mathcal{L}(G, F, D_A, D_B)
\]
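To make the objective concrete, here is a minimal PyTorch sketch of the generator-side loss for one batch. The names (`G_A2B`, `G_B2A`, `D_A`, `D_B`) are illustrative, the least-squares form follows the adversarial loss above, and the soft real target of 0.9 anticipates the label-smoothing trick used later; this is a sketch under these assumptions, not the project's actual code.
```
import torch
import torch.nn.functional as F_loss

REAL_TARGET = 0.9  # soft label for "real", instead of 1

def generator_loss(G_A2B, G_B2A, D_A, D_B, real_A, real_B, lam=10.0):
    fake_B = G_A2B(real_A)   # G : A -> B
    fake_A = G_B2A(real_B)   # F : B -> A

    # Least-squares adversarial terms: push the discriminator outputs
    # on the fakes towards the "real" target.
    loss_gan = ((D_B(fake_B) - REAL_TARGET) ** 2).mean() \
             + ((D_A(fake_A) - REAL_TARGET) ** 2).mean()

    # Cycle consistency (L1): F(G(a)) ~ a and G(F(b)) ~ b.
    loss_cyc = F_loss.l1_loss(G_B2A(fake_B), real_A) \
             + F_loss.l1_loss(G_A2B(fake_A), real_B)

    return loss_gan + lam * loss_cyc
```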
The Model: *Forward Cycle A2B*
```
Start
Input A → Discriminator A → Decision [0, 0.9] → Generator A2B → Generated B → Discriminator B → Decision [0, 0.9] → Generator B2A → Reconstructed A
```
The Model: *Backward Cycle B2A*
```
Start
Input B → Discriminator B → Decision [0, 0.9] → Generator B2A → Generated A → Discriminator A → Decision [0, 0.9] → Generator A2B → Reconstructed B
```
Both cycles together
(Diagram: the two cycles share the same generators and discriminators. Cycle A: Real Image (A) → Generator A2B → Fake Image (B) → Generator B2A → Reconstructed (A), with Discriminator B scoring the fake with a decision in [0, 0.9]. Cycle B: Real Image (B) → Generator B2A → Fake Image (A) → Generator A2B → Reconstructed (B), with Discriminator A scoring the fake.)
The Architecture
Anatomy of CycleGAN
Generator and Discriminator
The Networks:
**Generator**
The Generator consists of 3 parts:
- Encode (downsampling)
- Transform (6 residual blocks)
- Decode (upsampling).
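A minimal PyTorch sketch of this encode/transform/decode structure might look as follows; the channel widths and the use of InstanceNorm are our assumptions, not details taken from the project:
```
import torch.nn as nn

class ResBlock(nn.Module):
    """One residual block: two 3x3 convs plus a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

generator = nn.Sequential(
    # Encode: downsample 48x48 -> 12x12 while widening channels.
    nn.Conv2d(1, 64, 7, padding=3), nn.ReLU(True),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(True),   # 48 -> 24
    nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(True),  # 24 -> 12
    # Transform: 6 residual blocks at constant resolution.
    *[ResBlock(256) for _ in range(6)],
    # Decode: upsample back to 48x48 and map to one output channel.
    nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1), nn.ReLU(True),  # 12 -> 24
    nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(True),   # 24 -> 48
    nn.Conv2d(64, 1, 7, padding=3), nn.Tanh(),
)
```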
The Networks: **Discriminator**
The Discriminator is a simple CNN that determines whether an image is real or fake.
First results
- Neutral -> Happy transition
Problems
- “Dirty” dataset, unbalanced classes, mislabeled data
- Similarity between classes (for example: fear-angry, sad-neutral)
- Lack of data (Disgust Class – 550 images)
- Discriminator learns faster than the Generator.
- Vanishing gradient
- Quality and artifacts of output images
Solution – Weighted Cycle Loss
- Data augmentation, transform on training
- Different learning rates for generator and discriminator: 0.0002, 0.0001
- Learning rate decay
- Soft labels for discriminator: Real target is 0.9 instead of 1
- Improving quality by changing cycle loss to:
\[
\mathcal{L}_{cyc}(G, F, D_A, A, \gamma) = \mathbb{E}_{a \sim p_{data}(a)} \left[ D_A(a) \cdot \left[ \gamma \cdot \| f_{D_A} \left( F \left( G(a) \right) \right) - f_{D_A}(a) \|_1 + (1 - \gamma) \cdot \| F(G(a)) - a \|_1 \right] \right]
\]
where \( \gamma \in [0, 1] \) increases linearly with epochs towards 1, and \( f_{D_{(\cdot)}} \) is the feature extractor given by the last layer of \( D_{(\cdot)} \).
- The final objective is then updated to:
\[
\mathcal{L}(G, F, D_A, D_B) = \mathcal{L}_{GAN}(G, D_B, A, B) + \mathcal{L}_{GAN}(F, D_A, A, B) + \lambda \mathcal{L}_{cyc}(G, F, D_A, A, \gamma) + \lambda \mathcal{L}_{cyc}(G, F, D_B, B, \gamma)
\]
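A sketch of this weighted cycle loss in PyTorch, for the A-side direction. Here `rec_A` stands for the reconstruction \( F(G(a)) \), `feat_A` is assumed to be an accessor returning the last-layer features of `D_A` (the feature extractor \( f_{D_A} \)), and `gamma` is the externally ramped weight; names and reductions are illustrative.
```
import torch

def weighted_cycle_loss(real_A, rec_A, D_A, feat_A, gamma):
    n = real_A.size(0)
    # Per-sample weight D_A(a); detached so this term does not train the discriminator.
    w = D_A(real_A).detach().view(n, -1).mean(dim=1)
    # Feature-space and pixel-space L1 distances, reduced per sample.
    d_feat = (feat_A(rec_A) - feat_A(real_A)).abs().view(n, -1).mean(dim=1)
    d_pix  = (rec_A - real_A).abs().view(n, -1).mean(dim=1)
    # gamma in [0, 1] shifts the mix from pixel L1 towards feature L1 over training.
    return (w * (gamma * d_feat + (1.0 - gamma) * d_pix)).mean()
```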
Results of improved model
The Conclusion
As we saw, we face a number of problems:
- model instability;
- vanishing gradient;
- a dirty or small dataset;
- control over the training;
- the battle between generator and discriminator, etc.
How can we improve the stability of training? The answer: the Wasserstein distance.
Wasserstein CycleGAN is a two-way Wasserstein GAN that consists of 2 Critics and 2 Generators.
The idea: given a distribution of mass $\mu(x)$ on a space $X$, we wish to transport the mass in such a way that it is transformed into the distribution $\nu(x)$ on the same space.
Our main goal, and bottleneck, is to create data that has the same distribution as the target domain; one of the most suitable and available methods for this task is *the Wasserstein distance*.
*The Wasserstein distance* is the minimum cost of transporting mass when converting the data distribution $q$ into the data distribution $p$. For the real data distribution $\mathbb{P}_r$ and the generated data distribution $\mathbb{P}_g$ it is defined as the greatest lower bound (infimum) of the cost over all transport plans.
Theory Background - *The Wasserstein distance*
- The Wasserstein distance loss:
\[
W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\gamma \in \Pi(\mathbb{P}_r, \mathbb{P}_g)} \mathbb{E}_{(x,y) \sim \gamma} [||x - y||]
\]
where \(\Pi(\mathbb{P}_r, \mathbb{P}_g)\) denotes the set of all joint distributions \(\gamma(x, y)\) whose marginals are \(\mathbb{P}_r\) and \(\mathbb{P}_g\), respectively.
- However, the Wasserstein distance in this form is highly intractable. Using the *Kantorovich-Rubinstein duality*, the calculation can be simplified to:
\[
W(\mathbb{P}_r, \mathbb{P}_\theta) = \sup_{\|f\|_L \leq 1} \mathbb{E}_{x \sim \mathbb{P}_r} [f(x)] - \mathbb{E}_{x \sim \mathbb{P}_\theta} [f(x)]
\]
where \(\sup\) is the least upper bound and \(f\) is a 1-Lipschitz function satisfying the constraint:
\[|f(x_1) - f(x_2)| \leq 1 \cdot |x_1 - x_2|\]
So to calculate the Wasserstein distance, we just need to find a 1-Lipschitz function. We build a deep network to learn it. This network is very similar to the discriminator $D$, just without the sigmoid function and outputs a scalar score* rather than a probability.
* - This score can be interpreted as how real the input images are.
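In code, the duality turns the critic objective into a simple difference of mean scores. A minimal PyTorch sketch follows; the weight clipping shown is the original WGAN's crude way of enforcing the Lipschitz constraint, which WGAN-GP (mentioned later) replaces with a gradient penalty. The function names are our own and `critic` is assumed to be an `nn.Module` returning a scalar score per image.
```
def critic_loss(critic, real, fake):
    # Maximise E[f(real)] - E[f(fake)]  <=>  minimise the negation.
    return -(critic(real).mean() - critic(fake).mean())

def generator_loss_w(critic, fake):
    # The generator tries to raise the critic's score on generated images.
    return -critic(fake).mean()

def clip_weights(critic, c=0.01):
    # Crude enforcement of the 1-Lipschitz constraint (original WGAN).
    for p in critic.parameters():
        p.data.clamp_(-c, c)
```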
The Networks: **Generator**
The Generator is the same as in CycleGAN:
- Encode (downsampling)
- Transform (6 residual blocks)
- Decode (upsampling).
The Networks:
**Critic**
Same as Discriminator, but without Sigmoid activation at the end.
```
Conv2D(64, 4x4, s=2)    # input 48x48x1 -> 24x24x64 (stride-2 4x4 convs, padding 1 assumed)
Batch Normalization
LeakyReLU(0.2)
Conv2D(128, 4x4, s=2)   # -> 12x12x128
Batch Normalization
LeakyReLU(0.2)
Conv2D(256, 4x4, s=2)   # -> 6x6x256
Batch Normalization
LeakyReLU(0.2)
Conv2D(512, 4x4, s=2)   # -> 3x3x512
Batch Normalization
LeakyReLU(0.2)
Linear(1)               # flatten, then a single unit: an unbounded scalar score (no Sigmoid)
```
Results of Wasserstein CycleGAN
The Results
Let’s see visual results of the work.
Other results: Teenager
Other results: Women
Other results: Men
Other results: Old
Other results: Asian
Other results: Noisy
Other results: Noisy - Watermarks
Results on transformed data
Other results: Children #1
Other results: Children #2
Part B
- Classifiers on FER2013 Dataset
- Fake Neutral Images Generator Using DCGAN
The Classifier
Classification on a “dirty” dataset may be challenging.
Results were checked on two different classifiers:
- Simple (~65%)
- Current State of the art (73%)
Surreal (Paper) Classifier - Architecture (FER2013)
Doesn't work
<table>
<thead>
<tr>
<th>Layer Type</th>
<th>Configuration</th>
</tr>
</thead>
<tbody>
<tr>
<td>Input image</td>
<td>48x48x1</td>
</tr>
<tr>
<td>Convolution&ReLU</td>
<td>[3, 3, 1, 64] s=1</td>
</tr>
<tr>
<td>Max-Pooling&Norm</td>
<td>[1, 3, 3, 1] s=2</td>
</tr>
<tr>
<td>Convolution&ReLU</td>
<td>[3, 3, 64, 128] s=1</td>
</tr>
<tr>
<td>Max-Pooling&Norm</td>
<td>[1, 3, 3, 1] s=2</td>
</tr>
<tr>
<td>FC*2</td>
<td>256</td>
</tr>
<tr>
<td>Softmax</td>
<td>[256, 7]</td>
</tr>
<tr>
<td>Output logits</td>
<td>[7]</td>
</tr>
</tbody>
</table>
Simple Classifier - Architecture
<table>
<thead>
<tr>
<th>Layer (type)</th>
<th>Output Shape</th>
<th>Param #</th>
</tr>
</thead>
<tbody>
<tr>
<td>conv2d_1 (Conv2D)</td>
<td>(None, 46, 46, 64)</td>
<td>640</td>
</tr>
<tr>
<td>conv2d_2 (Conv2D)</td>
<td>(None, 46, 46, 64)</td>
<td>36928</td>
</tr>
<tr>
<td>batch_normalization_1 (Batch)</td>
<td>(None, 46, 46, 64)</td>
<td>256</td>
</tr>
<tr>
<td>max_pooling2d_1 (MaxPooling2D)</td>
<td>(None, 23, 23, 64)</td>
<td>0</td>
</tr>
<tr>
<td>dropout_1 (Dropout)</td>
<td>(None, 23, 23, 64)</td>
<td>0</td>
</tr>
<tr>
<td>conv2d_3 (Conv2D)</td>
<td>(None, 23, 23, 128)</td>
<td>73856</td>
</tr>
<tr>
<td>batch_normalization_2 (Batch)</td>
<td>(None, 23, 23, 128)</td>
<td>512</td>
</tr>
<tr>
<td>conv2d_4 (Conv2D)</td>
<td>(None, 23, 23, 128)</td>
<td>147584</td>
</tr>
<tr>
<td>batch_normalization_3 (Batch)</td>
<td>(None, 23, 23, 128)</td>
<td>512</td>
</tr>
<tr>
<td>max_pooling2d_2 (MaxPooling2D)</td>
<td>(None, 11, 11, 128)</td>
<td>0</td>
</tr>
<tr>
<td>dropout_2 (Dropout)</td>
<td>(None, 11, 11, 128)</td>
<td>0</td>
</tr>
<tr>
<td>conv2d_5 (Conv2D)</td>
<td>(None, 11, 11, 256)</td>
<td>295168</td>
</tr>
<tr>
<td>batch_normalization_4 (Batch)</td>
<td>(None, 11, 11, 256)</td>
<td>1024</td>
</tr>
<tr>
<td>conv2d_6 (Conv2D)</td>
<td>(None, 11, 11, 256)</td>
<td>590880</td>
</tr>
<tr>
<td>batch_normalization_5 (Batch)</td>
<td>(None, 11, 11, 256)</td>
<td>1024</td>
</tr>
<tr>
<td>max_pooling2d_3 (MaxPooling2D)</td>
<td>(None, 5, 5, 256)</td>
<td>0</td>
</tr>
<tr>
<td>dropout_3 (Dropout)</td>
<td>(None, 5, 5, 256)</td>
<td>0</td>
</tr>
<tr>
<td>conv2d_7 (Conv2D)</td>
<td>(None, 5, 5, 512)</td>
<td>1180160</td>
</tr>
<tr>
<td>batch_normalization_6 (Batch)</td>
<td>(None, 5, 5, 512)</td>
<td>2048</td>
</tr>
<tr>
<td>conv2d_8 (Conv2D)</td>
<td>(None, 5, 5, 512)</td>
<td>2359808</td>
</tr>
<tr>
<td>batch_normalization_7 (Batch)</td>
<td>(None, 5, 5, 512)</td>
<td>2048</td>
</tr>
<tr>
<td>max_pooling2d_4 (MaxPooling2D)</td>
<td>(None, 2, 2, 512)</td>
<td>0</td>
</tr>
<tr>
<td>dropout_4 (Dropout)</td>
<td>(None, 2, 2, 512)</td>
<td>0</td>
</tr>
<tr>
<td>flatten_1 (Flatten)</td>
<td>(None, 2048)</td>
<td>0</td>
</tr>
<tr>
<td>dense_1 (Dense)</td>
<td>(None, 512)</td>
<td>1049088</td>
</tr>
<tr>
<td>dropout_5 (Dropout)</td>
<td>(None, 512)</td>
<td>0</td>
</tr>
<tr>
<td>dense_2 (Dense)</td>
<td>(None, 256)</td>
<td>131328</td>
</tr>
<tr>
<td>dropout_6 (Dropout)</td>
<td>(None, 256)</td>
<td>0</td>
</tr>
<tr>
<td>dense_3 (Dense)</td>
<td>(None, 128)</td>
<td>32896</td>
</tr>
<tr>
<td>dropout_7 (Dropout)</td>
<td>(None, 128)</td>
<td>0</td>
</tr>
<tr>
<td>dense_4 (Dense)</td>
<td>(None, 7)</td>
<td>903</td>
</tr>
</tbody>
</table>
Total params: 5,985,863
Trainable params: 5,982,151
Non-trainable params: 3,712
Simple Classifier - Results
Baseline:
Accuracy of the network on the 3589 test images: 65.09%
Accuracy of Angry: 53% of 262 / 491 total
Accuracy of Disgust: 60% of 33 / 55 total
Accuracy of Fear: 46% of 244 / 528 total
Accuracy of Happy: 85% of 750 / 879 total
Accuracy of Sad: 44% of 262 / 594 total
Accuracy of Surprise: 78% of 327 / 416 total
Accuracy of Neutral: 73% of 458 / 626 total
Baseline + Synthetic Data:
Accuracy of the network on the 3589 test images: 66.26%
Accuracy of Angry: 57% of 282 / 491 total +4%
Accuracy of Disgust: 65% of 36 / 55 total +5%
Accuracy of Fear: 51% of 271 / 528 total +5%
Accuracy of Happy: 87% of 767 / 879 total +2%
Accuracy of Sad: 45% of 271 / 594 total +1%
Accuracy of Surprise: 78% of 328 / 416 total -0%
Accuracy of Neutral: 67% of 423 / 626 total -6%(*)
* - As you can see, accuracy on the Neutral class drops; one remedy is to generate additional Neutral-class data, as discussed in the Future Work section.
SOTA Classifier (VGG19) - Results
Can we achieve state of the art?!
The Fake GAN
So how can we supply more data with the same distribution? The answer: we will create it.
Fake GAN using DCGAN, WGAN-GP
Generator - Architecture
The generator is designed to map the latent space vector \( Z \) to data-space. Since data are images, converting \( Z \) to data-space means ultimately creating an image with the same size as the training images (i.e. 1x48x48). In practice, this is accomplished through a series of strided two dimensional convolutional transpose layers, each paired with a 2d batch norm layer and a ReLU activation.
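A minimal sketch of such a generator in PyTorch. Since 48 is not a power of two, we assume a 3x3 starting map that is doubled four times; the latent size and channel widths are our guesses, not the project's actual values.
```
import torch.nn as nn

nz = 100  # assumed latent dimension; z has shape (N, nz, 1, 1)
netG = nn.Sequential(
    nn.ConvTranspose2d(nz, 256, 3, 1, 0, bias=False),   # 1 -> 3
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 3 -> 6
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 6 -> 12
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),    # 12 -> 24
    nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 1, 4, 2, 1, bias=False),     # 24 -> 48
    nn.Tanh(),                                          # output in [-1, 1], shape (N, 1, 48, 48)
)
```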
Discriminator - Architecture
The Discriminator is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake).
The Discriminator takes a 1x48x48 input image, processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layers, and outputs the final probability through a Sigmoid activation function.
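A matching discriminator sketch under the same assumptions (layer widths are ours; only the Conv2d/BatchNorm2d/LeakyReLU/Sigmoid structure is taken from the description above):
```
import torch.nn as nn

netD = nn.Sequential(
    nn.Conv2d(1, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),  # 48 -> 24
    nn.Conv2d(64, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),                    # 24 -> 12
    nn.Conv2d(128, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),                    # 12 -> 6
    nn.Conv2d(256, 512, 4, 2, 1, bias=False),
    nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),                    # 6 -> 3
    nn.Conv2d(512, 1, 3, 1, 0, bias=False), nn.Sigmoid(),            # 3 -> 1x1 probability
    nn.Flatten(),                                                    # output shape (N, 1)
)
```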
Overview
(Diagram: Random Noise $z \sim \mathcal{N}(0,1)$ → Generator → Generated Faces; the Discriminator receives Real Faces and Generated Faces and decides Real vs. Fake.)
Overview
Generate image from noise → find faces (OpenCV face detector) → filter Neutral faces
Neutral faces → Sad, Surprised, Happy, Angry, Fear
Future work
- Further work with generated data:
- Analyse the distribution
- Analyse the similarity of generated and original images using SSIM (see the sketch after this list)
- Can we improve the state-of-the-art results?
- Generation of the Neutral class for FER2013, using the Fake GAN
- Improving the Fake GAN by using WGAN-GP
- Putting it all together:
- Use the Fake GAN as part of the CycleGAN architecture
- Analyse the differences between CycleGAN, improved CycleGAN and Wasserstein CycleGAN
- Test performance on generated data while training on original data, and vice versa
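As a pointer for the similarity-analysis item above, SSIM is available in scikit-image; a toy sketch, where the random arrays merely stand in for a real source/generated image pair:
```
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Placeholders for a 48x48 grayscale source image and its generated
# counterpart, both scaled to [0, 1].
original  = np.random.rand(48, 48)
generated = np.random.rand(48, 48)

score = ssim(original, generated, data_range=1.0)
print(f"SSIM = {score:.3f}")  # 1.0 would mean identical images
```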
The END
Selected Thesis Abstracts
The following Abstracts should provide you with an indication of the sort of work I like to supervise, in terms of topic as well as method. In each case the full thesis citation is shown, and the thesis can be downloaded from the link provided.
You should also carefully read the material provided by the University regarding enrolment for graduate studies – either at Master’s or PhD level – including the important information regarding Scholarships for study:
- Master’s: http://www.otago.ac.nz/study/masters/index.html
- PhD: http://www.otago.ac.nz/study/phd/otago009275.html
The Scholarships are highly competitive, but if you are a student with very good grades you will have a good chance of securing financial support for your studies.
Before you contact me you should read through all the information on my website so that you are sure that we will work well together. If you have not read this material I am unlikely to reply. You might even read one or two of my papers to get a feel for the research that I conduct! While many are available online I’d be happy to email them through to you if you indicate those of particular interest. Note also that, as part of the admission process, I will ask you to prepare a draft research proposal, so that I can assess the extent of your commitment as well as your research interests.
I look forward to hearing from you!
Professor Stephen MacDonell
This thesis examines the adoption of mentoring practice across the Information Systems (IS) project management process in the context of project success improvement. The purpose of this research is to propose a model expanding on prevailing theories and research by explaining the nature and effects of mentoring practice adoption in IS project management (Gregor, 2006). This study adopted a two-part multiple-method research approach. As little was known about the nature and characteristics of IS project management mentoring in practice, an exploratory survey was conducted to assess the landscape of mentoring practice adoption. Practising IS project managers who were staff of multinational companies (MNCs) based in Malaysia were asked to relate their mentoring adoption experiences and perceptions as mentees across the IS project management process. Forty-six IS practising project managers participated in the initial web-based survey. Subsequently, in-depth one-to-one interviews were conducted using open-ended and semi-structured questions. To this end, McCracken’s (1988) long-interview technique was used to draw out the experiences and perceptions of interviewees in narrative form. Narratives were collected from twenty-one IS project managers who were a subset of the initial group surveyed. The collected interview narratives were analysed using the iterative and constant comparison analysis technique of Miles and Huberman (1994).
Drawing on a combination of theoretical frameworks, including Kolb’s theory of experiential learning (D. Kolb, 1984; D. A. Kolb et al., 1999), social exchange theory and communitarian theory (Gibb, 1999), the three models of mentoring (apprenticeship, competence and reflective models) (Maynard & Furlong, 1993), and the mentoring model of Anderson and Shannon (1995), this research has made significant contributions. This research not only contributes to the IS literature but also to IS project management practice and policy.
The espoused theory of IS project management mentoring provides a better understanding of the requirements, nature, and extent of the role of mentoring in effective IS project management. In this regard, four key findings emerged from this study. Firstly, mentoring support was affirmed as an effective mechanism for project success improvement and problem-solving enhancement. Secondly, mentoring support nurtures IS project managers. Thirdly, learning is a key and effective outcome under IS project management mentoring; mentoring as a learning platform was efficacious. Fourthly, human capital can be fostered and social capital enriched through mentoring adoption. The study found that participating IS project managers were provided with tactical support towards project success over the duration of the project and, over the long term, their competencies were perceived as being enhanced. IS project management mentoring therefore promotes the advancement and maturation of IS project managers’ competencies.
As for key contributions to IS project management practice and policy, this research underscores the efficacy of mentoring adoption in soft-skill development, strategic overviews and the development of key deliverables, and the improvement of key project processes related to scheduling, staffing and costing. This research also brings knowledge of key impediments to mentoring practice adoption, which can serve as early warning signals. Last but not least, IS project management competencies can be strengthened by advocating purposeful adoption of IS project management mentoring practice and/or by institutionalizing the role of IS project mentors in IS projects. This pragmatic advice to practice and policy may lead to a greater realization of the human capital investment in IS project managers.
Context: Systematic literature review (SLR) is a methodology used to aggregate all relevant evidence of a specific research question. One of the activities associated with the SLR process is the selection of primary studies. The process used to select primary studies can be arduous, particularly when the researcher faces large volumes of primary studies. Another activity associated with an SLR is the presentation of results of the primary studies that meet the SLR purpose. The results are generally summarized in tables, and one alternative for reducing the time taken to understand the data is the use of graphic representations. Systematic mapping (SM) is a more open form of SLR used to build a classification and categorization scheme of a field of interest. The categorization and classification activities in SM are not trivial tasks, since they require manual effort and domain knowledge from reviewers to achieve adequate results. Although clearly crucial, both SLR and SM processes are time-consuming and most activities are manually conducted. Objective: The aim of this research is to use Visual Text Mining (VTM) to support different activities of SLR and SM processes, e.g., support the selection of primary studies, the presentation of results of an SLR and the categorization and classification of an SM. Method: Extensions to the SLR and SM processes based on VTM were proposed. A series of case studies were conducted to demonstrate the usefulness of the VTM techniques in the selection, review, presentation of results and categorization context. Results: The findings showed that the application of VTM is promising in terms of providing positive support to the study selection activity and that visual representations of SLR data have led to a reduction in the time taken for their analysis, with no loss of data comprehensibility. The application of VTM is relevant also in the context of SM. Conclusions: VTM techniques can be successfully employed to assist the SLR and SM processes.
Computer-based information systems (IS) play an increasingly pervasive and important role in contemporary organisations. Despite decades of continuing research and the development of an extensive prescriptive literature, IS development projects continue to be problematic, with many failing or being seriously challenged. In addition, the IS development environment has changed significantly in recent years, with rapid advances or shifts in technology, increasing devolution of IS responsibility and expenditure to user groups, high levels of packaged software acquisition and customisation, greater outsourcing of IS development, and an increasing emphasis on enterprise-wide and inter-organisational IS. In many cases these changes are interrelated and involve more flexible, ad hoc or non-traditional development approaches. Combined with the fact that at the same time IS have become increasingly sophisticated and integrated, the potential for unpredictable or unintended consequences has also increased.
Together, the continued problematic nature of many IS projects and the changing IS development environment, suggest that there is an ongoing need for a fuller understanding of IS development processes and practices. Given the limitations of factor-based, prescriptive studies, an understanding of how contemporary IS development is enacted needs to be grounded in and built upon the cumulative body of research that attempts to understand the complexity and dynamic nature of IS development. Accordingly, this study uses a conceptualisation of IS development as a process in which an IS emerges from a dynamic and interactive relationship between the technology, its social and organisational context, and the negotiated actions of various individuals and groups. The thesis presents the results of an extensive empirical investigation into contemporary IS development practices based on data collected from New Zealand. The study uses a range of research methods and ultimately develops a sociotechnical process model of IS development as situated action.
Following Walsham’s (1993) emphasis on the content, context and process of IS-related organisational change, the methods used in this study are three-fold. First, an extensive literature review is undertaken to provide a comprehensive synthesis of contemporary empirical knowledge about the content of IS development. Second, a survey is used to collect contextual data about IS development and acquisition practices in New Zealand. Finally, these both support an in-depth longitudinal case study of the IS development process in an organisational setting.
The literature review synthesises the results of recent empirical studies of the various influences that shape IS development, using a classificatory framework based around actors, project content, IS development processes, and context. The review shows that, while a number of traditional factors influencing IS development continue to be relevant, other factors have emerged as important as a result of changes to the IS development environment and to IS development practice. In particular, increasing recognition within the IS literature has been given to the relative importance of people and process and of the organisational and environmental context in which IS development takes place.
The results of the literature review inform the design of a survey instrument intended to provide an updated assessment of IS development and acquisition practices in New Zealand organisations. A Web-based survey was administered to a sample of senior IS managers in 460 public and private sector organisations with 200 or more FTEs. Based on the 106 usable responses, the results of the survey confirm the ongoing relevance of a number of traditional factors identified in the IS literature as facilitating or inhibiting IS development. However, a number of factors were identified as emerging or increasing in relevance in light of changes in the IS development environment.
While the survey provides a useful description of contemporary IS development and acquisition practice in New Zealand, it does not enable a detailed understanding of IS development in action. To address this, an IS project in a large New Zealand organisation was followed in action for over two years. The project involved the development of a sophisticated financial database model using a
purchased commercial software package and external consultants. As such, it provides a useful exemplar of development in a contemporary IS environment. The case study illustrates how a seemingly small, well-defined project experienced delays and difficulties as might be expected in larger, more complex projects. It offers insights into the significance of external actors, the importance of full stakeholder participation, the influence of initial characterisations of the nature of the project, and the observance of project management processes. Consideration of the project outcome reveals its multi-dimensional, subjective and temporal nature.
A process approach (Markus & Robey, 1988) is employed to structure the analysis of the case study. A combination of temporal bracketing, narrative analysis and visual representation is used to analyse the sequence of social action and organisational processes involved in the project and to develop a process explanation of how and why the particular project outcome in this case study developed over time. Underpinning and informing this analysis is the construction and utilisation of a model of IS development as a situated, sociotechnical process. Drawing on theoretical concepts from structuration theory and the sociology of technology, the model considers the situated actions and practices of various individuals and groups involved in IS development, the ways in which these are enacted within different contextual elements, and the role of existing and new technological artifacts in this process. IS development is characterised as iterative and emergent, with change occurring dynamically from a trajectory of situated interactions (in which meanings and actions are negotiated) and intended and unintended consequences.
As a whole, this PhD highlights the changing nature of the IS development environment and the way a complex ensemble of ‘factors’ interact to influence IS project outcomes. Common themes emerge around the importance of people and process, and the context in which IS development takes place, while at the same time explicitly including a consideration of technology in the analysis.
This thesis investigates the role of technology-use mediation in supporting the work of global virtual teams.
The work is set in the context of a longer term action research programme into collaborative computing and global virtual teams, initiated by Auckland University of Technology in New Zealand and Uppsala University in Sweden. Over the period since 1998, global virtual collaborations involving teams of students from both universities have been conducted annually. This thesis investigates the 2004 collaboration cycle, in which participants from St Louis University Missouri joined the collaboration. This was the first triadic collaboration, and covered Northern, Southern and Western aspects of the globe while traversing three widely divergent time-zones.
In spite of the extensive experience in collaboration possessed by the coordinators at all three sites, the results of the global virtual trial were at best mixed. This repeated experience of dissatisfaction in our global virtual collaborations, in spite of the technology being in place, has been a primary motivator for this work. Why is global virtual collaboration difficult? What roles and activities are critical? How can we do it better? These are not issues solely to do with the student actors in the global virtual teams, but more to do with the supporting cast, engaged in “activities which involve the shaping of other users activities of [technology] use” (Orlikowski et al., 1995, p.425). Thus came about my interest in exploring the topic of technology-use mediation.
This thesis applies a research framework adapted from DeSanctis & Poole’s “Adaptive Structuration Theory” (1994) by the author. Initially applied to “facilitation” in virtual teams “Extended Adaptive Structuration Theory (EAST)” (Clear, 1999a), has undergone further development. The resulting research framework “Technology-use Mediated AST (TUMAST)” is applied here for the first time to investigate technology-use mediation activities performed during the global virtual collaborative trial.
A corpus of data based on the email communications of supporting parties to the collaboration is analysed in depth in this study, applying a combination of grounded theoretic and structurational techniques. Thus a very rich and firmly grounded picture of the processes of technology-use mediation is built. This thesis represents the first known in-depth longitudinal study of technology-use mediation in a real global virtual team setting.
From this exploratory study some novel theorizations have resulted. Methodologically it demonstrates analysis of technology-use mediation applying the TUMAST framework in a manner that captures the richness and evolution over time of these complex activities. Substantively it proposes a novel theory of “Collaborative Technology Fit (CTF)”. It is hoped that future global virtual team coordinators and researchers may apply the theory in order to map their situation, and diagnose their degree of collaborative alignment on multiple dimensions, thus enabling corrective actions to be taken. While the work arises in a tertiary education context, it reflects the reality of professionals at work in a global virtual team. Its application within other domains remains to be proven, but readings from the literature, and personal experience within global virtual software development teams suggest its wider applicability.
In contemporary software development, frequent user engagement throughout the development process is commonly viewed as good practice, leading to increased development productivity and user satisfaction with the product. Contemporary Agile software development methodologies, as adopted by many practitioners, promote such frequent and timely involvement of users. The quality of this user involvement may be variable in its contribution to the software project, however. While there are many factors that may influence the quality of the user involvement, in this thesis it is posited that the degree of alignment or misalignment of expectation of user involvement among the development and user groups is an important factor in the quality of user involvement. There is little documented research in this alignment of expectations of user involvement, and an investigation of this in practice is the basis of this thesis. Furthermore there is no consistent meaning to the notion of “quality of user involvement” in related literature and so another aim of the thesis is to get a deeper understanding of the meaning of “quality” in user involvement.
The scope of the investigation presented in this thesis is to develop an instrument for comparing expectations and use this to identify patterns of alignment in expectations by role. The linking of alignment of expectations to project success is beyond the scope of this investigation. A framework for conceptualising “quality” in the context of user involvement, as well as characterising significant barriers and enablers to high quality user involvement, are also goals of this research.
The approach to the investigation is based on a systematic, role-by-role comparison of user involvement expectations using Repertory Grid techniques. The standardised grids of constructs and elements related to user involvement were constructed from a synthesis of relevant current literature. Three grids were used for comparison of expectations by role: the nature of the involvement, the modes of communication associated with the involvement, and the characteristics of the user that influenced user involvement. In addition to the Repertory Grid data, interview data were gathered from participants through a series of semi-structured interviews. Thematic analysis of the interviews was used to develop conceptual frameworks related to the notion of “high quality user involvement” as well as to identify the significant enablers and barriers to high quality user involvement.
Two Agile software development organizations based in New Zealand were the case organizations for this study. A total of nineteen interviews were conducted with technical roles including Developer, Project Manager, Business Analyst, Tester and user roles such as the Product Owner and Subject Matter Expert. The results show no strong misalignment of expectations of user involvement across both the user roles and development roles for the case organizations studied, and some interesting patterns were uncovered. Some significant barriers and enablers of high quality user involvement are also identified, providing some useful insights to guide the design of future techniques and tools to support high quality user involvement.
Software comprehension is a complex and ongoing challenge facing the software development industry. The often immense number of interrelated components in contemporary software systems places a high cognitive load on software stakeholders, whose job requires deep understanding and awareness of those constituting components. Among many approaches, 3D visualisation of the software static structure has recently emerged as a promising approach that is increasingly being demonstrated to significantly help in alleviating that cognitive burden by exploiting and leveraging humans’ natural perceptual abilities.
Furthermore, in addition to easing comprehension and increasing awareness of constituting software artefacts, this technology has the potential to bring visible various important aspects of the software process that could potentially make this technology a valuable tool for a wider spectrum of software practitioners. Recent literature, however, shows that the majority of prior research has limited itself to visualising the software product and in the best cases, only highlighted some effects of the software process.
This thesis identifies and attends to this gap in software visualisation research by introducing a novel visualisation approach named Conceptual Visualisation. It asserts that visualising the software process not only has several potentially beneficial implications for the software industry, but that, from a cognitive perspective, visualising these processes in the context of the software structure is particularly suitable and significant to increase human awareness and understanding of both the processes and their implemented product artefacts. The proposed approach is designed and constructed following a systems development research methodology and adhering to the principles of sound design science research. It is then assessed via functional demonstration, being applied to six open source systems of varying size and complexity. Conceptual Visualisation is shown to make a novel contribution to the software visualisation research literature, addressing many prior stated requirements in doing so. Once developed beyond a proof of concept, its use in practice should bring multiple benefits to a range of software stakeholders.
Information technology, information systems and applications have become key survival factors in modern organisations. Technology can transform and redefine organisations and the way they operate. As the power of information technology (IT) intensifies and organisations work to exploit the full capacity of IT, the role of IT management in those organisations becomes more and more important. This importance is reinforced by the increasing pervasiveness of IT along with highly competitive operating contexts. As IT initiatives are generally implemented via projects, the management of IT projects has come under increasing scrutiny. IT projects continue to fail; as a result, while research in IT project management has grown, many challenges for research and practice remain.
There have been many studies of the IT project management context; however, very few have considered project initiation decisions. The primary intent of this research is therefore to investigate IT projects particularly in terms of their initiation. As these projects originate from decisions made by management, it is important to understand the drivers of these decisions. Therefore, the objective of this research is to explore the influencing factors in IT project decisions during their initiation.
A combination of semi-structured interviews and the repertory grid data collection and analysis method was employed to investigate the motivating factors and primary drivers that influence individual IT Managers’ project initiation decisions. Eighteen participants representing six medium and large organisations were interviewed. A total of forty-nine IT projects were identified by these eighteen managers. A rich data set was collected and in-depth analysis was conducted. The results showed that there are multiple underlying reasons for the decisions made at this early stage and that there are some common patterns of decision drivers among the interviewed IT Managers. For instance, most projects are still motivated by a desire to achieve efficiencies or cost savings, their potential tends to be assessed using cost benefit analysis, and packaged software solutions along with consultancy services are widely employed in solution development and delivery.
Drawing on the results as well as prior research, an ‘IT Project Pre-Initiation Decision Framework’ is proposed to assist IT Managers and others in their evaluation of rationales during the preliminary project initiation decision-making process. This multidimensional matrix evaluative framework is intended to assist IT Managers in ensuring the cogency of rationales with an ability to make objective appraisals. In order to justify the proposed approach as a universal evaluative framework, it is necessary to conduct in-depth and longitudinal case studies from different perspectives. Future research is also needed in identifying to what extent project initiation decision(s) might lead to successful or unsuccessful project outcomes.
MCIS (Hons.II(i)), Self-organising Maps (SOMs) in Software Project Management, AUT, L. Dai, 2011 (Role: Joint Supervisor. Co-supervisor: J. Buchan, School of Computing and Mathematical Sciences) http://hdl.handle.net/10292/4516
Although numerous researchers have devoted much time and effort to the issue, generating a reliable and accurate cost estimate at an early stage of the development life cycle remains a challenge to software engineers. In recent years an increasing number of studies have turned their attention to the employment of machine learning, especially Artificial Neural Networks (ANNs), in performing such estimation activities. A Self-Organising Map (SOM) is a particular type of ANN that utilises a neighbourhood function that can be used as an unsupervised clustering tool. Its ability to project multi-dimensional data into a two-dimensional map makes the SOM appealing to software engineers.
In addition, the vague and ambiguous nature of real world software data demands techniques that can handle fuzziness. Accordingly, researchers have introduced fuzzy logic approaches such as fuzzy sets, fuzzy rules, fuzzy inference and the associated fuzzy clustering techniques into the original area of neural networks. Following a thorough literature review, it was decided that Self-Organising Maps could be an appropriate candidate for estimation in software project management. In order to investigate our hypothesis we build predictive models using Self-Organising Maps and compare them with Linear Regression models. The Fuzzy C-means algorithm is utilized in our study to pre-process ambiguous and vague real world data, which also refines the clustering outcome.
This study presents and analyses the results of three case studies that use data sets from different software projects. The findings indicate that Self-Organising Maps surpass Linear Regression in all three cases (even when noise was introduced), both in generating more accurate estimates and in presenting easier-to-understand relationships among the project features. Alternative approaches and extensions are suggested in order to overcome the limitations of the study. Other recommended future study areas include, but are not limited to, exploring alternative approaches to forming Fuzzy Self-Organising Maps (FSOMs), adopting new versions of the Fuzzy C-means algorithm, and investigating further the sensitivity of SOMs and FSOMs.
The software industry has been plagued by the staggering failure rate of projects, which have resulted in the loss of billions of dollars. The well-known Chaos Report by the Standish Group declared that software projects are in chaos, with only 16.2% of software projects actually being successful in the year 1994, and a more recent study by them suggests that 32% of the projects were successful in the year 2009 (Eveleens and Verhoef, 2010; Dominguez, 2009; Bishop, 2009).
The post-mortem examination of failed software development projects reveals that failures do not happen overnight and that, long before the failure, the projects render significant symptoms or “early warning signs” of trouble (Kappelman, McKeeman and Zhang, 2006). A warning sign is an indication or an event that predicts or alerts to impending problems. Early warning signs provide an indication of manifesting risks. This research mainly focuses on a new and innovative concept known as early warning signs, which could be incorporated into ongoing project risk management to improve project success rates by addressing early warning signs encountered during the project. Project risk management theories are not closely integrated with the early warning phenomenon, but it can apparently be utilised as a tool in project risk management (Nikander, 2002).
The study utilises the System Development Research Methodology. The models simulating a typical project environment were designed using a simulation tool known as SimSE. For the evaluation of the models, two experimental techniques, namely “Individual EWS Testing” and “Controlled Experimental Study”, were used. Findings of the research suggest that the implementation of the early warning phenomenon has positive effects on project outcomes. There is also a positive impact on project outcomes if corrective actions are taken early. The concept of early warning signs looks promising; this study is one step in this direction and has introduced this new concept to the research arena.
MCIS (Hons.I), User Directed Search Based Reverse Engineering, AUT, F. Schmidt, 2010 (Role: Joint Supervisor. Co-supervisor: Dr A. Connor, School of Computing and Mathematical Sciences) http://hdl.handle.net/10292/836
The current research represents the planning, design, implementation and evaluation of a user-directed software clustering approach that utilizes Search Based Software Engineering (SBSE). The aim of this research is to examine whether a user-directed software clustering approach contributes to the quality of software clustering. Because of its explorative and constructive character, this research project utilises the System Development Research Methodology.
This research is enabled by the implementation of the Search Based Reverse Engineering (SBRE) component. The SBRE component features multiple similarity measurements and the inclusion of user constraints in the clustering process to create different implementation perspectives of the software system, depending on the requirements and preferences of the stakeholders. These similarity measurements are based on software metrics, which measure different software attributes. The SBRE component utilizes greedy and tabu search algorithms for the identification of the cluster landscape of the analyzed software systems.
The evaluation showed that a user-controlled SBSE clustering approach is able to adapt to different user configurations and derive corresponding cluster landscapes from software systems. Different measures are introduced to control the clustering process, and it has been shown how these measures contribute to the quality of the clustering. It is demonstrated that tabu search is applicable in the field of software clustering. Finally, it has been shown that a multiple-metric approach allows the clustering process to be adapted to the requirements of the stakeholders and the design of the software system, optimizing the clustering result.
MPhil (Hons.I), The Value and Validity of Software Effort Estimation Models built from a Multiple Organization Data Set, AUT, K. Deng, 2009 (Role: Primary Supervisor) http://hdl.handle.net/10292/473
The objective of this research is to empirically assess the value and validity of a multi-organization data set in the building of prediction models for several ‘local’ software organizations; that is, smaller organizations that might have a few project records but that are interested in improving their ability to accurately predict software project effort. Evidence to date in the research literature is mixed, due not to problems with the underlying research ideas but with limitations in the analytical processes employed:
- the majority of previous studies have used only a single organization as the ‘local’ sample, introducing the potential for bias
- the degree to which the conclusions of these studies might apply more generally cannot be determined because of a lack of transparency in the data analysis processes used.
It is the aim of this research to provide a more robust and visible test of the utility of the largest multi-organization data set currently available – that from the ISBSG – in terms of enabling smaller-scale organizations to build relevant and accurate models for project-level effort prediction.
Stepwise regression is employed to enable the construction of ‘local’, ‘global’ and ‘refined global’ models of effort that are then validated against actual project data from eight organizations. The results indicate that local data, that is, data collected for a single organization, is almost always more effective as a basis for the construction of a predictive model than data sourced from a global repository. That said, the accuracy of the models produced from the global data set, while worse than that achieved with local data, may be sufficiently accurate in the absence of reliable local data – an issue that could be investigated in future research.
The study concludes with recommendations for both software engineering practice – in setting out a more dynamic scenario for the management of software development – and research – in terms of implications for the collection and analysis of software engineering data.
In contemporary software development, an emergent understanding of the problem domain and envisioned goals forms the basis of designing, testing and development activities. Lack of a common understanding of the domain can result in costly rework or client dissatisfaction. Research shows that the development of shared understanding in this context is a complex and error-prone process and there is room for improvement. Is this because practitioners are not following suggested practice from literature? Or are the actual barriers to shared understanding not being addressed by current tools and techniques? Is the development of shared domain understanding even viewed as problematic (or even important) by practitioners? These are some questions that need to be investigated in order to effectively design process improvements and tool support in this area, yet there is little information related to this.
This study takes a multi-case study approach, incorporating semi-structured interviews with representatives from ten small-to-medium organisations. This study focuses on the vendor’s perspective and includes a mix of application domains. Results of the interviews are analysed to discover themes and patterns related to an analysis framework constructed from the literature review.
The findings indicate that vendors perceive the process of developing shared application-domain understanding with their clients as both problematic and important to a successful implementation. Twelve barriers are identified from the analysis. The results also confirm that the development of shared understanding is generally perceived as evolutionary and collaborative; most interviewees describe it as comprising iterative phases of elicitation, confirmation and refinement of the understanding. A definite preference for face-to-face interaction at regular times throughout development, particularly in the early stages, is evident, although the importance of ad-hoc communication by phone or email, as domain knowledge needs arise, is also emphasised. Access to a cooperative domain expert throughout development is generally seen as a critical success factor, and several companies report using in-house domain experts as client "proxies" in this regard. Attitudes towards direct communication between developers and client stakeholders are mixed, ranging from insisting that developers be involved from initial elicitation and "kick-off" meetings to "shielding" developers almost entirely from the client. In terms of representations of understanding, participants cite natural language, screen-shots, mock-ups, prototypes and product demonstrations as the most useful artefacts for sharing and confirming understanding of the problem domain, and they emphasise the importance of flexibility and of client familiarity with the representations. In general, no clear separation between problem and solution spaces is evident when interviewees discuss representations of understanding, and the preference seems to be for concrete rather than abstract representations.
In conclusion, comparisons between the findings and literature generally confirm contemporary thinking regarding domain knowledge sharing, although a number of barriers were given particular emphasis in this field study. The use of computer-based tool support is not widespread and the need to improve the domain knowledge sharing process and tool support in practice is widely acknowledged by the participants in this investigation. This study has identified some fruitful areas of research in this regard.
Software engineering techniques have been employed for many years to guide software product creation. In the last decade the appropriateness of many techniques has been questioned, given unacceptably high rates of software project failure. In light of this, there have emerged a new set of agile software development methodologies aimed at reducing software projects risks, on the basis that this will improve the likelihood of achieving software project success. Recent studies show that agile methods have been gaining increasing industry attention. However, while the practices recommended by agile methodologies are said to reduce risks, there exists little evidence to verify this position. In addition, it is posited that the very processes recommended by agile methodologies may themselves introduce other risks.
Consequently, this study addresses the risks inherent in the human collaboration practices that are central to agile methods. An analysis of the risk management literature reveals that personality conflicts and customer-developer disagreements are social risks that occur through human collaboration. These risks negatively affect team cohesion and software project success. Personality conflicts are said to be mostly influenced through poor team formation, whereas customer-developer disagreements are induced through excessive customer direct interaction. However, these risks are not adequately addressed by standard risk management theories. Furthermore, an evaluation reveals that these risks are also not considered by existing software tools.
This study therefore designs and implements a web-based solution to lessen the social risks that may arise in agile projects. The Agile Social-Risk Mitigation Tool (ASRMT) offers support for personnel capability assessment and management and for remote customer feature management, extending the customer’s access through an interface. Using software engineering experts to evaluate ASRMT, the tool is shown to effectively address social risk management theories, and is considered likely to assist agile developers in their handling of social risks. In addition, above and beyond its intended purpose, ASRMT is also likely to assist agile teams with general project management. The findings of the ASRMT user evaluations demonstrate sufficient proof of concept to suggest that such a tool could have value in live software projects.
This thesis investigates project risk management issues in smaller software teams. Certain gaps in the literature are identified. There is limited literature on what risk management techniques software practitioners use. The studies that are published tend to focus on large software teams. This thesis investigates what risks these smaller teams consider to be important. It also investigates what techniques are perceived to address these risks and how effective those techniques are considered to be. One of those risks is found to be of primary importance, yet this risk is not suggested by the project management literature. This thesis goes on to conduct a more in-depth exploration of that specific risk in the context of these smaller teams.
Interviews were selected as the most appropriate method to achieve the objectives of the thesis. Nineteen interviews in eight software organisations were conducted to collect data. Three different perspectives on project risk were investigated: those of service managers, project managers and developers. Hence a large store of rich information was collated. The results are analysed and presented in this thesis.
As a result of this research it is suggested that smaller software teams may find it useful to consider the 16 risks discussed in this research and how applicable those risks are to their individual organisation. Service managers may need to do more to raise the awareness of the importance of risks associated with ‘customer relationship issues’ within their own organisations.
Three risks stood out as areas where future research might be most fruitful: customer relationship issues, introduction of new technology, and unrealistic schedules and budgets. Risks related to customer relationship issues were of particular significance and have tended to be overlooked in the project management literature. It is submitted that research into standard project risk management approaches may need to be combined with business risk management approaches to gain a full understanding of the risks faced and addressed by these smaller teams.
Rally Software Development Corporation
Whitepaper
Mastering the Iteration: An Agile White Paper
Dean Leffingwell
Abstract:
The heartbeat of Agile development is the iteration – the ability of the team to create working, tested, value-delivered code in a short time box – with the goal of producing an increment of potentially shippable code at the end of each iteration. This is a significant challenge for the team, and mastering this skill takes guidance and practice. In this white paper, Dean Leffingwell describes the basic iteration pattern and the activities that a team engages in to meet this key challenge. This paper is an excerpt from Dean’s latest book, Scaling Software Agility: Best Practices for Large Enterprises, Addison-Wesley, 2007.
Table of Contents
- Iteration: The Heartbeat of Agility
- The Standard, Two-Week Iteration?
- Planning and Executing the Iteration
- Iteration Planning
- Iteration Execution
- Iteration Tracking and Adjusting
- Iteration Cadence Calendar
- About the Book
Iteration: The Heartbeat of Agility
The base construct of agile and iterative development is the iteration—the ability of the team to create working, tested, value-delivered code in a short time box—with the goal of producing an increment of potentially shippable code at the end of each iteration. This is a significant challenge for the team, and mastering the process takes some time to accomplish. In this chapter, we describe the basic iteration pattern and the activities that a team engages in to meet this key challenge.
The Standard, Two-Week Iteration?
Before we begin, however, we must first entertain another small debate: What is the optimal length of an iteration? Most people agree that iterations are a fixed, constant length and typically should not vary over the course of a release or a project. But the length of the iteration is an agile variable. From the literature, XP recommends a length of 1 to 4 weeks, Scrum recommends 30-day sprints, and RUP recommends flexible iterations of 2 to 6 weeks.
In practice, however, all but a very few teams we have worked with have come to the same conclusion over time: a week may be too short and 30 days is too long. The conclusion they typically arrive at is to standardize on iterations of 2 weeks in length, and this is our general recommendation.¹
There are many advantages to this approach:
- There is some overhead in planning and closing an iteration; in a 2-week iteration, the overhead is well proportioned to the amount of work that can be accomplished in the period.
- It forces work into byte-sized chunks where the define/build/test cycle has to be concurrent. With longer iterations, there is a tendency for teams to build a more waterfall-like process.
- Two weeks is sufficient time to get some amount of meaningful development done.
- This cycle provides more opportunities to succeed or fail early. For example, with releases of approximately 90 days, 2-week iterations give teams time for five “construction” iterations and one “hardening” iteration at the end (see Chapter 13), so there are a number of intermediate checkpoints on the way to the release.
- The 2-week rhythm is a natural calendar cycle that is easy for all participants to remember. Scheduling is trivial: “If it’s the second Wednesday, I need to be preparing my demo for Friday.” “Demo day” occurs every other week, at exactly the same time and place, allowing key stakeholders to attend.
- The cycle lines up well with typical 1- or 2-week vacation cycles, simplifying capacity estimates for the team.
- Velocity can be measured and scope can be adjusted more quickly.
For these and many other reasons, we recommend that iterations be standardized to 2 weeks. Also, in order to facilitate coordination with other teams and larger releases, we recommend that all teams on a project apply this same iteration time box where possible, although start and stop days may be different to support those managers and team leaders engaged with more than one team.
¹ In my personal experience, only one team operates in 1-week iterations; all the others use a 2-week cycle.
Planning and Executing the Iteration
No matter the length, all iterations have the same pattern, and that is part of the discipline and manufacturing-like routine of agile development. An iteration consists of three phases, as Figure 11–1 illustrates. The first phase is a short planning session (less than a day) during which the iteration backlog is reviewed and prioritized, estimates are established, and the team commits to the work in the iteration. The second is the development phase, when the backlog items are implemented in code and tests. The final phase involves delivery of the new system increment built during the iteration and assessment of the iteration.
Iteration Planning
At the beginning of each iteration, the team holds an iteration planning session during which it reviews the prioritized items in the backlog, selects and reviews the stories for the current iteration, and defines and estimates the tasks necessary to deliver the increment of work. There is a high degree of granularity involved because tasks are typically estimated in “ideal developer days” or even hours. In keeping with the agile practice of just-in-time design, it is during this session that the details of the requirements are discussed and negotiated.
Preparation for the Iteration Planning Meeting
Because the iteration planning meeting is short and time-boxed, the team has to enter the meeting in a prepared state, and all team members have some responsibility to prepare for iteration planning. For example, the development team and the product owners have specific areas of responsibility, which are outlined in Table 11–1.
Table 11–1 Iteration Planning Meeting Responsibilities
<table>
<thead>
<tr>
<th>Product Owner Responsibilities</th>
<th>Development Team Responsibilities</th>
</tr>
</thead>
<tbody>
<tr>
<td>Review the release plan to make sure the vision and goals are still appropriate.</td>
<td>Review the top priority items in the backlog and prepare any questions.</td>
</tr>
<tr>
<td>Review the items in the backlog and reprioritize if necessary. This includes stories that (a) were already there (originally defined in previous release planning sessions); (b) have been added since the last release planning session; (c) failed acceptance in a prior iteration; (d) are generated from defects or bugs.</td>
<td>Consider technical issues, constraints, and dependencies, and be prepared to share these concerns.</td>
</tr>
<tr>
<td>Understand how the reprioritization may affect other teams who are dependent on a deliverable committed to during release planning. Coordinate with other product managers as needed to resolve dependency issues.</td>
<td>Think about the work involved in delivering the functionality in the stories, in order to be prepared to make estimates in the meeting.</td>
</tr>
<tr>
<td>Understand the customer needs and the business value that each story is to deliver.</td>
<td>Understand what your iteration velocity should be for the upcoming iteration, based on team discussions at the last review.</td>
</tr>
<tr>
<td>Be prepared to further elaborate on the details of each story.</td>
<td></td>
</tr>
</tbody>
</table>
**Participants**
Everyone who will directly affect the product outcome should attend the iteration planning meeting. Those who may indirectly affect the product outcome are strongly encouraged to attend as well. Outside stakeholders are also welcome to attend, but once planning begins, they must take on the role of chickens² and refrain from speaking. Required attendees typically include the Agile team leader/Scrum Master; product owner/business analyst; developer; test, QA, and documentation personnel; and architect.
**Iteration Planning Meeting**
[Figure: iteration planning loop: prioritize, negotiate scope, estimate tasks]
The primary concern of iteration planning is to define and accept a reasonable scope for the iteration. Iteration planning begins with revising and refining the list of prioritized work in the backlog. Product owners and the development team may add or reduce features, defects, and other infrastructure work on the basis of the current business situation. Business priority, risk, and rough estimates are assigned to new items or may be revised for existing items. The product owners then rerank the work items and select a scope of work to propose for the iteration.
---
² This metaphor is based on the "chicken and pig" joke discussed in Scrum: a chicken and a pig are discussing opening a restaurant to serve ham and eggs. The pig thinks for a second and says, "Wait a minute, if we do this, you are only involved, but I am committed!"
The development team is given an opportunity to discuss the proposed work with product owners until each item is well enough understood for the development team to prepare a list of engineering tasks and provide detailed estimates. The team then estimates the engineering tasks for each proposed backlog item and presents its estimates to the product owners.
By adding up the development team’s estimates, the team can calculate the apparent scope of the iteration and have an indication of whether the scope can be achieved. However, the final scope of the iteration is the result of a negotiation between product owners and the development team. During this negotiation, product owners may adjust certain backlog items in ways that make them less costly to develop, trade out entire backlog items for others, or ask for adjustments to certain estimates provided by development.
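To make the arithmetic of this capacity check concrete, here is a minimal Python sketch; the function name, the load factor, and the sample numbers are illustrative assumptions, not part of the paper:
```python
def iteration_fits(task_estimate_hours, capacity_hours, load_factor=0.8):
    """Compare summed task estimates with available team capacity.

    load_factor leaves headroom for meetings and interruptions;
    the value 0.8 is an illustrative assumption, not a rule.
    """
    total = sum(task_estimate_hours)
    budget = capacity_hours * load_factor
    return total <= budget, total, budget

# Hypothetical team: 4 developers x 10 working days x 6 ideal hours per day
fits, total, budget = iteration_fits([16, 12, 8, 6, 10], 4 * 10 * 6)
print(fits, total, budget)  # True 52 192.0
```
If the summed estimates exceed the budget, the negotiation described above (trimming items, trading items out, or revisiting estimates) brings the proposed scope back within capacity.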
Mary and Tom Poppendieck [2003] point out the benefits of this form of collaborative iteration planning:
*If you ask a team to choose items from the top of a list that the members believe they can do in a short time-box, the team will probably choose and commit to a reasonable set of features. Once the team members have committed to a set of features that they think they can complete, they will probably figure out how to get those features done within the time-box.*
At the end of the iteration planning meeting, the product owners and development team jointly commit to the iteration plan. Then, it is usually a rule that only the development team can change the scope of the iteration. Product owners must wait until the next iteration to change the direction of the development effort. (In a sea of change, some things must be constant.)
**Result: The Iteration Plan**
The end result of an iteration planning meeting is an iteration plan that contains:
- An iteration theme—a statement of what the iteration is intended to accomplish.
- A prioritized list of stories to work on for the iteration.
- The stories’ estimated tasks and each task’s assignments (task owner).
- A commitment by the team to the objectives of the iteration.
- Documentation of the plan in a visible place or in a widely accessible tool.
**Additional Iteration Planning Guidelines**
In addition, the team should keep the following guidelines in mind during the iteration planning meeting:
- Have the iteration planning meeting on the first day of the iteration, first thing in the morning. The meeting should last no more than 4 hours.
- Create task estimates for each story on the basis of ideal effort hours, or points.
- When estimating, if a story breaks out into seven or more tasks, consider splitting the story.
- Be sure that there is at least one story with a demonstrable function included in the iteration.
- Remember that the product manager owns the story priorities, and the development team owns the tasks and the estimates for those stories.
- When first implementing agile practices, consider setting a “code freeze” a few days prior to the end of the iteration, because the team is likely still practicing waterfall within the iteration.
- Remember that the team's velocity (available resources factored by existing velocity) changes from iteration to iteration.
- Once the iteration is underway, no change requests by the product manager are allowed. Any new or changed stories should go into the backlog.
Iteration Planning with Distributed Teams
When the teams are distributed, it is preferable to bring the entire team together in one location for planning sessions, especially if the team is new to agile software development or the team itself is new. However, doing so often isn’t feasible for teams with staff spread across continents. If your project team has limitations that prevent the team members from gathering together in one location for planning meetings, then attending the release planning meeting (Chapter 12) should be considered the highest priority, followed by the iteration planning meeting.
If using an agile project management tool that holds stories, iterations, and releases, have the team log in. Identify one driver to update the tool as discussions occur, and be sure to limit conversations to one at a time so that everyone can hear what is being said.
If using sticky notes and flip-charts, take pictures and e-mail/post so absent members can see the progress and results. The Agile/Scrum Master should later phone those not in attendance and review the pictures to ensure clear understanding.
Iteration Execution
Having committed to the iteration plan, the team starts development. Each developer (or pair of developers in organizations doing pair programming) will follow the same basic process (see Figure 11–2) repeatedly throughout the iteration until there is no more work in the backlog:
1. **Take responsibility** for an assigned backlog item (e.g., story, use case, defect fix, other).
2. **Develop** (design, code, integrate, and test) the backlog item.
3. **Deliver** the backlog item by integrating it into a system build.
4. **Declare** the backlog item as developed, signaling that it is ready for acceptance testing.
This cycle repeats within an iteration as each developer ultimately takes responsibility for all the backlog items in his or her queue. In most organizations, developers also support the management of the process by estimating actual and remaining effort for the backlog items that they are responsible for.
Take Responsibility
Having committed to the iteration plan, the team is faced with the question of how to allocate that work to the members of the development team. There are two basic approaches: the Agile/Scrum Master can assign work to developers, or the developers can choose the work they will do.
*When things are happening quickly, there is not enough time for information to travel up the chain of command and then come back down as directives.*
—Poppendieck and Poppendieck [2003]
The process of developers taking responsibility for work must be supported by a visible indicator showing who is responsible for what work and by daily status meetings during which status and issues can be discussed.
Develop
Once a developer takes responsibility for a backlog item, he or she then:
- Elaborates requirements (if not already elaborated).
- Designs.
- Writes the test (first in some practices) and writes the code.
- Executes the test suite on the build.
- Integrates program code and tests into a build of the system.
As described earlier, because the story is pliable, these activities happen in parallel, and the objective is to deliver a working story (as it evolves) into the baseline. (Definition affects design, design affects test, test affects design, etc.) During a typical development iteration, the developer cycles through most of these activities many times for each story. The order in which these activities occur is a matter of the developer’s programming practices and the particular situation. The developer continues the activities until his or her tests for the new functionality or defect pass. At that point, the developer can confidently include the new functionality in a build of the system.
**Deliver Story**
The developer delivers the new functionality or defect fix by checking the code into the source control system and including it in a build of the system. The unit test suite and other appropriate tests are run before the code is checked in to ensure that the changes do not break the build.
**Declare Story Completion**
Once the developer has integrated his or her work into a build of the system, the backlog item is declared complete, signaling other members of the development organization that the backlog item is ready. For example, the testing group now knows it can include the new functionality or defect fix in its testing efforts and can start automated tests of various kinds (functional, acceptance, performance, etc.).
**Accepting the Iteration**
The primary mechanism that allows a team to steer toward its release goals is demonstrating working software early and often to product owners, customers, and hopefully to end users. It is a reality of software development that customers' understanding of their requirements for a software system tends to evolve as they see and use the software.
Thus, every iteration is an opportunity for the team to get feedback and guidance from customers about how to make the system the most valuable that it can be to the customer. This feedback is typically structured as a 1-hour demonstration at the end of the iteration. The format for this meeting is as follows:
- Presentation of each story by the responsible party
- Discussion and feedback with stakeholders
- Product owners move story to accepted state or split story to be worked on in the next iteration
The final activity in an iteration is to reflect on and assess the results. The goal of assessment is to mine the lessons learned during the iteration and then adapt the development process accordingly. The assessment allows the team to continually improve the throughput of the development process and the quality of the resulting system.
The other major activity that occurs during assessment is a “closing” process whereby unfinished items are put back into the release backlog as work to be done. The iteration is concluded, and the closed iteration backlog becomes a record of the work completed during the iteration.
Iteration Tracking and Adjusting
Underlying the iteration-based development process is the continuous activity of tracking status and adjusting course. Even within the course of a short iteration, scope must be managed, and deviations from plan will occur. The tracking and adjusting activity is focused on getting an objective, real-time picture of where the software development effort is and whether the team is likely to "land" (complete on time) the iteration in process.
Tracking progress of the current iteration requires having visibility into the status of the stories, defects, and other tasks that are being worked on during the iteration. In particular, it's important to be able to understand how quickly the team is moving through the scheduled work and how accurate its estimates were.
The progress toward the release can be understood by considering the status of the stories, defects, and other tasks across the iterations in the release. Iterative and incremental processes tend to favor a schedule-driven approach, so at the release level, it is most important to understand which chunks of planned work are done and how fast the team is producing work. This information allows the team to deliver the most valuable software on the committed release date by making decisions about what work to do next and what work to defer.
Tracking in Daily Stand-ups
One of the key heartbeats of agile development is the practice of daily stand-up meetings: a daily event at which all team members, while remaining standing, relate their status to the other team members. The standing part helps keep the meetings short; stand-ups should run only 10 to 15 minutes. The stand-up's primary purpose is to quickly share information about the progress being made by each individual in the current iteration.
Participants can be classified as pigs or chickens. The development team consists of pigs: coders, testers, analysts, tech writers, architects, product owners, and the team leader.
Guidelines for Daily Stand-ups
- Have the meeting at the same time every day; the team decides when this should be.
- Have the meeting in the same location every day; this avoids the frustration of having to secure a room, find the room, and communicate room shifts to team members.
- Make sure everyone stands up; sitting promotes problem-solving discussions, which should be held after the stand-up.
- Limit the meeting to a 15-minute time box; stick to 1 to 2 minutes per report.
- Start on time; late-comers can adversely affect your ability to keep the meeting to 15 minutes.
- Participants should include all members of the development team: the folks who have their hands on the keyboards and the folks who understand the details behind the user stories.
Scrum prescribes a standard process whereby each team member reports on:
- What I did yesterday.
- What I am doing today.
- What is getting in my way? (Am I blocked?)
Pay attention to what is related—if the tasks on which individuals are reporting are not part of what was committed to for the iteration, this deviation should raise a red flag.
The stand-up is for the benefit of the team, so members should address each other and not just the team leader. The team leader, however, should pay careful attention to how things are going, make sure the team stays within the time box, and then be prepared to afterwards tackle the roadblocks that are preventing team members from doing their work.
Tracking Iteration Status
Since there are relatively few stories in an iteration, tracking the status of each is a fairly simple matter, and it provides an objective look at progress. Status can be tracked by hanging visual indicators of state information (defined, complete, accepted) on the wall, moving cards from one area of a wall to another, or for larger and distributed teams, using automated tools, as Figure 11–3 illustrates.

Tracking with Burn-Down Charts
Since iterations are fixed in duration, another primary way the team and its managers can gauge progress in the aggregate is to continuously monitor current status and also estimate how much work remains. Computing the estimated remaining work in the iteration at a given point in time requires two pieces of information: the total of the estimates for all backlog items that are not yet started and the estimated remaining effort for any in-progress backlog item. The sum of those two amounts represents the estimated remaining work to be completed during the iteration. Plotting this value each day of the iteration produces what is called a burn-down chart, as the example in Figure 11–4 illustrates.
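As a concrete illustration of this computation, the following minimal Python sketch (the `BacklogItem` fields and the sample values are invented for illustration) computes the estimated remaining work for one day of the iteration:
```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    estimate: float   # ideal hours estimated during iteration planning
    remaining: float  # current estimate of effort still required
    started: bool

def remaining_work(items):
    """Estimated remaining work: the full estimates of unstarted items
    plus the remaining effort of items already in progress."""
    return sum(item.estimate if not item.started else item.remaining
               for item in items)

# Hypothetical snapshot partway through an iteration
backlog = [
    BacklogItem("story A", 16, 0, True),   # finished
    BacklogItem("story B", 12, 5, True),   # in progress
    BacklogItem("story C", 8, 8, False),   # not yet started
]
print(remaining_work(backlog))  # 13: plot this value each day to get the burn-down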
Iteration Cadence Calendar
As part of establishing a rhythm of agile development time boxes, meetings, and checkpoints, teams typically establish a “cadence calendar” at the start of each project. The cadence calendar helps the team set its schedules so that members can set aside time for planning meetings, daily stand-ups, demos, reviews, and retrospectives.
Figure 11–5 provides a high-level outline of a 2-week iteration, N, as well as the major milestones that occur in that iteration. It also provides a picture of what should happen in preparation for iteration N, as illustrated on the left in N–1.
Figure 11–5 Two iterations in a snapshot (reconstructed outline of the figure):
- Release "X" spans a sequence of iterations; the figure shows iteration "N–1" followed by iteration "N".
- During iteration N–1, in preparation for iteration N: verify iteration "N" priorities; confirm resource assumptions for iteration "N"; compile requirements, design, and testing elaborations in preparation for detailed planning and estimating.
- Iteration N, Day 1: iteration planning meeting.
- Iteration N, Days 1–10: design, develop, test, fix, and accept.
Table 11–2 Two-Week Meeting Schedule
<table>
<thead>
<tr>
<th>Week 1</th>
<th>Day 1</th>
<th>Day 2</th>
<th>Day 3</th>
<th>Day 4</th>
<th>Day 5</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Iteration N–1: demo and acceptance<sup>(1)</sup></td>
<td>Stand-up</td>
<td>Stand-up</td>
<td>Stand-up</td>
<td>Stand-up</td>
</tr>
<tr>
<td></td>
<td>Iteration N–1: retrospective<sup>(2)</sup></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Iteration N: planning and story review<sup>(3)</sup></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Iteration N: estimating<sup>(4)</sup></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Iteration N: refactor and commit<sup>(5)</sup></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Agendas
1. 5-min. demo for each story.
- Update acceptance.
- Test automation demo.
2. Grade iteration (metrics)
- What went well?
- What didn’t go well?
- What will we do different next time?
3. Product owner presents stories for next iteration.
4. Teams separate and bid stories; take responsibility.
5. Team reconvenes, assesses estimates and velocity.
- Refactor plan as necessary.
- Commit.
---
Table 11–2 Two-Week Meeting Schedule (continued)
<table>
<thead>
<tr>
<th>Week 2</th>
<th>Day 1</th>
<th>Day 2</th>
<th>Day 3</th>
<th>Day 4</th>
<th>Day 5</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Mid-iteration review<sup>(6)</sup></td>
<td>Stand-up</td>
<td>Stand-up</td>
<td>Stand-up</td>
<td>Stand-up</td>
</tr>
<tr>
<td></td>
<td>Iteration N + 1 planning<sup>(7)</sup></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Agendas
6. Assess each story for the iteration; take any corrective action.
7. Brief meeting led by product owner highlighting plans for next iteration.
About the Book
"Companies have been implementing large agile projects for a number of years, but the ‘stigma’ of ‘agile only works for small projects’ continues to be a frequent barrier for newcomers and a rallying cry for agile critics. What has been missing from the agile literature is a solid, practical book on the specifics of developing large projects in an agile way. Dean Leffingwell’s book *Scaling Software Agility* fills this gap admirably. It offers a practical guide to large project issues such as architecture, requirements development, multi-level release planning, and team organization. Leffingwell’s book is a necessary guide for large projects and large organizations making the transition to agile development."
—Jim Highsmith, director, Agile Practice, Cutter Consortium, author of Agile Project Management
Agile development practices, while still controversial in some circles, offer undeniable benefits: faster time to market, better responsiveness to changing customer requirements, and higher quality. However, agile practices have been defined and recommended primarily to small teams. In *Scaling Software Agility*, Dean Leffingwell describes how agile methods can be applied to enterprise-class development.
- Part I provides an overview of the most common and effective agile methods.
- Part II describes seven best practices of agility that natively scale to the enterprise level.
- Part III describes an additional set of seven organizational capabilities that companies can master to achieve the full benefits of software agility on an enterprise scale.
This book is invaluable to software developers, testers and QA personnel, managers and team leads, as well as to executives of software organizations whose objective is to increase the quality and productivity of the software development process but who are faced with all the challenges of developing software on an enterprise scale.
Copyright Disclaimer:
This chapter is excerpted with permission from the book, "Scaling Software Agility: Best Practices for Large Enterprises," authored by Dean Leffingwell, published as part of the Addison-Wesley Professional Agile Software Development Series, Copyright 2007 Pearson Education, Inc., ISBN 0321458192. For more information, please visit: www.awprofessional.com. It may not be reproduced, in whole or in part, without express permission from the publisher.
1. Show the following in some detail:
(a) \(5n^2 + 3n \log n + 2n + 5 \in \mathcal{O}(n^2)\)
for \(n \geq 2\) we have \(5n^2 + 3n \log n + 2n + 5 \leq (5 + 3 + 2 + 5)n^2\), so take \(c = 15\).
(b) \(20n^3 + 10n \log n + 5 \in \mathcal{O}(n^3)\)
for \(n \geq 1\) we have \(20n^3 + 10n \log n + 5 \leq (20 + 10 + 5)n^3 = 35n^3\), so take \(c = 35\).
(c) \(3 \log n + 2 \in \mathcal{O}(\log n)\)
for \(n \geq 2\) we have \(3 \log n + 2 \leq 5 \log n\) (nb: \(\log 1 = 0\)).
(d) \(2^{n+2} \in \mathcal{O}(2^n)\).
for \(n \geq 1\) we have \(2^{n+2} = 2^n \cdot 2^2 = 4 \cdot 2^n\), so take \(c = 4\).
(e) \(2n + 100 \log n \in \mathcal{O}(n)\).
for \(n \geq 2\) we have \(2n + 100 \log n \leq 102n\), so take \(c = 102\).
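The constants and thresholds chosen above can also be spot-checked numerically. The following short Python sketch is a finite sanity check over a range of \(n\) rather than a proof, and it assumes base-2 logarithms:
```python
import math

# (f, c * g, n0) for each of the bounds (a)-(e) above
bounds = [
    (lambda n: 5*n**2 + 3*n*math.log2(n) + 2*n + 5, lambda n: 15 * n**2,        2),
    (lambda n: 20*n**3 + 10*n*math.log2(n) + 5,     lambda n: 35 * n**3,        1),
    (lambda n: 3*math.log2(n) + 2,                  lambda n: 5 * math.log2(n), 2),
    (lambda n: 2**(n + 2),                          lambda n: 4 * 2**n,         1),
    (lambda n: 2*n + 100*math.log2(n),              lambda n: 102 * n,          2),
]
for f, cg, n0 in bounds:
    # check f(n) <= c * g(n) for every n from n0 up to 999
    assert all(f(n) <= cg(n) for n in range(n0, 1000))
print("all five bounds hold for n0 <= n < 1000")
```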
2. Give a concrete example of \(f_1\) and \(f_2\) such that \(f_1(n) \in \mathcal{O}(g_1(n))\) and \(f_2(n) \in \mathcal{O}(g_2(n))\) but \(f_1(n) - f_2(n) \notin \mathcal{O}(g_1(n) - g_2(n))\).
A possible solution: \(n^2 + n \in \mathcal{O}(n^2)\) and \(n^2 \in \mathcal{O}(n^2)\). However, \(n^2 + n - n^2 = n \notin \mathcal{O}(n^2 - n^2) = \mathcal{O}(0)\).
3. Describe a recursive algorithm for finding the maximum element in an array of \(n\) elements. Analyse the worst-case time complexity of your algorithm.
Solution:
\begin{verbatim}
Algorithm arrayMax(A, n):
if n = 0 then
return error
if n = 1 then
return A[1]
if n > 1 then
m := arrayMax(A, n - 1)
if m > A[n] then
return m
else
return A[n]
\end{verbatim}
A function for the worst-case time complexity of this algorithm can be given via the following recurrence equation:
\[
T(n) = \begin{cases}
1 & \text{if } n = 1 \\
T(n - 1) + 1 & \text{if } n > 1
\end{cases}
\]
We use the substitution method to solve the recurrence equation. For large \(n\):
\[
T(n) = T(n - 1) + 1 = T(n - 2) + 1 + 1 = T(n - 3) + 3 = \ldots = T(n - i) + i
\]
We substitute \(i = n - 1\). Then we have:
\[
T(n) = 1 + n - 1 = n
\]
So the worst-case time complexity of \texttt{arrayMax} is in \(\mathcal{O}(n)\).
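As a sanity check, a direct Python translation of the pseudocode above (using 0-based indexing) might look as follows; the \(n\) recursive calls match the recurrence \(T(n) = T(n-1) + 1\) solved above:
```python
def array_max(a, n):
    """Maximum of a[0..n-1], computed recursively as in arrayMax."""
    if n == 0:
        raise ValueError("maximum of an empty array is undefined")
    if n == 1:
        return a[0]
    m = array_max(a, n - 1)          # maximum of the first n - 1 elements
    return m if m > a[n - 1] else a[n - 1]

assert array_max([3, 1, 4, 1, 5, 9, 2, 6], 8) == 9
```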
4. Do we have \(2^{n+1} \in \mathcal{O}(2^n)\)? Do we have \(2^{2n} \in \mathcal{O}(2^n)\)?
Solution:
We have \(2^{n+1} \in \mathcal{O}(2^n)\). We show: there exists \(c\) and there exists \(n_0\) such that \(2^{n+1} \leq c \cdot 2^n\) for all \(n \geq n_0\). Take \(c = 2\) and \(n_0 = 1\); indeed we have \(2^{n+1} \leq 2 \cdot 2^n\) for all \(n \geq 1\).
We do not have \(2^{2n} \in \mathcal{O}(2^n)\). Suppose \(2^{2n} \in \mathcal{O}(2^n)\). Then there exist \(c\) and \(n_0\) such that \(2^{2n} \leq c \cdot 2^n\) for all \(n \geq n_0\). That means \(2^{2n} = 2^n \cdot 2^n \leq c \cdot 2^n\), so \(2^n \leq c\) for all \(n \geq n_0\). Such a constant \(c\) does not exist.
5. Show the following: if \(f_1(n) \in \mathcal{O}(g_1(n))\) and \(f_2(n) \in \mathcal{O}(g_2(n))\) then \(f_1(n) + f_2(n) \in \mathcal{O}(g_1(n) + g_2(n))\).
Solution:
Assume \(f_1(n) \in \mathcal{O}(g_1(n))\); then \(\exists n_1 \in \mathbb{N}\, \exists c_1 > 0 : n \geq n_1 \Rightarrow f_1(n) \leq c_1 \cdot g_1(n)\). Analogously, assume \(f_2(n) \in \mathcal{O}(g_2(n))\); then \(\exists n_2 \in \mathbb{N}\, \exists c_2 > 0 : n \geq n_2 \Rightarrow f_2(n) \leq c_2 \cdot g_2(n)\). Let \(n^* := \max\{n_1, n_2\}\) and \(c^* := \max\{c_1, c_2\}\). Then we have: if \(n \geq n^*\) then \(f_1(n) \leq c_1 \cdot g_1(n) \leq c^* \cdot g_1(n)\) and \(f_2(n) \leq c_2 \cdot g_2(n) \leq c^* \cdot g_2(n)\). So we have \((f_1 + f_2)(n) \leq c^* \cdot ((g_1 + g_2)(n))\) for \(n \geq n^*\).
6. Depict the max-heap [16, 14, 10, 8, 7, 9, 3, 2, 4, 1] as a tree.
Solution:
```
          16
        /    \
      14      10
     /  \    /  \
    8    7  9    3
   / \  /
  2   4 1
```
7. Give the array representation of the max heap of the following picture:
```
24
23 22
12 21 20 8
11 10 18 16 5
```
Solution:
`[24, 23, 22, 12, 21, 20, 8, 11, 10, 18, 16, 5]`
8. Is the array
`[23, 17, 14, 6, 13, 10, 1, 5, 7, 12]`
a max-heap?
Solution:
No, because of the branch from 6 to 7.
9. Is an array of decreasing numbers always a max-heap?
Solution:
Yes, because for every $i$ such that also $2i$ is an index in the array, we have $A[i] > A[2i]$, and similarly for every $i$ such that also $2i + 1$ is an index in the array, we have $A[i] > A[2i + 1]$.
10. Show that in any subtree of a max-heap, the root of the subtree contains the largest value occurring anywhere in that subtree.
Solution:
Also for the subtree the max-heap property holds. That is, walking from a given node downwards in the tree visualisation of the heap, the keys do not increase. So the keys on the nodes below the root of the subtree are all smaller than or equal to the key on the root of the subtree.
11. Where in a max-heap is the smallest element, assuming that all elements are distinct?
Solution:
Because all elements are different, there is a unique smallest key in the heap, which must be the key of one of the leaves.
12. What are the minimum and maximum numbers of elements in a heap of height $h$?
Solution:
Recall that a heap is an almost complete binary tree.
The minimum number of nodes in a heap of height $h$ is when the last level (nodes at depth $h$) has just one node. Then it has $2^0 + 2^1 + 2^2 + \ldots + 2^{h-1} + 1 = 2^h - 1 + 1 = 2^h$ nodes. For instance, a smallest heap of height 3 has $2^3 = 8$ nodes.
The maximum number of nodes in a heap of height $h$ is when the last level (nodes at depth $h$) is completely filled. (The heap then actually has the shape of a complete binary tree.) Then it has $2^0 + 2^1 + \ldots + 2^h = 2^{h+1} - 1$ nodes. For instance, a largest heap of height 3 has $2^4 - 1 = 15$ nodes.
13. Show that an \(n\)-element heap has height \(\lfloor \log(n) \rfloor\).
Given an \(n\)-element heap, we know from the previous exercise that its height \(h\) satisfies \(2^h \leq n \leq 2^{h+1} - 1\). The inequality \(2^h \leq n\) yields \(h \leq \log n\). From the inequality \(n \leq 2^{h+1} - 1\) it follows that \(n < 2^{h+1}\) and hence \(\log n < h + 1\). Hence \(h \leq \log n < h + 1\), so \(h = \lfloor \log n \rfloor\).
14. Turn the following sequences into a heap, using the procedure `downMaxHeap` from the slides, which is the same as the procedure `MaxHeapify` from the book. First determine (by hand) the index of the node that should be bubbled. Use the Figure in the book as model.
$1 8 6 5 3 7 4$
$27 17 3 16 13 10 1 5 7 12 4 8 9 0$
The first array contains $2^3 - 1 = 7$ elements. The for-loop from the procedure `buildMaxHeap` considers indices 3, 2, 1.
Applying `downMaxHeap` to index 3 bubbles down the 6 (it is swapped with its larger child 7). In a picture:
```
      1                  1
    /   \              /   \
   8     6     =>     8     7
  / \   / \          / \   / \
 5   3 7   4        5   3 6   4
```
Then considering index $i = 2$ does not result in a swapping, because 8 is larger than 5 and larger than 3.
Finally index \(i = 1\) is considered. This results in two calls of \texttt{downMaxHeap} that bubble down the key 1. The first swap gives
```
      8
    /   \
   1     7
  / \   / \
 5   3 6   4
```
and the second swap gives
```
      8
    /   \
   5     7
  / \   / \
 1   3 6   4
```
So the final result in array representation is [8, 5, 7, 1, 3, 6, 4].
The second sequence is turned into a max-heap by applying \texttt{downMaxHeap} to index 3; the keys at the other indices considered by \texttt{buildMaxHeap} are already in place. The key 3 is swapped first with 10 and then with 9, giving [27, 17, 10, 16, 13, 9, 1, 5, 7, 12, 4, 8, 3, 0]:
```
                 27
            /          \
          17            10
         /  \          /  \
       16    13       9    1
      /  \   / \     / \   /
     5    7 12  4   8   3 0
```
15. Provide pseudo-code for a procedure `downMinHeap` or in book-terminology `MinHeapify`, similar to `downMaxHeap` or `MaxHeapify` that takes as input an array `A` and an index `i` in `A`, and that let the key at `i` bubble down to restore the min-heap property.
Solution:
Algorithm `downMinHeap(A, i)`:
1. `l := left(i)`
2. `r := right(i)`
3. if `l ≤ A.heap-size` and `A[l] < A[i]` then
4.     `smallest := l`
5. else
6.     `smallest := i`
7. if `r ≤ A.heap-size` and `A[r] < A[smallest]` then
8.     `smallest := r`
9. if `smallest ≠ i` then
10.    `swap(A[i], A[smallest])`
11.    `downMinHeap(A, smallest)`
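A runnable Python version of this procedure might look as follows; it uses 0-based indexing, takes the heap size to be `len(A)`, and the example array is invented:
```python
def left(i):  return 2 * i + 1
def right(i): return 2 * i + 2

def down_min_heap(A, i):
    """Bubble the key at index i down until the min-heap
    property holds in the subtree rooted at i."""
    n = len(A)
    l, r = left(i), right(i)
    smallest = l if l < n and A[l] < A[i] else i
    if r < n and A[r] < A[smallest]:
        smallest = r
    if smallest != i:
        A[i], A[smallest] = A[smallest], A[i]
        down_min_heap(A, smallest)

h = [9, 1, 2, 3, 4, 5, 6]
down_min_heap(h, 0)
print(h)  # [1, 3, 2, 9, 4, 5, 6]
```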
16. Illustrate the operation of buildMaxHeap on the sequence
\[5\ 3\ 17\ 10\ 84\ 19\ 6\ 22\ 9\]
using Figure 6.3 from the book as model.
Solution:
Here using the array-representation:
\[
\begin{array}{c}
[5, 3, 17, 10, 84, 19, 6, 22, 9] \\
\rightarrow (\text{swap 10 and 22}) \\
[5, 3, 17, 22, 84, 19, 6, 10, 9] \\
\rightarrow (\text{swap 17 and 19}) \\
[5, 3, 19, 22, 84, 17, 6, 10, 9] \\
\rightarrow (\text{swap 84 and 3}) \\
[5, 84, 19, 22, 3, 17, 6, 10, 9] \\
\rightarrow (\text{swap 5 and 84}) \\
[84, 5, 19, 22, 3, 17, 6, 10, 9] \\
\rightarrow (\text{swap 5 and 22}) \\
[84, 22, 19, 5, 3, 17, 6, 10, 9] \\
\rightarrow (\text{swap 5 and 10}) \\
[84, 22, 19, 10, 3, 17, 6, 5, 9] \\
\end{array}
\]
Done.
17. Use heapsort to sort the following sequences; use Figure 6.4 in the book as model.
$1\ 2\ 3\ 4\ 5$
$5\ 4\ 3\ 2\ 1$
$5\ 13\ 2\ 25\ 7\ 17\ 20\ 8\ 4$
Solution:
Sorting 1 2 3 4 5, here using the array-representation:
First build a max-heap:
\[
[1, 2, 3, 4, 5] \rightarrow [1, 5, 3, 4, 2] \rightarrow [5, 1, 3, 4, 2] \rightarrow [5, 4, 3, 1, 2]
\]
Then we continue with the swap, remove, reconstruct steps.
We swap 1 and 5, disconnect 5 from the heap, and apply down-heap bubble at position 1. This yields:
\[
[4, 2, 3, 1 | 5]
\]
We swap 4 and 1, disconnect 4 from the heap, and apply down-heap bubble at position 1. This yields:
\[
[3, 2, 1 | 4, 5]
\]
We swap 3 and 1, disconnect 3 from the heap, and apply down-heap bubble at position 1. This yields:
\[
[2, 1 | 3, 4, 5]
\]
We swap 2 and 1, disconnect 2 from the heap, and apply down-heap bubble at position 1. This yields:
\[ [1 \mid 2, 3, 4, 5] \]
Done, result \([1, 2, 3, 4, 5]\).
Sorting 5 4 3 2 1, here using the array-representation:
First build a max-heap: nothing to be done. We swap 5 and 1, disconnect 5 from the heap, and apply down-heap bubble:
\[ [4, 2, 3, 1 \mid 5] \]
We swap 4 and 1, disconnect 4 from the heap, and apply down-heap bubble:
\[ [3, 2, 1 \mid 4, 5] \]
We swap 3 and 1, disconnect 3 from the heap, and apply down-heap bubble:
\[ [2, 1 \mid 3, 4, 5] \]
We swap 2 and 1, disconnect 2 from the heap, apply down-heap bubble:
\[ [1 \mid 2, 3, 4, 5] \]
Done.
Sorting 5 13 2 25 7 17 20 8 4, here using the array-representation:
First build a max-heap:
\[
[5, 13, 2, 25, 7, 17, 20, 8, 4] \rightarrow [5, 13, 20, 25, 7, 17, 2, 8, 4] \rightarrow [5, 25, 20, 13, 7, 17, 2, 8, 4]
\]
\[
\rightarrow [25, 5, 20, 13, 7, 17, 2, 8, 4] \rightarrow [25, 13, 20, 5, 7, 17, 2, 8, 4] \rightarrow [25, 13, 20, 8, 7, 17, 2, 5, 4]
\]
We swap 25 and 4, disconnect 25 from the heap, and apply down-heap bubble:
\[ [20, 13, 17, 8, 7, 4, 2, 5 \mid 25] \]
We swap 20 and 5, disconnect 20, and apply down-heap bubble:
\[ [17, 13, 5, 8, 7, 4, 2 \mid 20, 25] \]
We swap 17 and 2, disconnect 17, and apply down-heap bubble:
\[ [13, 8, 5, 2, 7, 4 \mid 17, 20, 25] \]
We swap 13 and 4, disconnect 13, and apply down-heap bubble:
\[ [8, 7, 5, 2, 4 \mid 13, 17, 20, 25] \]
We swap 8 and 4, disconnect 8, and apply down-heap bubble:
\[ [7, 4, 5, 2 \mid 8, 13, 17, 20, 25] \]
We swap 7 and 2, disconnect 7, and apply down-heap bubble:
\[ [5, 4, 2 \mid 7, 8, 13, 17, 20, 25] \]
We swap 5 and 2, disconnect 5, and apply down-heap bubble:
\[ [4, 2 \mid 5, 7, 8, 13, 17, 20, 25] \]
We swap 4 and 2, disconnect 4:
\[ [2 \mid 4, 5, 7, 8, 13, 17, 20, 25] \]
Done, result \([2, 4, 5, 7, 8, 13, 17, 20, 25]\).
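The whole procedure in runnable form (a Python sketch; `down_max_heap` is repeated from above so the block is self-contained):

```python
def down_max_heap(a, i, heap_size):
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        down_max_heap(a, largest, heap_size)

def heapsort(a):
    # Build a max-heap, then repeatedly swap the root to the end,
    # shrink the heap, and restore the heap property at the root.
    for i in range(len(a) // 2 - 1, -1, -1):
        down_max_heap(a, i, len(a))
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        down_max_heap(a, 0, end)
    return a

assert heapsort([5, 13, 2, 25, 7, 17, 20, 8, 4]) == [2, 4, 5, 7, 8, 13, 17, 20, 25]
```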
18. Describe an algorithm for checking whether an array of numbers is a max-heap, and determine its worst-case time complexity.
Solution:
```
Algorithm isMaxHeap(A):
    n := A.length
    i := ⌊n/2⌋
    if n even then
        if not A[i] ≥ A[2i] then
            return false
        i := i - 1
    while i > 0 do
        if (not A[i] ≥ A[2i]) or (not A[i] ≥ A[2i+1]) then
            return false
        i := i - 1
    return true
```
This algorithm is in $O(n)$: each of the $\lfloor n/2 \rfloor$ internal nodes is checked once, in constant time.
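The same check in runnable form (a Python sketch with 0-based indices, so the loop visits the first $\lfloor n/2 \rfloor$ positions instead of counting down):

```python
def is_max_heap(a):
    """Check the max-heap property at every parent (0-based indices)."""
    n = len(a)
    for i in range(n // 2):
        l, r = 2 * i + 1, 2 * i + 2
        if a[i] < a[l] or (r < n and a[i] < a[r]):
            return False
    return True

assert is_max_heap([8, 5, 7, 1, 3, 6, 4])
assert not is_max_heap([1, 8, 6, 5, 3, 7, 4])
```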
19. Give a definition of a ternary max-heap.
Solution:
Intuition: a ternary max-heap is an almost complete ternary tree in which the keys do not increase along any path downwards in the tree.
A ternary tree is a tree in which every node has at most 3 successors. A complete ternary tree is a ternary tree in which all leaves occur at the same depth and every depth is completely filled; as a consequence, every node has either 0 or 3 successors. An almost complete ternary tree is a ternary tree in which every depth except possibly the last is completely filled, and the last depth is filled from left to right, but not necessarily completely.
How to view an almost complete ternary tree as an array? The root gets index 0, so we also use an array starting at index 0. The children of the node with index $i$ get indices $3i + 1$, $3i + 2$, $3i + 3$ for the left, middle, and right child respectively. This gives the algorithms for $\text{parent}$, $\text{left}$, $\text{middle}$, and $\text{right}$.
Algorithm $\text{left}(i)$:
\[
\text{return } 3i + 1
\]
Algorithm $\text{middle}(i)$:
\[
\text{return } 3i + 2
\]
Algorithm $\text{right}(i)$:
\[
\text{return } 3i + 3
\]
Algorithm $\text{parent}(i)$:
\[
\text{return } \lfloor (i - 1)/3 \rfloor
\]
We will use ternary heaps in the array-representation. A ternary max-heap is an array $T$ with first index 0 that satisfies the condition
\[
T[\text{parent}(i)] \geq T[i] \quad \text{for all indices } i \text{ with } 0 < i < T.\mathit{length}.
\]
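A small Python sketch of the index arithmetic and the heap condition (the helper names are illustrative):

```python
def parent(i):
    return (i - 1) // 3

def child(i, k):
    # k = 0, 1, 2 selects the left, middle, right child of node i
    return 3 * i + 1 + k

def is_ternary_max_heap(t):
    # T[parent(i)] >= T[i] must hold for every non-root index i.
    return all(t[parent(i)] >= t[i] for i in range(1, len(t)))

assert is_ternary_max_heap([9, 7, 8, 5, 1, 2, 3, 6, 4, 0])
assert not is_ternary_max_heap([9, 7, 8, 5, 1, 2, 3, 6, 4, 10])
```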
PATH PLANNING FOR MINE COUNTERMEASURES
Inventors: Christopher B. McCubbin, New Market, MD (US); Christine D. Platko, Columbia, MD (US); Steven J. Marshall, Baltimore, MD (US); Jessica L. Pistoie, Baltimore, MD (US); Cheryl L. Resch, Columbia, MD (US); Paul McNamee, Ellicott City, MD (US)
Assignee: The Johns Hopkins University, Baltimore, MD (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 291 days.
Appl. No.: 11/080,143
Filed: Mar. 15, 2005
Related U.S. Application Data
Provisional application No. 60/553,521, filed on Mar. 16, 2004.
Int. Cl.
G01C 21/00 (2006.01)
U.S. Cl. 701/202; 701/200; 701/23
Field of Classification Search 701/202–202, 211, 213–214, 23, 25; 340/988; 342/357.06, 357.12
See application file for complete search history.
ABSTRACT
An apparatus for mine avoidance that enables a battle group commander to rapidly detect, classify, and identify mines, and form a tactical picture of mined areas. The system is enabled by a graphical user interface which generates a display of the minefield and an avoidance path which is calculated by a rapidly-explored random trees algorithm interacting with algorithms defining avoidance object spaces inserted by an operator in two or three dimensions and transit vehicle characteristics. The apparatus enables a method which establishes the parameters of an area representing a minefield. The area is then populated with obstacle spaces whose areas are functions of the probability of the existence of a mine and through which an avoidance route through the minefield may not pass. The start and end points of the avoidance route are entered on a graphic display of the minefield and an avoidance path therebetween is calculated and presented as an avoidance route.
10 Claims, 3 Drawing Sheets
PATH PLANNING FOR MINE COUNTERMEASURES
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of prior filed co-pending U.S. application Ser. No. 60/553,521, filed on Mar. 16, 2004.
STATEMENT OF GOVERNMENTAL INTEREST
This invention was made with Government support under Contract No. N00024-08-D-8124 awarded by the Office of Naval Research. The Government has certain rights in the invention.
BACKGROUND OF THE INVENTION
Battle group commanders have a need to develop low risk paths through minefields and evaluate alternative route options. To satisfy this need, they require planning aids which will develop paths that balance the utility and risk associated with different routes. They need tools for minefield transit planning which integrate path risk, path time, and dynamic and kinematic concerns.
FIELD OF THE INVENTION
The present invention relates to applications of path planning algorithms for quantitatively evaluating potential paths through minefields.
DISCUSSION OF THE RELATED ART
Attempts have been made to solve problems similar to those faced by commanders needing to pass through a minefield but they have not addressed the specific problems associated with such a transit. For instance, H. Duvoisin, III, et al, in U.S. Pat. No. 5,835,901 for “Perceptive System Including a Neural Network”, issued Nov. 10, 1998, teaches the use of a target location and classification system for mines in a minefield but fails to suggest an avoidance system.
U.S. Pat. No. 5,999,881 issued to C. Law, et al, for “Automated Path Planning” on Dec. 7, 1999, presents a path planning method which considers avoidance of a plurality of obstacles, but the patent is not concerned with mines or the problems inherent in the transition of a minefield by a battle group.
Route planning is addressed by P. Galkowski, et al, in U.S. Pat. No. 6,259,988 for “Real-Time Mission Adaptable Route Planner,” issued Jul. 10, 2001, but it is concerned with fixed navigational obstacles, not a dynamic situation involving a minefield. It employs a grid based route optimization method for determining the most economical path between two fixed points while the present need requires an ability to chart a path between dynamic points.
OBJECTIVES OF THE INVENTION
The primary objective of the invention is to provide a means for mine avoidance that enables a battle group commander to rapidly detect, classify, and identify mines, and form a tactical picture of minefield areas.
Another objective is to integrate algorithms defining avoidance object space in two and three dimensions with transit vehicle characteristics algorithms for quantitatively evaluating potential paths through minefields.
A further objective is to integrate path planning Rapidly-explored Random Trees algorithms with algorithms defining avoidance object space in three dimensions.
A still further objective is to integrate path planning Rapidly-explored Random Trees algorithms with algorithms defining avoidance object space in two dimensions.
Another objective is to integrate path planning Rapidly-explored Random Trees algorithms with algorithms defining vehicle characteristics for quantitatively evaluating potential paths through minefields.
A further objective is to provide path planning techniques to develop paths that balance the utility and risk associated with different routes through a minefield.
Another objective is to integrate path risk, path time, and dynamic and kinematic concerns to plan paths through minefields.
SUMMARY OF THE INVENTION
A method and apparatus for developing potential routes through a minefield and assessing risk and economy factors for each route is provided by a group of interacting algorithms having inputs and final solution outputs provided by a graphic user interface. The preferred embodiment of the present invention combines Rapidly-explored Random Trees algorithms with algorithms defining avoidance object space in at least two dimensions and transit vehicle characteristics algorithms.
The software system consists of four separate programs: 1) a general path planner which handles all of the executive interfacing of outside systems with planning algorithms and stores a consistent world-view database which is updated by outside systems and used by system planning algorithms, 2) a path planning algorithm for the mines domain, 3) a Rapidly-explored Random Trees, RRT, package which contains several closely related algorithms for producing avoidance plans, and 4) the GUI program which drives a graphical user interface and provides input to and output from the general path planner program.
The hardware comprises computing means using a Java SDK 1.4 platform to run and integrate the software programs and a graphical user interface, GUI.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of the general path planner.
FIG. 2 illustrates the graphical user interface displaying a plan generation.
FIG. 3 is a graphic representation of the RRT algorithm cone model.
DESCRIPTION OF THE PREFERRED EMBODIMENT
The general path planner program is the heart of the system. It performs the algorithmic processing of the data input through the graphical interface. A block diagram of the general path planner is presented in FIG. 1, which illustrates the interaction of the several code groups required to implement the system: the executive code group 10, the communications code group 20, the messages code group 30, the path solver code group 40, the World-View code group 50, and the GUI/output code group 60. Each of these groups performs one or more related tasks within the system. The overall architecture of the system is a single threaded,
message-driven server architecture. Each message received is processed in turn, without threading. Output is sent back through the use of an observer design pattern, much like Java's generator-listener mechanisms for GUIs. The system uses a command design pattern, where messages contain their handling code as methods. The system may be implemented on a platform such as the Java SDK 1.4 and the basic path planning algorithms are variants of the Rapidly-exploring Random Trees algorithm described by S. M. LaValle in a paper titled, "Rapidly-exploring Random Trees: A new tool for path planning," published as TR 98-11 by the Computer Science Dept., Iowa State University, in October 1998. The paper is incorporated herein by reference.
The executive code group 10 is the container classes for all of the statically-held objects in the system. The objects in this group act as traffic cops to the other code groups, and initialize objects that require initialization at startup time.
The communications code group 20 handles all I/O for the system. It contains an incoming message queue which is polled to see if new messages have arrived. Once messages arrive, they are executed in the order they arrive, one at a time. The Communications group also holds all of the output message generators that the system creates. These message generators send the output messages created by the system to registered listeners as soon as the output message arrives. Input messages contain their handling code in an overridden execute abstract method. In this way, new input message types, i.e., commands, can be added easily by creating a new message class and defining the code to handle that message. Messages also contain a halt method, which will be called if a special halt method is called on the path planning component. The halt method can be used to safely stop current message execution.
The path solvers code group 40 represents path-planning package interfaces that the system can use. In the preferred embodiment the path-planning solvers are based on RRT algorithms. The solver is able to set its state using a World-View object, plan given a set of way points, and halt its planning if it has to. Also included in this group is a reflective method to set a planner's parameters when given a list of parameter names and values. The set parameters method looks for methods in the planner class which have "set" prepended to the parameter name and take the object corresponding to the parameter name as the sole argument. It then executes that method on the passed-in argument object.
The World-View code group 50 is in charge of holding a database of world objects, such as mines, bathymetry data, shorelines, etc. The World-View interface is fairly empty, and only has a clear method on it. In the preferred embodiment the only World-View implementation is the Mines Only World-View, which is simply a list of mine locations and probabilities on the plane. This preferred embodiment is presented as exemplary; additional embodiments are anticipated to expand the versatility of the system.
The Output Code Group 60 is in charge of supplying output to interested parties. It uses the observer design pattern, and is modeled after Java AWT's event system. There are message event generators for each type of output method produced, and listener interfaces that listening classes must implement to receive the output. There are two generators: a generator for path solutions, and a generator for algorithm progress messages.
The preceding code groups are operational in any one of three different modes or use cases which are defined by the input message type. The use cases are: administrative use-case, state use-case, and path query. To perform a use-case, a corresponding message is sent to the system through the communications group.
The administrative use-case is intended to be used to send high-level commands to the system. It operates reflectively so that an administrative message is created with a string representing one of the methods defined on the administrative message class. When sent to the system, the method named by the string will be invoked inside the system. Current administrative messages include: printing the administrative message (a debugging tool), printing the World-View, clearing the World-View, and exiting the program. New commands can be added by simply adding a method to the administrative class.
The state use-case message is used to update the current state held in the system's World-View. Simple-State, a subclass of the state use-case message, is used with the Mines-Only World-View, and contains a list of added or a list of deleted mines (but not both). To execute this use-case, the Simple-State message either adds or deletes its contents to the World-View, as appropriate.
The path query message defines the use-case of requesting a path solution by the system. It is parameterized by the name of the algorithm desired, algorithm parameters, and a list of way points to be followed for the path. The first way point is the start point and the last way point is the goal point. If fewer than two way points are present, behavior is undefined. The path query execution involves the following steps: 1) A path planner solver is fetched or created for the desired algorithm. 2) The planner is loaded with the current World-View state. 3) Any parameters present are loaded into the algorithm. 4) The planning algorithm is executed with the passed way points. 5) If a solution is returned, it is sent to the solution event generator in the communications code group 20.
The GUI incorporates a means for linking designated portions of a display with an electronic address. It provides an interface with the multi-algorithm capabilities of the general path planner architecture while providing an interactive view of plan progress and an intuitive means to design World-View configurations. The GUI performs its tasks sequentially.
The GUI uses a component based design and performs in collaboration with the general path planner and the separate GUI components through message passing. Several dialogs are used to perform more complex configurations while the most common actions are laid out in toolbar fashion along the left side of the GUI as illustrated in FIG. 2.
The GUI user is first presented with an area 71 defined by coordinates and representing an open expanse of ocean on which the user may create minefields. The user may place and remove mines 72 on the minefield with mine editing tools 62 and 63 on a tool bar 61. The toolbar, 61 of FIG. 2, is like that of a typical drawing program, with buttons depicting various tasks that interact with the canvas. The add mine, remove mine, and mine line buttons, 62 through 64 respectively, provide direct manipulation of mines 72 in the field 71. The mine line tool 64 allows the user to click and drag, creating an elongated randomized field of mines. The thickness and density of this line is configured from the mine line preferences dialog accessed from the edit menu. Once the line is dragged on the area map view 71, mines 72 are randomly placed in the box defined by the line's endpoints and thickness setting. In addition to these direct methods of mine layout, the new minefield dialog in the file menu allows the user to generate a uniform random distribution of mines throughout the area. Minefields can then be saved and loaded from the file menu as well.
The Mine Probability slider, 65, at the end of the toolbar modifies the probability of subsequent mines added to the canvas. The minefield is rendered differently for each algorithm available by a reflectively loaded minefield render class. In this way the characteristics of the risk function for each algorithm can be displayed.
Once a minefield is loaded, waypoints 73 can be placed in sequential order from route start 74 to route end 75 using the waypoint tool. The algorithm menu allows the user to select an algorithm for use with the minefield. Here, the options associated with each algorithm are reflectively selected from the algorithm’s implementation of the path planner solver abstract class. The World-View and waypoints generated from the GUI are able to be sent to any of the planners such that multiple plans may be generated for the same minefield.
Once the algorithm is properly configured, the start button sends a path query message to the selected planner and a plan is generated. If the show progress check box is activated in the algorithm menu, algorithm-specific progress information will be presented in the navigation box. The progress animation can be saved to disk as a series of images if the option is selected in the algorithm menu. In addition, messages about plan progress will be printed in the lower text window. After the algorithm has completed, the result will be displayed as a route 76 over the area 71. For example, in FIG. 2, the RRT algorithm has produced a plan, 76, which is colored according to the speed of each path segment.
Statistics about the current solution are displayed in the lower right panel. These statistics include the computed risk of the final path found by the algorithm, the time the path would take to follow, and other statistics about the solution.
When the RRT algorithm is implemented as a solver for the general path planner, two main code groups are employed: the tree algorithm code group and the models and actions code group. The tree algorithm code group runs the RRT algorithm proper, and the models and actions code group determines the state space and robot/ship properties that will be used when running the algorithm.
The tree algorithm code group implements the RRT tree algorithm proper. This algorithm is a space-filling randomized algorithm. The algorithm produces a tree in the state space of the problem. After each iteration, the algorithm attempts to add a node which minimizes the distance from any point in the state space to a point on the tree. Finding the optimal point to add at each iteration exactly is prohibitively expensive, so the RRT algorithm approximates this by choosing a point with some randomness. The pseudocode for the algorithm is as follows:
1. Choose a random point r in state space with some random point generator.
2. Choose the closest existing tree node to r, call it t.
3. Among the possible legal extensions of the tree from t, find the point closest to r, call it t_0.
4. Add t_0 to the tree.
5. Repeat these steps until the space is filled enough.
For path planning, the tree starts at the starting point of the plan. The algorithm stops when the tree is less than some critical distance from the goal point. Since the tree is rooted at the start point, the path can then be reconstructed by following the ancestors of the node closest to the goal back to the root.
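This loop can be made concrete with a minimal 2-D sketch (illustrative Python only, not the patented implementation; the circular keep-out regions, the sampling bounds, and the tuning constants are all assumptions):

```python
import math
import random

def rrt_path(start, goal, obstacles, step=0.5, goal_tol=0.5, max_iters=5000):
    """Grow a tree from start; return a start-to-goal path when a node
    lands within goal_tol of the goal.  obstacles is a list of
    (cx, cy, radius) circular keep-out regions."""
    def clear(p):
        return all(math.hypot(p[0] - cx, p[1] - cy) > r
                   for cx, cy, r in obstacles)

    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        r_pt = (random.uniform(0.0, 20.0), random.uniform(0.0, 20.0))
        # Nearest existing tree node t to the random point r.
        t = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], r_pt))
        tx, ty = nodes[t]
        d = math.dist((tx, ty), r_pt)
        if d == 0.0:
            continue
        # Extend from t one step toward r; this is the new node t0.
        t0 = (tx + step * (r_pt[0] - tx) / d, ty + step * (r_pt[1] - ty) / d)
        if not clear(t0):
            continue  # extension would enter a keep-out region
        nodes.append(t0)
        parents[len(nodes) - 1] = t
        if math.dist(t0, goal) < goal_tol:
            # Walk the ancestors of the last node back to the root.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]
    return None  # may fail to connect within max_iters

path = rrt_path((1.0, 1.0), (18.0, 18.0), obstacles=[(10.0, 10.0, 3.0)])
```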
The models and actions code group allows for wide flexibility in the type of path planning done by the RRT algorithm. The model interface is the entity which all models will implement. This interface has three important purposes: 1) defining appropriate geometries for the model, 2) defining a random state space point generator, and 3) selecting the “best” action for a given point in the state space. The RRT code group uses a concrete model’s methods to build the RRT tree in an appropriate manner.
Actions are model-specific actuators that can be performed and which extend from an abstract class named action. Models are loaded with a set of actions on instantiation. Each concrete action must define what will happen to the state space if the action is applied for a certain amount of time. This is known as the “integration” method.
Three different models for generating obstacle space geometries are available: the ship model, the bull’s eye model, and the cone model. The basic ship model uses Java AWT Shape classes to define the obstacles in the minefield as keep-out regions. Mines are added as circular shapes whose radii depend on the probability of the existence of a mine: the greater the probability of the existence of a mine, the greater the radius of the circle defining the obstacle space. It selects the best action by integrating each action over a small unit time and selecting the point closest to the goal point which does not collide with any obstacle. In this model, distance is taken to be the four-dimensional Euclidean distance in planar space and planar velocity.
The bull’s eye model is similar to the ship model except it contains a second, larger set of obstacles superimposed over the smaller obstacles. Each outer obstacle has a radius which is a function of the probability of a mine plus an added factor to increase the radius. If no action can be found using the more conservative obstacles, an action is searched for using the more liberal obstacles. Mines are added as a bull’s eye of two concentric circles.
The cone model uses a half-space of three-dimensional obstacles in the dimensions of x, y and speed. The actions are again checked for collisions with the obstacles, this time in three dimensions. Mines are added as cones with a baseline keep-out circle at zero speed and a linearly increasing radius as the object’s speed increases; see FIG. 3. The baseline keep-out region size depends on the probability of the mine but the cone’s slope is fixed for each instantiation of the model. This model is the default model used in the general path planner.
In an alternate RRT variant using the cone model, when the path gets within a certain distance of the goal, the RRT random point selection is chosen close to the goal. This is the “Goal Directed” RRT variant.
Changing a model’s actions dramatically changes the algorithm operation. Two types of actions are available: point-robot actions and ship-like actions. The point-robot actions integrate by applying a simple x-y force. For example, one could create a point-robot action which applies an east-facing force of 10 meters per second squared to the planned object. Ship-like actions take an approximation of rudder position and forward throttle, and integrate an approximate new position using these parameters. With ship-like actions, the turning radius of the generated path, 66 of FIG. 2, may be limited to that of the planned-for object.
The system performs the following steps when generating a route through a minefield (a sketch of step 5's obstacle test follows the list):
1) establish the parameters of an area representing a minefield;
2) populate the area with obstacle spaces defined as coordinate bound areas through which the route may not pass;
3) establish the coordinates of a route start point within the area and outside the coordinate bound areas defining obstacle spaces;
4) establish the coordinates of a route end point within the area and outside the coordinate bound areas defining obstacle spaces;
5) generate a series of connected vectors between the start and end points which do not pass through the coordinate bound areas of the obstacle spaces;
6) establish at least one set of parameters defining the coordinates for a waypoint along the route and on one of the vectors;
7) define a path created by the connected vectors as the route through the minefield.
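Step 5 requires that each vector stay outside every obstacle space. For the circular keep-out regions assumed in the sketch above, this reduces to a segment-versus-circle test; `segment_clears_circle` below is a hypothetical helper, not taken from the patent:

```python
import math

def segment_clears_circle(p, q, center, radius):
    """True if the segment p->q stays outside a circular obstacle space."""
    (px, py), (qx, qy), (cx, cy) = p, q, center
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - cx, py - cy) > radius
    # Project the circle center onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nearest = (px + t * dx, py + t * dy)
    return math.hypot(nearest[0] - cx, nearest[1] - cy) > radius

assert segment_clears_circle((0, 0), (10, 0), (5, 5), 3)       # passes clear
assert not segment_clears_circle((0, 0), (10, 0), (5, 1), 3)   # cuts through
```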
While preferred embodiments of this invention have been illustrated and described, variations and modifications may be apparent to those skilled in the art. Therefore, we do not wish to be limited thereto and ask that the scope and breadth of this invention be determined from the claims which follow rather than the above description.
What is claimed is:
1. An apparatus for plotting a route through a minefield, comprising:
a computing means for manipulating algorithms for generating obstacle spaces and a path through said obstacle spaces; and
a graphical user interface means for inputting data to said algorithms and displaying solutions created by said algorithms,
wherein said obstacle spaces are three dimensional cones having radii about the vertical axis proportional to the speed of transit along said path, and said radii are a function of the probability of the existence of a mine.
5. An apparatus for plotting a route through a minefield, comprising:
a computing means for manipulating algorithms for generating obstacle spaces and a path through said obstacle spaces; and
a graphical user interface means for inputting data to said algorithms and displaying solutions created by said algorithms,
wherein said algorithms include a rapidly-explored random trees algorithm means for calculating said path.
6. An apparatus as defined by claim 5, wherein said means for generating said path comprises:
communication means for providing intercourse between said computing means and said graphical user interface;
a database containing world objects; and
a path solver interface means for coupling said means for calculating said path to said database and said communication means.
7. An apparatus as defined by claim 3, wherein said algorithms include a rapidly-explored random trees algorithm means for calculating said path.
8. An apparatus as defined by claim 7, wherein said means for generating said path comprises:
communication means for providing intercourse between said computing means and said graphical user interface;
a database containing world objects; and
a path solver interface means for coupling said means for calculating said path to said database and said communication means.
9. An apparatus as defined by claim 4, wherein said algorithms include a rapidly-explored random trees algorithm means for calculating said path.
10. An apparatus as defined by claim 9, wherein said means for generating said path comprises:
communication means for providing intercourse between said computing means and said graphical user interface;
a database containing world objects; and
a path solver interface means for coupling said means for calculating said path to said database and said communication means.
CORBA and HLA: Enabling Future Network-Centric Vehicle Systems?
William Pritchett & William Protzman
DCS Corporation
Robert Kling
US Army TACOM
Abstract
Increased emphasis on simulation-based acquisition and network-centric warfare within the objective force has posed key challenges within the embedded ground vehicle community. While simulation-based acquisition enables a faster time to field through increased concurrency, tighter decision cycles, more efficient and effective testing; the ultimate goal of rapidly transitioning outputs from the simulation phase of a program to the vehicle design and integration phase is minimally met in today’s environment due to varied architectures, levels of fidelity, and design methodologies. In the same respect, the dynamic nature of network-centric warfare further strains legacy architectures, as they cannot handle the new and often dynamic operational requirements. Two technologies currently under consideration with respect to their ability to solve some or all of these design and architectural gaps are the Common Object Request Broker Architecture (CORBA) family of services and the High-Level Architecture (HLA). CORBA is a distributed object architecture developed by the Object Management Group (OMG), and though originally developed for information systems, has been extended to accommodate real-time and embedded requirements. HLA is the Department of Defense mandated architecture for distributed simulation. HLA includes a runtime infrastructure (RTI) and one or more federates combined to form the simulation system, or federation. Though both CORBA and HLA are used to build complex, distributed systems, there are differences in these two architectures that complicate their use within a single system. This paper evaluates the suitability of both CORBA and HLA for use in ground combat vehicles, as both stand-alone technologies and in conjunction with one another.
1 Background
To better respond to changing world threats, the Army is in the midst of a major force transformation grounded in concepts for future joint and combined operations. At the center of the transformation is the unit of action, a strategically responsive force, rapidly deployable anywhere in the world in 96 hours after liftoff, providing overmatching lethality with advanced survivability against any threat, versatile and responsive to the needs of the Joint Task Force Commander, while able to transition rapidly between missions, tactical engagements and battles with minimal organizational adjustment. The makeup of a unit of action includes a highly-networked combination of manned and unmanned ground vehicles being developed under the auspices of the Future Combat System (FCS).
From the ground combat vehicle weapon system perspective, the paradigm shift catalyzed by the FCS program is based in an accelerated convergence of architectures (depicted in Figure 1). These architectures are the Platform Architecture (e.g., vehicle computers, weapons, mobility, crew stations, and electronics), C4ISR Architecture (e.g., sensors, battlefield command and control, situational awareness), and Modeling and Simulation Architecture (e.g., computer-based and embedded training, mission rehearsal, terrain registration).
There is, however, an underlying barrier present in today’s environment that limits the ability to leverage across these architectures. That barrier is that these architectures are grounded in distinct domains, which have traditionally focused on and evolved from discrete technologies and infrastructure to realize the development and integration of resultant domain applications. For example, platform applications tend to drive towards embedded real-time systems, imposing resource and timing constraints; C4ISR applications tend to be developed as enterprise/information technology systems targeted for workstation class computers; and modeling and simulation applications tend to focus on force simulations, system-level model based acquisition, and training.
Today, cross-architecture technologies are typically incorporated into Army vehicle weapon systems via a bolt on vs. integrated approach. An example is an FBCB2 appliqué computer running a command and control/situational awareness application providing its own display and integrated to the vehicle system via Ethernet and radio interfaces. As specific FCS vehicle system(s) architectures are designed and implemented, a high degree of cohesion among these three converging architectures will be not only desired, but necessary in order to integrate/leverage required technologies in accordance with the Objective Force system/system of system concepts and requirements (to include acquisition, development, deployment, management and sustainment).
An approach to facilitating this cross-architecture application reuse/leveraging within FCS can be realized through the analysis of the application infrastructure (distribution middleware) and identification of an approach to better correlate these middleware layers leading to application insertion as opposed to application and computer bolt on integration.
Figure 2 depicts the three architectures, highlighting the middleware dominant within the application domains. Applications within the Platform Domain utilize a real-time operating system (RTOS) and often incorporate an operating environment (OE) middleware isolation layer, isolating hardware dependencies and maximizing application portability. Applications within the C4ISR Domain utilize workstation-based operating systems (e.g., Unix, Linux) and incorporate CORBA services. Applications within the M&S domain utilize a variety of operating systems (dependent on the simulation) and are developed in HLA and integrated via the HLA RunTime Infrastructure (RTI).
This paper provides an analysis of HLA and CORBA to identify their suitability to facilitate the leveraging, development, and integration of co-resident C4ISR and M&S applications within the ground vehicle weapon system platform.
1.1 High-Level Architecture
The High-Level Architecture (HLA) [1], shown in Figure 3, is a component-based software architecture developed by the U.S. Department of Defense during the 1990s to provide a low-cost, high-capability simulation infrastructure. The architecture calls for a federation of individual simulations called federates. A federate can represent varying levels of abstraction from a cockpit simulator to an entire fighting force. A federation execution is a session of a federation executing together. Along with individual federates, a federation contains a runtime infrastructure (RTI) and a common object model for data exchanged between federates in a federation, called the Federation Object Model (FOM). The specification of the architecture contains three main components: 1) ten rules that define relationships among federation components, 2) an object model template which specifies the form in which simulation elements are described, and 3) an interface specification that describes the way simulations interact during execution.
Of the ten rules specified by the architecture, five are federation rules and five are federate rules. The federation rules are as follows: 1) the federation shall have a FOM and it shall be in object model template (OMT) format; 2) all representation of objects shall be in the federates and not the RTI; 3) during federation execution, all exchange of FOM data shall be via the RTI; 4) during federation execution, all federates shall interact with the RTI in accordance with the interface specification; and 5) during federation execution, an attribute of an instance of an object may be owned by only one federate at a given time. The federate rules are: 1) federates shall have a SOM in OMT format; 2) federates shall be able to update/reflect attributes and send/receive data in accordance with their SOM; 3) federates shall be able to transfer/accept attribute ownership in accordance with their SOM; 4) federates shall be able to vary the conditions under which they provide attribute updates in accordance with their SOM; and 5) federates shall be able to manage the local time in a way which will allow them to coordinate data exchange with other members of the federation.
The object model template provides a common framework for HLA object model documentation and fosters interoperability and reuse of simulations via the specification of a common representational framework. The FOM is a description of all shared information (objects, attributes, interactions, and parameters) essential to a particular federation. The Simulation Object Model (SOM) describes objects, attributes, and interactions in a particular simulation that can be used externally in a federation. An attribute is the named portion of an object’s state. An interaction is a change in the sending object state that may cause a state change in another (receiving) object. A parameter is the information associated with an interaction provided by the sending object to the receiving object. Federates update attributes by providing the new instance attribute value for an attribute, and reflect attribute changes by receiving the new instance attribute value for an attribute.
The HLA interface specification defines the access to RTI services. The interface specification is provided as an application programmer interface (API) in several forms including C++, Ada 95 and Java. The interface specification has six basic RTI service groups: federation management, declaration management, object management, ownership management, time management, and data distribution management.
1.2 Common Object Request Broker Architecture
The Common Object Request Broker Architecture (CORBA) [2] is an open, vendor-independent architecture and infrastructure that computer applications use to work together over networks. Using the standard Internet Inter-ORB Protocol (IIOP), a CORBA-based program from any vendor, on almost any computer, operating system, programming language, and network, can interoperate with a CORBA-based program from the same or another vendor, on almost any other computer, operating system, programming language, and network. The Object Management Group (OMG), a consortium of vendors and other distributed computing stakeholders, produces and maintains the family of CORBA-related specifications.
CORBA applications are composed of objects, individual units of running software that combine functionality and data. For each object type, developers define an interface using the OMG Interface Definition Language (IDL). The interface is the syntax part of the contract that the server object offers to the clients that invoke it. Any client that wants to invoke an operation on the object must use this IDL interface to specify the operation it wants to perform, and to marshal the arguments that it sends. When the invocation reaches the target object, the object request broker (ORB) uses the same interface definition to unmarshal the arguments so that the object can perform the requested operation with them. The ORB then uses the interface definition to marshal the results for their trip back, and to unmarshal them when they reach their destination.
This separation of interface from implementation, as shown in Figure 4, is the essence of CORBA: it is how CORBA enables interoperability and location transparency. Object interfaces have very strict definitions. In contrast, the ORB hides the implementation details of an object (its running code and its data) from the rest of the system behind a boundary that the client may not cross. Clients access objects only through their advertised interface, invoking only those operations that the object exposes through its IDL interface, with only those parameters (input and output) that are included in the invocation.
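The interface/implementation split can be imitated in most languages; the following Python analogy is illustrative only (real CORBA clients call IDL-generated stubs through an ORB, not hand-written classes like these):

```python
from abc import ABC, abstractmethod

class Thermometer(ABC):
    """Plays the role of the IDL interface: the only contract clients see."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class OnboardSensor(Thermometer):
    """Implementation detail hidden behind the interface, like code behind an ORB."""
    def read_celsius(self) -> float:
        return 21.5   # stand-in for real hardware access

def log_temperature(t: Thermometer):
    # The client depends only on the interface, never on the concrete class.
    print(f"{t.read_celsius():.1f} C")

log_temperature(OnboardSensor())
```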
Though originally developed for information systems, the OMG has created specialized CORBA extensions to address real-time and embedded systems. Most notable of the adopted specialized specifications are dynamic scheduling, minimum CORBA, CORBA event-service, CORBA notification service, and real-time CORBA. Real-time CORBA extends CORBA by facilitating the end-to-end predictability of activities in the system and by providing support for the management of resources. The dynamic scheduling specification extends real-time CORBA to encompass dynamically scheduled systems. The CORBA event and notification services provide services for asynchronous notification of events. Minimum CORBA defines a minimal subset of CORBA for resource constrained (embedded) environments.
The OMG also has related specifications under development including the data distribution service, real-time notification service, and reliable ordered multicast protocol. The data distribution service supports a Data-Centric Publish-Subscribe (DCPS) for real-time systems and offers, optionally, a Data Local Reconstruction Layer (DLRL). The real-time notification service will address priority ordering of events and deadline scheduling of events. Finally, the OMG is specifying a reliable ordered multicast protocol for those services, such as fault tolerance, that can benefit from such a protocol.
1.3 CORBA/HLA Relationship
Though dissimilar, there is a relationship between CORBA and the HLA. Developers can use CORBA IDL to specify the interface between federates and the RTI. Further, the OMG has specified the RTI itself in IDL and has standardized this interface as their Facility for Distributed Simulation Systems [3]. This means that CORBA-based applications can instantiate RTI objects as CORBA objects and incorporate them into their system. Additionally, the nature of the ORB makes it a good candidate technology for implementing an HLA RTI, and in fact, ORB-based implementations of both RTIs and federates exist.
2 Suitability of HLA in Future Ground Vehicles
While HLA may be a suitable technology for large-scale distributed simulations, current implementations have major drawbacks as a solution for real-time embedded systems. Foremost is the lack of RTI implementations for real-time operating systems such as Wind River’s VxWorks. There are, however, RTI products that run on Linux, an operating system gaining in popularity within the ground vehicle community. More investigation is needed, however, to determine whether or not Linux is a viable operating system for a total vehicle solution.
Another weakness to using HLA for ground vehicle embedded simulation is the fact that HLA does not specify timeliness criteria, thus limiting its usefulness in real-time simulations. McLean et al. state that the “lack of timeliness requirements, or any additional guidance, is a critical limitation for real-time simulation systems where the amount and predictability of RTI overhead is an important design factor” [4]. This is important because real-time distributed interactive simulation requires “a real-time response and predictable behavior from the end systems in order to interact with the physical world within the specific delay bounds and present data, images, audio, video, etc. to the users on a real-time basis” [5]. Numerous research efforts to extend HLA to real-time applications are addressing this problem [4][5].
The last major shortcoming to using HLA for real-time embedded simulation is the fact that HLA is not interoperable across languages or RTI implementations. Although HLA has multilanguage support, “the responsibility for interoperability between federates in different languages is placed on the RTI implementers” [6]. Further, HLA leaves the choice of protocol up to implementations. This forces systems into using a single RTI implementation and a single programming language. This is not always feasible within modern ground combat systems.
3 Suitability of CORBA in Future Ground Vehicles
While RTI has major limitations for use in embedded systems, CORBA is increasingly gaining acceptance in military domains as shown by the numerous programs that have fielded CORBA-based applications [7][8][9][10]. Common reasons cited for choosing CORBA include improved quality of service (QoS), predictable response times, small footprint, and real-time performance. One benchmarking study analyzed the performance of three available real-time ORBs and found data transfer rates as high as 5 MB/s using TCP/IP over a 10BaseT Ethernet [11]. This performance is more than adequate for the closed loop control rates needed for most Vehicle Electronics (vetronics) applications. Further, CORBA’s emphasis on interfaces over implementations leads to well-defined architectures that are more maintainable and supportable over the life cycle of a vehicle.
Additionally, the use of CORBA for middleware eliminates the need for application programmers to have detailed knowledge of the distribution mechanisms, thus allowing them to focus on the military-unique domain applications.
CORBA, however, is not without limitations. The client/server paradigm used by CORBA may not be suitable for all ground vehicle applications. While this paradigm is well suited for one-to-one object calls, network bandwidth may suffer in a data-oriented system in which data must flow from a single source to multiple destinations. The OMG is addressing this deficiency with their emerging data distribution specification. This specification will enable vehicle developers, where appropriate, to use the standards-based publish/subscribe paradigm in their software design to facilitate low-latency, high-bandwidth communications.
Another downside to using CORBA in vetronics applications is the lack of a suitable asynchronous notification mechanism. Though CORBA does have event and notification services, these do not allow for deadline-based event specification or event priorities, making them inefficient for real-time applications. The OMG is addressing these deficiencies with the emerging real-time notification service.
4 Heterogeneous HLA/CORBA-based System Example
This section provides an overview of an example system incorporating an HLA/CORBA based architecture to cohesively integrate a disparate M&S component into a ground vehicle weapon system platform. The example is taken from a current system design employed within the Army R&D community that interfaces an embedded training capability to a ground vehicle weapon system.
The current architecture employed within the embedded simulation system is presented in Figure 5.
As depicted in the figure, a distributed simulation environment, representing force-level battlespace entities, is connected to an embedded simulation computer via HLA. The embedded simulation computer provides a gateway to the vehicle (the vehicle is represented in the figure as an embedded computer). The embedded simulation gateway enables the vehicle to participate in the force-level simulation as an individual entity by providing correlated virtual-world sensor views to the vehicle and by representing the vehicle, via operator and system interaction (e.g., mobility, targeting, firing, …), within the virtual battlespace. The connection between the embedded simulation system and the vehicle embedded computer is defined via an A/B kit interface, which is realized as a series of bi-directional, data-based TCP/IP messages. The embedded simulation and vehicle applications cannot be co-resident because they utilize different technologies and network communication protocols (with the exception of the well-defined A/B kit interface).
The ultimate goal of this analysis is to provide a robust, extensible, standards-based embedded simulation capability that minimally impacts the operational characteristics of the vehicle and is consistent with the rest of the vetronics architecture. To meet that goal, the simulation infrastructure must support real-time quality of service, must be interoperable across languages and implementations, and must be available on many different platforms. Further, to minimize the impact on the rest of the system, the simulation infrastructure must be able to seamlessly share data with operational vehicle applications. This is contrary to the way developers currently design systems where they typically integrate three distinct architectures—vetronics, C4ISR, and simulation. Each of these usually uses different middleware products for distributed communication. Vetronics applications use real-time operating environments, C4ISR applications use CORBA or a similar commercial middleware product, and embedded simulations use the A-kit/B-kit approach described in Section 1. One way to harmonize these three architectures is to use real-time CORBA, in conjunction with some of the related emerging specifications (real-time notification and data distribution), as the distributed communications middleware for all vetronics, C4ISR, and embedded simulation applications.
As shown in Figure 6, the proposed architecture uses real-time CORBA as the common infrastructure bridging the embedded simulation and vehicle applications. Vehicle applications use real-time CORBA for distributed communication, as does the HLA RTI. This would allow a more efficient and maintainable coupling between the operational vehicle software and the embedded simulation capabilities, as developers need only integrate and maintain one middleware product. For example, consider a vehicle with three major states as shown in Figure 7.
In this example, transitions to the operational or training states can only occur from the initialization state. This makes it easy to set up the required resources for a single state at the same time. An integrated CORBA/HLA-based architecture could consist of application objects inheriting from a CORBA object as shown in Figure 8.
Then, depending on the state, i.e., training or operational, a different implementation of the application object could be "bound" (resolved) at runtime to achieve either simulated or operational behavior as shown by the collaboration diagram in Figure 8. This approach provides a flexible, non-intrusive embedded simulation capability that is transparent to the vehicle applications. The actual vehicle software is literally unaware of whether or not the object being interacted with is "real" or simulated. The overall system is also more supportable and maintainable as both the embedded simulation capabilities and the remaining vehicle software share a common distribution infrastructure. Further, it is possible to specify the interface to the application objects in a way that hides the actual middleware (contains no visible references to CORBA), to preserve the investment in the application objects as both the middleware and simulation technologies evolve.
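To make the binding idea concrete, the following sketch models the state-dependent resolution in plain C++, independent of any particular ORB API; in a real system the implementation would be resolved through the ORB (for example, via a naming service) rather than a local factory. All names here (TurretInterface, bindForState, and the two implementations) are invented for illustration and are not taken from the paper or from any CORBA product.

```cpp
// Illustrative sketch only: state-dependent binding of an application
// object, with the client coding against an abstract interface.
#include <iostream>
#include <memory>

// Abstract application-object interface; the vehicle software codes
// against this and never learns which implementation it received.
class TurretInterface {
public:
    virtual ~TurretInterface() = default;
    virtual void slewTo(double azimuthDeg) = 0;
};

// Implementation used in the operational state: drives real hardware.
class OperationalTurret : public TurretInterface {
public:
    void slewTo(double azimuthDeg) override {
        std::cout << "commanding actual turret to " << azimuthDeg << " deg\n";
    }
};

// Implementation used in the training state: feeds the embedded simulation.
class SimulatedTurret : public TurretInterface {
public:
    void slewTo(double azimuthDeg) override {
        std::cout << "updating simulated turret to " << azimuthDeg << " deg\n";
    }
};

enum class VehicleState { Operational, Training };

// Stand-in for resolving the object at runtime during initialization.
std::unique_ptr<TurretInterface> bindForState(VehicleState s) {
    if (s == VehicleState::Training)
        return std::make_unique<SimulatedTurret>();
    return std::make_unique<OperationalTurret>();
}

int main() {
    // The same client code works unchanged in either state.
    auto turret = bindForState(VehicleState::Training);
    turret->slewTo(45.0);
}
```

The same pattern applies whether the resolution happens through a naming service, a trading service, or a local factory; only the initialization-state code needs to know which binding is in effect.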
5 Summary and Conclusions
This paper presents an open-standards-based architecture that seamlessly integrates embedded simulation capabilities into a real-time embedded ground vehicle system. Of the two standards proposed, only CORBA is mature enough for use in ground combat vehicles. HLA, though promising, still has drawbacks in the areas of real-time computing, interoperability, and platform availability. The future addition of CORBA-based HLA run-time infrastructures, however, can alleviate those shortcomings. Further work is needed to validate the architecture approach and ensure that both of these technologies can live in harmony within a single integrated platform. This validation can be accomplished in multiple stages, with the first stage consisting of CORBA-based vehicle software and a non-CORBA, workstation-based RTI implementation. This would both ensure that the architecture approach is sound and start to mature the application object interfaces. The full validation could occur as CORBA-based RTI implementations become available on multiple platforms.
6 References
Normative Requirements as Linked Data
Fabien GANDON\textsuperscript{a}, Guido GOVERNATORI\textsuperscript{b} and Serena VILLATA\textsuperscript{a}
\textsuperscript{a} Université Côte d’Azur, Inria, CNRS, I3S, France
\textsuperscript{b} Data61, CSIRO, Australia
Abstract. In this paper, we propose a proof of concept for the ontological representation of normative requirements as Linked Data on the Web. Starting from the LegalRuleML ontology, we present an extension of this ontology to model normative requirements and rules. Furthermore, we define an operational formalization of the deontic reasoning over these concepts on top of the Semantic Web languages.
Keywords. Linked data, Semantic Web, Deontic rules, Ontology
1. Introduction
The Linked Data principles [3] provide a standard approach to weave a Web of data, linking datasets across the world and virtually in any domain. The Semantic Web frameworks additionally provide standard means to publish data (RDF [4]), ontological knowledge (RDFS [5] and OWL [6] schemata), and to query and reason on them (SPARQL [7]). Despite existing approaches to model legal ontological knowledge [9,1,2], little work has been devoted to the definition of an end-to-end framework to represent, publish and query ontological knowledge from the legal domain using such standards. In this paper, we study how Semantic Web frameworks could apply to the formalization, publication and processing of legal knowledge, and in particular, normative requirements and rules.
A linked data based deontic representation and reasoning allow us to (a) rely on Web standard to represent, exchange and foster interoperability between deontic rule bases and reasoning systems, (b) rely on existing standards (e.g. SPARQL) and infrastructures (e.g. triple stores) to implement deontic systems and (c) combine linked data and semantic Web reasoning and formalisms (e.g. OWL) with deontic reasoning to support more inferences.
Our research question is: Can we represent and reason on the deontic aspects of normative rules with standard Semantic Web languages? We focus here on two sub-questions: For which aspects schema-based reasoning (RDFS, OWL) is relevant? and Can we operationally formalize other deontic reasoning rules with RDF and SPARQL?
We first survey the related work to show that current legal vocabularies on the Semantic Web do not provide the expressiveness we need (Section 2). Then we specify and formalize the ontology we require (Section 3). We describe how normative requirements can be represented as Linked Data (Section 4), and why states of affairs should be represented as RDF 1.1 named graphs (Section 5). Relying on this modeling, we show that some aspects of deontic reasoning cannot be covered by the OWL formalization while they can be captured with SPARQL rules (Section 6). We experiment with this approach in a proof of concept (Section 7) before concluding.
2. Related Work
We performed a search on LOV [8], a directory of Semantic Web vocabularies and schemata, to see how legal concepts are covered in published ontologies. Among the retrieved vocabularies, we identified that:
- the General Ontology for Linguistic Description (GOLD) includes a “Deontic Modality” concept but it is essentially defined from a linguistic point of view with the goal to perform natural language analysis.
- the Public Procurement Ontology (PPROC) has the notion of “Contract additional obligations” which is a class limited to describing the additional obligations a contract requires.
- the Open Standards for Linking Governments Ontology (OSLO) includes an upper class “permission”, but attached to the role of an individual in a society.
- the notions of rights, permissions and licenses are mentioned in schemata such as Dublin Core, Creative Commons or ODRL, but only to describe the possible uses of a digital resource, and they remain at a descriptive, non-formalized level.
Current ontologies are often limited to a specific domain of application and have very shallow coverage of deontic concepts. They are not designed with the goal to support deontic reasoning above Semantic Web frameworks. Their primitives are designed to annotate resources with the goal of documenting or supporting some degree of interoperability, but they are not intended to support Semantic Web based reasoning and processing of the normative requirements and rules. Closer to our goal is the LegalRuleML Meta Model [9] providing primitives for deontic rule and normative requirement representation (Permission, Obligation, Prohibition). We started from this model and extended it with a new ontology focusing on the deontic aspects, integrating notions from an existing abstract formal framework for normative requirements of regulatory compliance [10], and previous work on modal defeasible reasoning for deontic logic on the Semantic Web [11].
3. Ontological extension of the LegalRuleML Meta Model
In this section, we first describe the competency questions that motivate our extension of the LegalRuleML ontology, and then we detail the core concepts of our new legal ontology as well as their formalization in OWL.
---
*Keywords include: obligation, prohibition, permission, rights and licences.*
http://purl.org/linguistics/gold/DeonticModality
http://linguistics-ontology.org/gold/DeonticModality
http://purl.org/oslo/ns/localgov#Permission
http://dublincore.org/
http://creativecommons.org/ns
http://w3c.github.io/poe/vocab/
3.1. Motivating scenarios and competency questions
Among the many approaches to design an ontology [12], the writing of motivating scenarios is a very common initial step of the specification, used to capture problems that are not adequately addressed by existing ontologies [13]. The motivating scenario for us here is to support the annotation, detection and retrieval of normative requirements and rules. We want to support users in information retrieval with the ability to identify and reason on the different types of normative requirements and their statuses. This would be possible through ontology population approaches, but the lack of an existing ontology covering these aspects slows this process, as well as the further development of more advanced applications in legal computer science.
In a second step of ontology specification, a standard way to determine the scope of the ontology is to extract from the scenarios the questions and answers it should be able to support if it becomes part of a knowledge-based system. These so-called competency questions [13] place demands on the targeted ontology, and they provide expressiveness requirements. The competency questions we target for this ontology are:
- What are the instances of a given requirement and its sub-types, e.g. obligation?
- Is a requirement violated by one or more states of affairs, and if so, which ones? (see the query sketch after this list)
- Is a given description of rules and states of affairs coherent?
- Which rules, documents and states of affairs are linked to a requirement and how?
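As an illustration of the second question, a minimal SPARQL sketch is given below. It assumes the nrv: vocabulary defined in Section 3.2 and the named-graph modeling of states of affairs introduced in Section 5; it is an illustrative assumption, not part of the original specification.

```sparql
# Hypothetical sketch: list each violable requirement together with the
# states of affairs (named graphs) that violate it, assuming violations
# are asserted inside the corresponding named graph as in Section 6.
PREFIX nrv: <http://ns.inria.fr/nrv#>
SELECT ?requirement ?stateOfAffairs
WHERE {
  GRAPH ?stateOfAffairs { ?requirement nrv:hasViolation ?stateOfAffairs }
}
```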
3.2. Core primitives
To support the competency questions, and relying on definitions from LegalRuleML [9] and deontic reasoning [10,11], we identified a set of core primitives for an ontology capturing the different aspects of normative requirements and supporting the identification and classification tasks. We called that ontology the Normative Requirement Vocabulary (NRV), and made it available and dereferenceable following the Linked Data principles. The namespace is http://ns.inria.fr/nrv# with the preferred prefix nrv, submitted both to LOV [8] and to http://prefix.cc.
The top class of the ontology is the \texttt{Normative Requirement} which is defined as the set of the requirements implying, creating, or prescribing a norm. Then we have a number of upper classes to capture different features of the requirements:
- Compensable Requirement, Non Compensable Requirement, Compensated Requirement are classes of requirements with different compensation statuses.
- the classes Violable requirement, Non Violable Requirement, Violated Requirement and Compliant Requirement characterize the requirements with respect to their relation to a Compliance or a Violation.
- the other classes follow the same logic, and they distinguish requirements with respect to their perdurance, persistence, co-occurrence and preemptiveness.
Using these upper classes, we positioned and extended three primitives from the LegalRuleML Meta Model (i.e., Prohibition, Permission, Obligation), each one inheriting from the appropriate super classes we introduced. For instance, Permission inherits from Non Violable Requirement and Non Compensable Requirement, while Obligation inherits from Violable Requirement and Compensable Requirement. Specializations of these classes are then used to introduce the notions of Achievement, Maintenance and
Punctual. For the complete list of classes and their definitions, we refer the reader to the online documentation available at the namespace URL. These primitives and definitions provide the taxonomic skeleton of our NRV ontology.
3.3. Formalization
In this section, we provide some formalization details (ontological commitment) and their translation into OWL (computational commitment). We will use the TriG syntax [14] for RDF, and the prefixes we use in the rest of this article are:
```
lrmlmm: http://docs.oasis-open.org/legalruleml/ns/v1.0/metamodel#
owl: http://www.w3.org/2002/07/owl#
rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs: http://www.w3.org/2000/01/rdf-schema#
rulemm: http://docs.oasis-open.org/legalruleml/ns/v1.0/rule-metamodel#
xsd: http://www.w3.org/2001/XMLSchema#
nrv: http://ns.inria.fr/nrv#
nru: http://ns.inria.fr/nrv-inst#
```
We captured the disjointedness expressed in the upper classes representing exclusive characteristics of normative requirements (compensable / non-compensable, violable / non-violable, persistent / non persistent):
```
:NormativeRequirement a rdfs:Class ;
owl:disjointUnionOf ( :CompensableRequirement :NonCompensableRequirement ) ;
owl:disjointUnionOf ( :ViolableRequirement :NonViolableRequirement ) ;
owl:disjointUnionOf ( :PersistentRequirement :NonPersistentRequirement ) .
```
We initially considered the disjointedness of a compliant requirement and a violated requirement; however, this disjointedness is not global but local to a state of affairs and therefore does not translate to a general disjointedness of classes, i.e., a requirement may be violated by one state of affairs but compliant with another one at the same time. However, this led us to capture the issue as a property disjointedness, since a requirement cannot both be violated by and be compliant with the same state of affairs at the same time:
```
:hasCompliance a owl:ObjectProperty ;
rdfs:domain :ViolableRequirement ;
rdfs:range lrmlmm:Compliance ;
owl:propertyDisjointWith :hasViolation .
```
Obligations are an example of a non-disjoint union between achievements and maintenances, since a punctual requirement is both an achievement and a maintenance:
```
lrmlmm:Obligation a rdfs:Class ;
rdfs:subClassOf :ViolableRequirement ;
rdfs:subClassOf :CompensableRequirement ;
owl:unionOf ( :Achievement :Maintenance ) .
```
```
:Achievement a rdfs:Class ;
rdfs:subClassOf :ViolableRequirement ;
rdfs:subClassOf :CompensableRequirement ;
owl:unionOf ( :PreemptiveAchievement :NonPreemptiveAchievement ) ;
owl:disjointUnionOf ( :PerdurantAchievement :NonPerdurantAchievement ) ;
rdfs:subClassOf lrmlmm:Obligation .
```
```
:Maintenance a rdfs:Class ;
rdfs:subClassOf :ViolableRequirement ;
rdfs:subClassOf :CompensableRequirement ;
rdfs:subClassOf lrmlmm:Obligation .
```
Figure 1. Overview of the NRV ontology and its core primitives
Violated and compensated requirements could be defined with restrictions on the properties hasViolation and hasCompensation:
:ViolatedRequirement a rdfs:Class ;
rdfs:subClassOf :ViolableRequirement ;
owl:equivalentClass [ a owl:Restriction ;
owl:onProperty :hasViolation ;
owl:minCardinality 1 ] .
:CompensatedRequirement a rdfs:Class ;
rdfs:subClassOf :CompensableRequirement ;
owl:equivalentClass [ a owl:Restriction ;
owl:onProperty :hasCompensation ;
owl:minCardinality 1 ] .
We could now be tempted to define a compliant requirement with the following restrictions:
1 :CompliantRequirement a rdfs:Class ; rdfs:label "compliant requirement"@en ;
2 rdfs:subClassOf :ViolableRequirement ;
3 owl:equivalentClass [ a owl:Restriction ;
4 owl:onProperty :hasCompliance ;
5 owl:minCardinality 1 ] .
6 owl:equivalentClass [ a owl:Restriction ;
7 owl:onProperty :hasViolation ;
8 owl:maxCardinality 0 ] .
However we removed the second part (lines 6-8) of the restriction since it re-introduces a disjunction between the compliant and violated requirement classes. The notions of compliance and violation are not generally disjoint but only disjoint locally to a state of affairs, i.e., a normative requirement can be violated and compliant at the same time but with respect to different states of affairs. However, OWL definitions cannot rely on RDF 1.1 named graphs, which we will use for representing states of affairs. Therefore we will need another mechanism to capture this kind of constraint.
Because we used disjoint unions, the ontology is in OWL DL, i.e., $\mathcal{SHOIN}^{(\mathcal{D})}$, more precisely in the $\mathcal{ALUCHRN}$ family: $\mathcal{AL}$ attributive language, $\mathcal{U}$ concept union, $\mathcal{C}$ complex concept negation, $\mathcal{H}$ role hierarchy, $\mathcal{R}$ limited complex role inclusion axioms, reflexivity, irreflexivity, role disjointedness, and $\mathcal{N}$ cardinality restrictions.
We decided to declare the signature of properties (e.g., hasViolation, hasCompensation) at the ability level (e.g., violable requirement, compensable requirement), and not at the effective status level (e.g., violated requirement, compensated requirement) because each status will be local to a state of affairs. Therefore, in the end, we avoided too strong restrictions and signatures. If we remove cardinality restrictions, unions and disjointedness, the ontology becomes compatible with OWL EL and OWL RL which could be interesting for implementations relying on rule-based systems, especially when we consider the extensions proposed in the following sections.
4. Requirements as Linked Data
Using the LegalRuleML Meta Model and the NRV ontology we can now start to represent normative requirements as Linked Data. Let us introduce two examples. The first one is a rule stating that according to Australian law one cannot drive over 90km/h:
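A minimal TriG sketch of such a prescriptive statement, modeled on the badge example below and using the instance name nru:PS1 that the rules of Section 6 refer to, could read as follows (the source URL and labels are illustrative assumptions):

```
<http://australia.gov.au/speed-rule> a lrmlmm:Source ;
    rdfs:label "speed limitation in Australian law"@en .
nru:PS1 a lrmlmm:PrescriptiveStatement, lrmlmm:Prohibition ;
    rdfs:label "you must not drive over 90km/h"@en .
```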
5. States of affairs as named graphs
The ability to define contexts and group assertions was one of the main motivations for having named graphs in RDF 1.1 [15]. The notion of state of affairs at the core of deontic reasoning is naturally captured by named graphs where all the statements of each state of affairs are encapsulated as RDF triples in a named graph, identifying that precise state of affairs. We provide here four examples of states of affairs respecting (2 and 3) or breaking (1 and 4) the rules of the normative statements described above. The core idea is to represent each state of affairs as a named graph typed as a factual statement of LegalRuleML.
```
:StateOfAffairs1 a lrmlmm:FactualStatement .
GRAPH :StateOfAffairs1 { :Tom rdfs:label "Tom" ;
        :activity [ a :Driving ;
                    :speed "100"^^xsd:integer ;
                    rdfs:label "driving at 100km/h"@en ] . }
:StateOfAffairs2 a lrmlmm:FactualStatement .
GRAPH :StateOfAffairs2 { :Jim :activity [ a :Driving ;
                    :speed "90"^^xsd:integer ;
                    rdfs:label "driving at 90km/h"@en ] . }
:StateOfAffairs3 a lrmlmm:FactualStatement .
GRAPH :StateOfAffairs3 { :Jane rdfs:label "Jane" ;
        :activity [ a :Driving ;
                    :speed "90"^^xsd:integer ;
                    rdfs:label "driving at 90km/h"@en ] . }
:StateOfAffairs4 a lrmlmm:FactualStatement .
GRAPH :StateOfAffairs4 { :Jane :activity [ a :Driving ;
                    :speed "100"^^xsd:integer ;
                    rdfs:label "driving at 100km/h"@en ] . }
```
This second example is a rule stating that employees of CSIRO must wear their badges:
```
<http://csiro.au/security-rule> a lrmlmm:Source ;
    rdfs:label "security rules in CSIRO"@en .
nru:SS2 a lrmlmm:Sources .
nru:LRD2 a lrmlmm:LegalRuleMLDocument ;
    lrmlmm:hasLegalSources nru:SS2 ;
    lrmlmm:hasAlternatives [ lrmlmm:fromLegalSources nru:SS2 ;
                             lrmlmm:hasAlternative nru:PS2 ] ;
    lrmlmm:hasStatements nru:SS2 .
nru:PS2 a lrmlmm:PrescriptiveStatement, lrmlmm:Obligation ;
    rdfs:label "you must wear your badge inside CSIRO facilities"@en .
```
6. Deontic reasoning as SPARQL rules
Since the notion of named graph, which appeared with RDF 1.1 (2014, [4]), is absent from OWL 2 (2012, [6]) and its constructors, we need to implement the reasoning on states of affairs by other means. The SPARQL language is both a standard and a language able to manipulate named graphs, so we propose to use SPARQL rules. In this section, we explore the coupling of OWL reasoning with SPARQL rules to formalize and implement some deontic reasoning. Description Logics (DL) support reasoning on the description of concepts and properties of a domain (terminological knowledge or T-Box) and of their instances (assertional knowledge or A-Box). They are the basis of the Web Ontology Language (OWL). The classical inferences supported by DL are instance checking, relation checking, subsumption checking, and consistency checking [16]. While these inferences are useful to reason about deontic knowledge (e.g., a compensable requirement must also be a violable requirement), they do not cover all the inferences we want to support here, in particular deontic rules (e.g., a requirement is violated by a state of affairs if, during a specific period of time, a given constraint does not hold). These rules rely on complex pattern matching including, for instance, temporal interval comparisons that go beyond OWL expressiveness. As a proof of concept, the following rules check the violation or compliance of the statements made by the previous states of affairs. The core idea is to add to each named graph of each state of affairs the deontic conclusions of the legal rules relevant to it. By relevant we mean here that the state of affairs describes a situation that falls under the application conditions of that legal rule. The following rules update compliance and violation for the driving speed requirement:
```
DELETE { graph ?g { nru:PS1 nrv:hasCompliance ?g } }
INSERT { graph ?g { nru:PS1 a nrv:ViolatedRequirement ; nrv:hasViolation ?g } }
WHERE  { graph ?g { ?a a :Driving ; :speed ?s . }
         FILTER (?s > 90) }

DELETE { graph ?g { nru:PS1 a nrv:ViolatedRequirement ; nrv:hasViolation ?g } }
INSERT { graph ?g { nru:PS1 nrv:hasCompliance ?g } }
WHERE  { graph ?g { ?a a :Driving ; :speed ?s . }
         FILTER (?s <= 90) }
```

The following rule updates the violation status for the CSIRO badge requirement; its WHERE clause matches the location of a person over a time interval:

```
INSERT { graph ?g { nru:PS2 a nrv:ViolatedRequirement ; nrv:hasViolation ?g } }
WHERE  { graph ?g { ?x :location [ rdf:value ?o ; :start ?ls ; :end ?le ] } }
```
The following rules update compliance for the states of affairs after violations were checked:

```
INSERT { graph ?g { ?n a nrv:CompliantRequirement } }
WHERE  { ?g a lrmlmm:FactualStatement .
         ?n a nrv:ViolableRequirement .
         graph ?g { ?n nrv:hasCompliance ?g }
         MINUS { graph ?g { ?n nrv:hasViolation ?g } } }

DELETE { graph ?g { ?n a nrv:CompliantRequirement } }
WHERE  { ?g a lrmlmm:FactualStatement .
         ?n a nrv:ViolableRequirement .
         graph ?g { ?n nrv:hasViolation ?g } }
```
7. Proof of concept and experimentation
To validate and experiment with the ontology, the Linked Data and the rules, we used two established tools:
- the latest version of the Protégé platform [17] and the reasoners it includes were used to check the NRV OWL ontology, which was found coherent and consistent.
- the latest version of CORESE [18] was used to load the LegalRuleML and NRV ontologies, the Linked Data about the rules and the states of affairs, and the SPARQL rules to draw the conclusions, as shown in Figure 2 for the first two states of affairs concerning speed limitation.
Figure 2. Extract of the quadruples (N-Quads) produced by CORESE after all the reasoning on the first two states of affairs concerning speed limitation, showing one violated state (white background) and one compliant one (blue background). The columns indicate the named graph of the state of affairs (?g), the subjects (?x), the predicates (?p), and the objects (?v) of the triples in this named graph.
8. Conclusions
In this paper, we addressed the fact that current vocabularies on the Semantic Web do not provide the expressiveness we need to support deontic reasoning on normative requirements and rules. As a contribution, we specified and formalized an ontology extending LegalRuleML, and we showed how it can be used to represent normative requirements as Linked Data, with states of affairs represented as RDF 1.1 named graphs. Relying on this modeling, we proposed an approach based on SPARQL rules to cover some of the deontic aspects outside the expressiveness of OWL 2, and we experimented with this approach in a proof of concept based on two established tools of the Semantic Web community. Future work includes extensive population and testing of the ontology on larger datasets and cases. In particular, we intend to go beyond the proof of concept by evaluating this end-to-end approach based on the Semantic Web languages in a business process compliance checking scenario [10]. As pointed out by one of our reviewers, extensions of this work also include the possibility to represent differentiated classes of validity that would correspond to the actual structure of our legal system, and non-binary modes that would be fit to process the proportionality of legal principles. The introduction of a complete rule-based system is part of our future directions as well.
References
[15] F. Gandon, O. Corby, Name That Graph or the need to provide a model and syntax extension to specify the provenance of RDF graphs., W3C Workshop — RDF Next Steps, Jun 2010, Palo Alto, United States.
Role-Based Access Control Approaches in MongoDB 2.4 and Informix Online Dynamic Server Version 7.2
Abubakar Sulaiman Gezawa¹, Ahmed Aliyu², Tong Yujun³, Saifullahi Aminu Bello⁴, Abubakar Ado⁵
System Analyst, MIS Department, Bayero University Kano, Kano, Nigeria¹
Lecturer, Department of Computer Science, Bauchi State University, Bauchi, Nigeria²
Professor, Electrical/Electronics, Liaoning University of Technology, Jinzhou, Liaoning Province, China³
Lecturer, Department of Computer science, Kano University of Science and Technology, Kano, Nigeria⁴
Lecturer, Department of Computer Science, Northwest University, Kano, Nigeria⁵
Abstract: This paper compares and contrasts role-based access control (RBAC) approaches in MongoDB 2.4 and Informix Online Dynamic Server Version 7.2. We categorize RBAC features under two major areas: user role assignment and assignable privileges. Many commercially successful access control systems for mainframes implement roles for security administration; for example, an operator role could access all resources but not change access permissions, while a security officer role could change access permissions but not access the resources themselves. Our finding is that these products provide a sound basis for implementing the basic features of RBAC, although there are significant differences. In particular, Informix restricts users to a single active role at any time, while MongoDB allows multiple roles to be activated simultaneously as per the user's selection.
Keywords: RBAC; features; privileges; Approaches; Dynamic; Permission
I. INTRODUCTION
In computer systems security, role-based access control (RBAC) [1][2] is an approach to restricting system access to authorized users. It is used by the majority of enterprises with more than 500 employees [3] and can implement mandatory access control (MAC) or discretionary access control (DAC). RBAC is sometimes referred to as role-based security. In RBAC, permissions are associated with roles, and users are made members of appropriate roles, thereby acquiring the roles' permissions. This greatly simplifies management of permissions. Roles are created for the various job functions in an organization, and users are assigned roles based on their responsibilities and qualifications.
In a DBMS, the RBAC model forms an integral component of the access control mechanism, and hence the RBAC model data is used for enforcing access control on the various resources (database objects) under the control of the DBMS product [4]. An application system developed using a DBMS can contain a large amount of data with highly differentiated access permissions for different users depending upon their function or role within the organization. Hence database management is a key area that needs mechanisms for the management of authorizations or privileges.
In this paper, we analyze, compare, and contrast the RBAC approaches implemented in MongoDB 2.4 and Informix Online Dynamic Server Version 7.2. The RBAC features that are supported have been categorized under two broad areas:
- User role assignment
- Assignable privileges
In comparing the features of these complex commercial software packages, it is not always possible to readily obtain the total set of all supported features from product manuals alone. However, it is possible to extract and compare the major differences in features by using the multiple manuals that come with each product; this is the approach that has been adopted in this paper.
The paper is organized as follows. Section II gives a brief discussion of related work by other scholars on RBAC approaches in some commercial databases, Sections III and IV describe the role and privilege assignment features of the two databases, Section V presents the results of the discussion, and finally Section VI presents the conclusion.
II. RELATED WORK
Phillippe Balbiani [6] proposed an access control language in which RBAC and the following extensions, namely delegation, separation of duty and history-based access control, can be encoded. In contrast with Cassandra, they have not promoted the role management mechanism to first-class citizenship, and have based their model on the assumption that an access control system can be separated into a dynamic part that evolves according to actions performed by users and a static part. Ravi S. Sandhu [7] introduces a family of reference models for role-based access control (RBAC) in which permissions are associated with roles, and users are made members of appropriate roles. This greatly simplifies management of permissions. Roles are closely related to the concept of user groups in access control. However, a role brings together a set of users on one side and a set of permissions on the other, whereas user groups are typically defined as a set of users only [5].
III. INFORMIX ONLINE DYNAMIC SERVER VERSION 7.2
3.1 USER ROLE ASSIGNMENT
A role can be granted to a single user, a role, a list of users, or (by using the keyword PUBLIC) all users [5]. A user can be granted more than one role. Users who have been granted a role with the GRANT OPTION can further grant that role, or delete it by using the DROP ROLE command.
A user can have only one role active at any point in time. Initially all users are assigned the role NULL or NONE by default when they sign on to a database. The user can enable an authorized role by means of the SET ROLE statement. The SET ROLE statement allows for specifying only one role, so the user can enable only one role at a time. Moreover, if a user executes the SET ROLE statement after a role is already set, the new role replaces the old role. This implies that a user can be active in one and only one role at any moment. Informix provides no feature to specify a default active role, different from NULL or NONE, for a user.
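A minimal SQL sketch of this workflow is shown below; the role, table, and user names (payroll_role, salary, alice) are invented for illustration and are not taken from the product manuals:

```sql
-- Hypothetical sketch of the Informix role workflow described above.
CREATE ROLE payroll_role;

-- Table-level privileges may be granted to a role (see Section 3.3).
GRANT SELECT, UPDATE ON salary TO payroll_role;

-- The role is granted to a user; adding WITH GRANT OPTION would let
-- alice grant payroll_role to other users in turn.
GRANT payroll_role TO alice;

-- At sign-on the user's role is NULL/NONE; alice must activate the
-- role explicitly, and only one role can be active at a time.
SET ROLE payroll_role;
```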
3.2 Support for role relationships and constraints
As already stated, users who have been granted a role with GRANT OPTION as well as DBAs can grant a role to another role. This feature enables building nested roles, so it is possible to build a role hierarchy.
Informix has no features to specify mutually exclusive roles, that is, sets of roles that cannot be granted to the same user. Hence it does not support static separation of duty. There is also no support for cardinality constraints to restrict the maximum or minimum number of users that can be authorized for a role. Informix does, in a sense, support dynamic separation of duty, that is, the specification of roles that cannot be simultaneously activated. However, this is more a side effect of the fact that only one role can be activated at a time than an independent feature in its own right.
3.3 Assignable privileges
Informix divides the universe of all privileges that can be assigned into three categories: database-level privileges, table-level privileges and execute privilege.
Database-level privileges refer to privileges needed to connect to a database, add new objects, and perform administrative functions like security management (including transfer of object ownership) and space management. They include the CONNECT privilege (the ability to establish the user context to a database schema so that the user can query and modify the objects in the schema, depending upon the permissions and ownerships), the RESOURCE privilege (the ability to create new objects in a database schema, such as tables, indexes and procedures) and the DBA privilege (the ability to grant privileges to another user or role; create new objects under a designated ownership, the default owner of a database object being the one who created it; update rows of system catalog tables; and control the growth of physical spaces by altering extent sizes) [6].
Table-level privileges refer to privileges that can be granted on a base table [6]. They include INSERT, DELETE and ALTER that are applicable for the table as a whole, SELECT and UPDATE privileges that can be selectively applied on one or more columns of a table, as well as REFERENCES (ability to reference one or more columns in referential constraints) and INDEX (ability to create permanent indexes). Privileges that can be granted on a view are SELECT, INSERT, DELETE and UPDATE. The last three privileges are only applicable if the view meets all the requirements for updating (updatable view). ALTER, REFERENCES and INDEX privileges cannot be granted on a view.
The EXECUTE privilege is applicable only for database stored procedures. It is a single privilege representing the ability to execute the stored procedure. Informix allows only the Table-level privileges and the EXECUTE privilege to be granted to roles. Database-level privileges cannot be granted to roles.
The DBA and the owner of a database object can grant privileges to a role and can revoke those privileges later on. Informix has an AS GRANTOR clause in the statement that grants privileges to roles. Using this, it is possible to designate someone else as the grantor of the specified privilege to a role. However, the person who originally executed the grant statement with the AS GRANTOR option can no longer revoke that privilege from the role. It is also interesting to note that a user who has been granted a role WITH GRANT OPTION can also revoke privileges from a role.
IV. MONGODB 2.4
4.1 USER ROLE ASSIGNMENT
Roles: Roles in MongoDB provide users with a set of specific privileges on specific logical databases. Users may have multiple roles and may have different roles on different logical databases. MongoDB provides built-in roles, each with a dedicated purpose for a common use case.
The major roles in MongoDB are [5]:
- read
- readWrite
- dbAdmin
- userAdmin
- clusterAdmin
- readAnyDatabase
- readWriteAnyDatabase
- userAdminAnyDatabase
- dbAdminAnyDatabase
4.2 Role Assignment to Users
User administrators create the users that access the system's databases. User management commands let administrators create users and assign them roles. The first role assigned in a database should be either userAdmin or userAdminAnyDatabase. This user can then create all other users in the system. Table 1 below shows the user management commands according to [5].
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>createUser</td>
<td>Creates a new user.</td>
</tr>
<tr>
<td>updateUser</td>
<td>Updates a user’s data.</td>
</tr>
<tr>
<td>dropUser</td>
<td>Removes a single user.</td>
</tr>
<tr>
<td>dropAllUsersFromDatabase</td>
<td>Deletes all users associated with a database.</td>
</tr>
<tr>
<td>grantRolesToUser</td>
<td>Grants a role and its privileges to a user.</td>
</tr>
<tr>
<td>revokeRolesFromUser</td>
<td>Removes a role from a user.</td>
</tr>
<tr>
<td>usersInfo</td>
<td>Returns information about the specified users.</td>
</tr>
</tbody>
</table>
Table 1: User Management Commands according to [5].
Table 2 below shows the role management commands according to [5].
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>createRole</td>
<td>Creates a role and specifies its privileges.</td>
</tr>
<tr>
<td>updateRole</td>
<td>Updates a user-defined role.</td>
</tr>
<tr>
<td>dropRole</td>
<td>Deletes the user-defined role.</td>
</tr>
<tr>
<td>dropAllRolesFromData</td>
<td>Deletes all user-defined roles from a database.</td>
</tr>
<tr>
<td>grantPrivilegesToRole</td>
<td>Assigns privileges to a user-defined role.</td>
</tr>
<tr>
<td>revokePrivilegesFromRole</td>
<td>Removes the specified privileges from a user-defined role.</td>
</tr>
<tr>
<td>grantRolesToRole</td>
<td>Specifies roles from which a user-defined role inherits privileges.</td>
</tr>
<tr>
<td>revokeRolesFromRole</td>
<td>Removes specified inherited roles from a user-defined role.</td>
</tr>
<tr>
<td>rolesInfo</td>
<td>Returns information for the specified role or roles.</td>
</tr>
</tbody>
</table>
The following example creates the user `recordsUserAdmin` on the records database [5]:
```javascript
db.createUser(
{
user: "recordsUserAdmin",
pwd: "password",
roles: [
{
role: "userAdmin",
db: "records"
}
]
}
)
```
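Granting an additional role to an existing user uses the same command family. The following sketch continues the records example above and assumes the db.grantRolesToUser shell helper that wraps the grantRolesToUser command listed in Table 1; the role choice is illustrative:

```javascript
// Illustrative sketch: give the existing user an additional role on the
// records database, on top of the userAdmin role created above.
db.grantRolesToUser(
  "recordsUserAdmin",
  [
    { role: "readWrite", db: "records" }
  ]
)
```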
4.3 Assignable privileges
A privilege consists of a specified resource and the actions permitted on the resource. A privilege `resource` is a database, a collection, a set of collections, or the cluster. If the resource is the cluster, the affiliated actions affect the state of the system rather than a specific database or collection.
An `action` is a command or method the user is allowed to perform on the resource. A resource can have multiple allowed actions. For example, a privilege that includes the update action allows a user to modify existing documents on the resource. To additionally grant the user permission to create documents on the resource, the administrator would add the insert action to the privilege. A role can include one or more existing roles in its definition, in which case the role inherits all the privileges of the included roles. A role can inherit privileges from other roles in its database. A role created on the admin database can inherit privileges from roles in any database.
The following is a sample document for a user-defined role `appUser` defined for the `myApp` database [5]:
```javascript
{
  _id: "myApp.appUser",
  role: "appUser",
  db: "myApp",
  privileges: [
    { resource: { db: "myApp", collection: "" },
      actions: ["find", "createCollection", "dbStats", "collStats"] }
    // The full example also contains privileges on the logs and data
    // collections and on two system collections of myApp (see the list
    // below); those entries are not reproduced here.
  ],
  roles: []
}
```
The privileges array lists the five privileges that the appUser role specifies:
- The first privilege permits its actions ("find", "createCollection", "dbStats", "collStats") on all the collections in the myApp database excluding its system collections.
- The next two privileges permit additional actions on specific collections, logs and data, in the myApp database.
- The last two privileges permit actions on two system collections in the myApp database. While the first privilege gives database-wide permission for the find action, the action does not apply to myApp’s system collections. To give access to a system collection, a privilege must explicitly specify the collection.
As indicated by the empty roles array, appUser inherits no additional privileges from other roles.
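Such a role would be created with the createRole command listed in Table 2; the following sketch assumes its shell helper form, with the role definition taken from the sample document above:

```javascript
// Illustrative sketch: create the appUser role on the myApp database
// using the createRole command from Table 2 (shell helper form assumed).
use myApp
db.createRole(
  {
    role: "appUser",
    privileges: [
      { resource: { db: "myApp", collection: "" },
        actions: ["find", "createCollection", "dbStats", "collStats"] }
    ],
    roles: []
  }
)
```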
V. RESULTS AND DISCUSSION
A summary of the role-based access control features that are or are not supported in the two DBMS products studied in this paper is given in Table 3 below.
<table>
<thead>
<tr>
<th>Item</th>
<th>Feature</th>
<th>Informix</th>
<th>Mongodb</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Ability for a role grantee to grant that role to other users</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>2</td>
<td>Build a role hierarchy</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>3</td>
<td>Multiple active roles for a user session</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>4</td>
<td>Grant DBMS System Privileges to a Role</td>
<td>No</td>
<td>Yes</td>
</tr>
</tbody>
</table>
Table 3: A summary of role-based access control features that are supported or not supported in the two DBMS products.
Features 1 and 3 pertain to user role assignment.
Feature 2 pertains to support for role relationships and constraints.
Feature 4 pertains to assignable privileges.
Object privileges allow users to perform a particular action on a specific table, view, sequence, or stored procedure. They include the SELECT, UPDATE, INSERT, and DELETE operations on tables and views; the ALTER and CREATE INDEX operations on tables alone; and the EXECUTE operation on procedures and functions.
Both categories of privileges can be granted to roles. System Privileges can only be granted by the DBA or by a user who has been granted that privilege with the ADMIN OPTION. Object Privileges can only be granted to roles either by owner of the object or by a user who has been granted that privilege with the GRANT OPTION.
VI. CONCLUSION
In the area of user role assignment, we found that in both MongoDB 2.4 and Informix Online Dynamic Server Version 7.2, the task of assigning roles to users can be implemented as a discretionary access control mechanism by enabling the role grantee to grant that role to other users. While MongoDB provides for multiple roles to be activated in a user session, Informix has provision for only one active role. Since Informix does support role hierarchies, it can be argued that by suitable definition of roles and by use of role-to-role assignments this limitation can be overcome. However, this would require anticipating the role combinations that users would like to activate and defining a senior role in the hierarchy that combines them together in one.
In summary, we found that MongoDB provides more features than Informix in the areas of user role assignment and assignable privileges. Overall, our conclusion is that these products provide a sound basis for implementing the basic features of RBAC, although there are significant differences.
REFERENCES
BIOGRAPHY
Abubakar Sulaiman Gezawa: is a system analyst at the Department of Management Information Systems (MIS), Bayero University, Kano. He is currently a master's student in Computer Science and Technology at Liaoning University of Technology, Jinzhou, Liaoning Province, China. His areas of interest are knowledge discovery and intelligent decision support systems. He has published a paper titled "A B-S Model for Online Integrated Information System for Bayero University Kano, Nigeria".
Ahmed Aliyu: is working with the Department of Computer Science at Bauchi State University, Bauchi, Nigeria. He obtained his B.Tech in Computer Science from Abubakar Tafawa Balewa University, Bauchi, Nigeria. He is currently a master's student at Liaoning University of Technology, Jinzhou, China. His areas of interest are computer network security and information systems. He has published a paper titled "An Integrated Framework for Detecting and Prevention of Trojan Horse (BINGHE) in a Client-Server Network".
Tong Yujun: is a lecturer at the School of Electrical/Electronic Engineering, Liaoning University of Technology, and is currently an Associate Professor. His research areas include Web data mining, distributed databases, and software engineering.
Saifullahi Aminu Bello: is working with the Department of Computer Science, Kano University of Science and Technology, Kano, Nigeria. He obtained his B.Sc in Computer Science from the same university and currently studies for an M.Sc in Computer Science and Technology at Liaoning University of Technology, Jinzhou, China. His areas of interest are data mining and distributed databases. He received an award as the best graduating student in B.Sc Computer Science 2008/2009 from Kano University of Science and Technology, Nigeria.
Abubakar Ado: is working with the Department of Computer Science, Northwest University, Kano, Nigeria. He obtained his B.Tech in Computer Science from Abubakar Tafawa Balewa University, Bauchi, Nigeria. His areas of interest are databases and information systems. He has published a paper titled "Building a Diabetes Data Warehouse to Support Decision Making in the Healthcare Industry".
Two Semantics for Step-Parallel Planning: Which One to Choose?
Tomáš Balyo¹, Daniel Toropila¹,² and Roman Barták¹
¹ Faculty of Mathematics and Physics, Charles University
Malostranské nám. 2/25, 118 00 Prague, Czech Republic
{tomas.balyo,daniel.toropila,roman.bartak}@mff.cuni.cz
² Computer Science Center, Charles University
Ovocný trh 5, 116 36 Prague, Czech Republic
Abstract. Parallel planning is a paradigm that provides interesting efficiency improvements in the field of classical AI planning and it is one of the key components of successful SAT-based planners. Popularized by the Graphplan algorithm, it provides more structural information about the plan for a plan executor than a traditional sequential plan, which is crucial for its usability in real-life scenarios and applications. Our latest research shows that different semantics can be used for parallel planning, and deciding which semantics to use for a given application can have significant influence both on practical planning domain modeling and on the computational efficiency of searching for a plan.
Keywords: automated planning, parallel semantics, SAT, domain modeling
1 Introduction
Classical AI planning deals with finding a sequence of actions that transfers the world from some initial state to a desired state. We assume a world which is fully observable (we know precisely the state of the world), deterministic (the state after performing an action is known), and static (only the entity for which we plan changes the world), with a finite (though possibly large) number of states. We also assume actions to be instantaneous, so we only deal with action sequencing. Actions are usually described by a set of preconditions – features that must hold in a state to make the action applicable to that state – and a set of effects – changes that the action makes to the state. Action sequencing is naturally restricted by causal relations between the actions, i.e., the effect of one action provides a precondition of another action.
Traditional sequential planning algorithms directly explore sequences of actions. One of the disadvantages of this approach is its susceptibility to exploring symmetric plans, where some actions can be swapped without changing the overall effect. Hence, if some sequence of actions does not lead to a goal, the algorithm may explore a similar sequence where certain actions are swapped, even though it leads to exactly the same non-goal state. This is called plan-permutation symmetry [1]. It is possible to remove some of these symmetries by symmetry-breaking constraints, as suggested in [2] or [3]. Another way to resolve this problem is partial-order planning, where the plans are kept as partially ordered sets of actions (the partial order respects the causal relations). The CPT planner [4] is probably the most successful (in terms of the International Planning Competition) constraint-based planner that does partial-order planning. Halfway between partial-order and sequential planning is parallel planning, where the plan is represented as a sequence of sets of actions such that any ordering of actions within the sets gives a traditional sequential plan. This concept was popularised by the Graphplan algorithm [5], which introduced a so-called planning graph to efficiently represent causal relations between the actions. The planning graph became a popular representation of parallel plans for approaches that translate the planning problem to other formalisms such as Boolean satisfiability or constraint satisfaction [6] [7].
Besides the elegant handling of the symmetries, another advantage of parallel planning is its strong usability for actual plan execution, as the concept of a parallel plan is much closer to real-life scenarios. As an example we can imagine a transportation planning domain, such as logistics [8], where the available actions represent mainly movements of vehicles and their loading/unloading. Obviously, it is useful for a plan executor to know that loading of one vehicle can be realized in parallel with sending another vehicle from one location to another. Other useful examples can be devised very easily.
Even though the concept of parallel planning is straightforward, our latest research shows that multiple useful semantics can be introduced for parallel planning by modifying the constraints specifying
which actions can appear together in a single (parallel) step of a plan. In this paper we study the differences between the two main semantic concepts, providing answers to two main questions: first, we investigate the influence of these differences on practical domain modeling; second, and as our main focus, we explore the impact on the performance of searching for a plan.
The paper is organized as follows. First we provide the necessary theoretical concepts together with the description of the two different parallel plan semantics we studied. The following sections then discuss the impact of the differences between the two semantics on practical usability for planning domain modeling and on computational efficiency. We conclude by providing the experimental evaluation and a final discussion of our results.
2 SAS+ Planning
We use the SAS+ formalism to formalize the planning problem. This formalism is based on so-called multi-valued state variables, as mentioned in [9] or [10]. For each feature of the world there is a variable describing this feature, for example the position of a robot. A world state is then specified by the values of all state variables at the given state. Hence the evolution of the world can be described as a set of state-variable functions, where each function specifies the evolution of values of a certain state variable. Actions are then the entities changing the values of state variables. Each action a is associated with a set of transitions M(a), where each transition δ ∈ T(x) of a state variable x requires and/or changes the value of x; the preconditions and effects of the action are expressed through these transitions.
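To make the formalism concrete, here is a minimal Python sketch of SAS+ states, transitions and actions; all class and function names are our own illustrative choices, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Transition:
    """A change of one multi-valued state variable from `pre` to `post`."""
    var: str
    pre: Optional[str]  # required value before the action (None if unconstrained)
    post: str           # value after the action

@dataclass(frozen=True)
class Action:
    name: str
    transitions: tuple  # the set M(a) of transitions associated with action a

def applicable(state: dict, action: Action) -> bool:
    """Check the preconditions of an action in a given world state."""
    return all(t.pre is None or state[t.var] == t.pre for t in action.transitions)

def apply(state: dict, action: Action) -> dict:
    """Apply an applicable action: each transition sets its variable's value."""
    new_state = dict(state)
    for t in action.transitions:
        new_state[t.var] = t.post
    return new_state

# Example: a robot moving from location a to location b.
move = Action("move-a-b", (Transition("robot-pos", "a", "b"),))
print(apply({"robot-pos": "a"}, move))  # {'robot-pos': 'b'}
```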
3 Two Semantics
When having a closer look at a sequential plan, one can very often realize that some of the actions included in the plan could in fact be swapped, or even executed together, without affecting the final result of the plan execution. This is probably not a surprising observation, as many plans typically involve almost independent sub-plans that have to be synchronized with each other only in a few specific time steps. As an example we can use a plan that involves activities for two trucks, each of which can have its own separate sub-plan of moves and loading/unloading some cargo, while the only synchronization between the two trucks is necessary in case one of them is waiting for the cargo that the other truck is to deliver first, or in case both of them need to load/unload their cargo in a location where only one truck can operate at a time. Instead of a plain action sequence \( \pi = \langle a_1, \ldots, a_k \rangle \) we could express a solution for a given planning problem using a richer structure: a sequence of sets of actions \( \pi' = \langle A_1, \ldots, A_n \rangle \), where for each set \( A_i \) all of the actions contained within could be executed in parallel. Of course, one could go further and provide a full graph of causal relations of the individual actions contained in the plan, obtaining thus a partial-order plan [4]. However, constructing such a structure might not be simple, and it is out of the scope of this paper.
There are two motivations behind using parallel plans. First, it is a natural way of removing some of the plan-permutation symmetries, so the planner does not have to explore plans where a pair of actions within a set corresponding to a single time step is swapped. This fact should, ultimately, help to improve the performance of a planner. The second motivation is even more practical – the knowledge provided by the parallel plan structure is very useful for the plan execution phase, since the executor knows which actions can be performed in parallel (if possible), and thus the whole plan can be executed in fewer time steps.
There are, however, two different ways to define which actions can appear together in a single (parallel) time step, providing thus two different semantics for the step-parallel planning.
3.1 Strict Semantics
Probably the most natural requirement on the actions in a single set \( A_i \) is their strict pairwise independence, as defined in [12], and as originally described in the Graphplan planning system [5], which was the first to introduce step-parallel plans as a technique to rapidly improve the efficiency of solving planning problems. Needless to say, the preconditions of the actions within \( A_i \) must be non-conflicting.
We say that two actions \( a_1, a_2 \) are independent in case \( a_1 \) does not destroy the preconditions or effects of \( a_2 \), and vice versa. The consequence of the requirement of pairwise independence between actions from \( A_i \) together with their non-conflicting preconditions is that the actions from a single set \( A_i \) can be executed in an arbitrary sequential or parallel order. Therefore the only synchronization requirement on the plan execution is that prior to executing actions from a set \( A_i \) all action from \( A_{i-1} \) must be executed.
If two actions cannot appear within the same set \( A_i \), we say they are mutually exclusive, i.e., they are in action mutex. The fact of which actions are allowed to appear within the same set \( A_i \) can be also expressed using the corresponding transitions. The actions \( a_1, a_2 \) are in action mutex if there exists a pair of transitions \( \delta_1 \in M(a_1) \) and \( \delta_2 \in M(a_2) \) such that these two transitions are in transition mutex. The two different definitions of the transition mutex will help us draw the difference between the two semantics discussed in this paper.
First, let us define the transition mutex for the strict semantics. A pair of transitions \( \delta_1, \delta_2 \) is in strict transition mutex if there exists \( x \in X \) such that \( \delta_1 \in T(x) \) and also \( \delta_2 \in T(x) \), meaning they are both based on the same state variable. In other words, all transitions of a given state variable are in transition mutex.
The above relation is a corollary of the following observation. Given the actions \( a_1, a_2 \), if there is no pair of transitions \( \delta_1^x \in M(a_1), \delta_2^y \in M(a_2) \) such that \( x = y \), i.e., both transitions are based on the same state variable, then \( a_1, a_2 \) are independent and have non-conflicting preconditions. For completeness we include a sketch of the proof. If there is no common state variable for any pair of transitions \( \delta_1^x, \delta_2^y \), the actions \( a_1, a_2 \) can neither affect each other nor have conflicting preconditions, since if they did, there would be a state variable witnessing the conflict together with the associated transitions – which proves the required implication.
Interestingly, it turns out that the reverse implication does not hold. Hence the definition using the transition mutex is stricter than the one using action independence (a fact which also provided the name for this semantics). However, as proven above, it still guarantees the possibility of an arbitrary ordering during the execution of actions within \( A_i \).
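The strict mutex test is easy to state in code. A minimal sketch, reusing the illustrative Transition/Action classes above: two transitions are in strict transition mutex iff they are based on the same state variable, and two actions are in action mutex iff any pair of their transitions is.

```python
def strict_transition_mutex(d1, d2) -> bool:
    # All transitions of one state variable are pairwise mutex.
    return d1.var == d2.var

def action_mutex(a1, a2, transition_mutex=strict_transition_mutex) -> bool:
    # Actions are mutex if any pair of their transitions is in transition mutex.
    return any(transition_mutex(d1, d2)
               for d1 in a1.transitions
               for d2 in a2.transitions)
```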
Table 1. Example of a set of actions that can be executed only in parallel.
<table>
<thead>
<tr>
<th>Action</th>
<th>Preconditions</th>
<th>Effects</th>
<th>Associated Transitions</th>
</tr>
</thead>
<tbody>
<tr>
<td>a1</td>
<td>x = a</td>
<td>x ← b, z ← f</td>
<td>δ_{x→b}, δ_{z→f}</td>
</tr>
<tr>
<td>a2</td>
<td>y = c</td>
<td>y ← d, x ← b</td>
<td>δ_{y→d}, δ_{x→b}</td>
</tr>
<tr>
<td>a3</td>
<td>z = c</td>
<td>z ← f, y ← d</td>
<td>δ_{z→f}, δ_{y→d}</td>
</tr>
</tbody>
</table>
3.2 Synchronized Semantics
For the previous semantics we required universal interchangeability and executability (whether sequential or parallel) of all actions within a single set $A_i$. In other words, all possible orderings of the actions led to the same state. It is, however, possible to introduce a less strict semantics, which in turn poses some additional requirements on the action execution phase. Let us first define the new semantics using a different specification of the transition mutex.
For the synchronized semantics, two different transitions $\delta_1$ and $\delta_2$ are mutually exclusive, i.e., they form a transition mutex pair, if there exists a state variable $x \in X$ such that $\delta_1, \delta_2 \in T(x)$, and any of the following holds:
1. Neither $\delta_1$ nor $\delta_2$ is a mechanical transition.
2. Both $\delta_1$ and $\delta_2$ are mechanical transitions.
3. Only one of $\delta_1$ and $\delta_2$ is a mechanical transition and they do not transit to the same variable assignment.
To better interpret the definition above: there are only two cases in which $\delta_1, \delta_2 \in T(x)$, $\delta_1 \neq \delta_2$, are not mutex. The first case is, without loss of generality, when $\delta_1 = \delta_{e \rightarrow f}$ and $\delta_2 = \delta_{x \rightarrow y}$ for any value $f \in Dom(x)$. The second case is when $\delta_1 = \delta_{e \rightarrow f}$ and $\delta_2 = \delta_{c \rightarrow d}$ for any values $c, f \in Dom(x)$, $c \neq f$.
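The three numbered conditions translate directly into a predicate. In the sketch below, `is_mechanical` is a stand-in for the paper's notion of a mechanical transition, whose formal definition is not reproduced in this excerpt; everything else implements the conditions as stated.

```python
def synchronized_transition_mutex(d1, d2, is_mechanical) -> bool:
    # Mutex is only defined between two different transitions of the same variable.
    if d1 == d2 or d1.var != d2.var:
        return False
    m1, m2 = is_mechanical(d1), is_mechanical(d2)
    if not m1 and not m2:
        return True              # condition 1: neither is mechanical
    if m1 and m2:
        return True              # condition 2: both are mechanical
    return d1.post != d2.post    # condition 3: exactly one is mechanical and
                                 # they do not transit to the same assignment
```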
In order to illustrate the semantic difference between the two definitions of transition mutex, consider the three actions from Table 1. According to the strict semantics, no two of these actions are allowed to appear within a set $A_i$, since $a_1, a_2, a_3$ are all pairwise mutex. On the other hand, according to the synchronized semantics, none of these actions are mutually exclusive, and therefore all of them can appear together within a single parallel step as long as their preconditions are met. However, there is the following complication: no valid sequential ordering exists for these three actions! The reason is that anytime an action is executed, it destroys a precondition of some other action. Still, it is possible to execute these three actions in a valid way within a single step – but only under the condition of their perfectly synchronized parallel execution.
As we depicted using the above example, the synchronized semantics does not guarantee the possibility of an arbitrary sequential or parallel order of action execution. In fact, there might be no sequential execution available at all for a given parallel step $A_i$, in which case the synchronized parallel execution of some of the actions might be required in order not to break plan validity.
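Whether a given parallel step admits any valid sequential ordering can be checked by brute force for the small sets involved. A sketch reusing the illustrative helpers above; for the three actions of Table 1 it returns False, even though their synchronized parallel execution is valid.

```python
from itertools import permutations

def has_sequential_ordering(state: dict, actions) -> bool:
    """Does any sequential ordering of `actions` execute validly from `state`?"""
    for order in permutations(actions):
        s, ok = dict(state), True
        for a in order:
            if not applicable(s, a):  # a previously executed action may have
                ok = False            # destroyed this action's precondition
                break
            s = apply(s, a)
        if ok:
            return True
    return False
```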
4 Impact on Usability
Clearly, both semantics allow different actions within the steps of a parallel plan. Before we study the performance of these two semantics, let us have a closer look at their influence on the planning domain modeling.
The main objection that can be raised against the practical use of the synchronized semantics is that precise synchronization of the start of the actions is mostly infeasible in practice, so the planning domain designer does not want his model to be too fragile with respect to time constraints. Even though it is a bit absurd, consider the following Bomb Terrorist planning domain. Let our modeled world consist of three terrorists $t_1, t_2, t_3$, each of them wearing a pack of dynamite. As usual in their community, only a terrorist who actually fires his explosives can reach eternal fame, which is, obviously, everyone's ultimate goal. However, once a bomb explodes, everything around is exterminated. Given that the terrorists are located inside a room (together with a secret wooden box they are supposed to eliminate), the actions from Table 2 are available in the modeled world.
It is very unlikely that in such a scenario it would be possible for all of the terrorists to fire their explosives at once. Still, it is absolutely correct to ask the planner whether a plan providing the fame for all three terrorists exists. For our model, the answer would depend on the underlying semantics the planner implements. A planner that complies with the strict semantics would answer that no such plan exists, while, on the other hand, a planner compliant with the synchronized semantics would return a plan consisting of a single parallel step that includes all available fire-actions.
Using the example above we illustrated that different semantics of step-parallel planning used for the same domain can provide completely different results regarding the existence of a plan. Moreover, other examples can be constructed where the returned plans differ in plan length or in the total number of used actions. Since we used a very small and simple example, it is very easy to see the potential issues; however, for larger models with tens or hundreds of actions and state variables similar issues might be very difficult to trace.
The construction of the planning domain model is purely a task of the domain designer; however, special care must be taken in order to keep the required correspondence to reality in case the planning technology to be used returns parallel plans. An example of a planner that uses the synchronized semantics is the SASE planning system, as described and published in [11].
5 Impact on Performance
Regardless of the usability of the described parallel plan semantics, we were interested in their practical performance. In other words, we wanted to know which semantics leads to finding plans faster. In order to find out, we implemented two versions of a SAT-based planner, which differ only in the definition of the transition mutex relation, as described in the earlier sections. For both versions, the total number of instances solved within a time limit, together with the time required to solve them, was measured on a large set of standard planning benchmark problems from previous International Planning Competitions (IPC). Further details of our planner and the experiments are described later in this paper.
Knowing which parallel plan semantics is practically faster would be useful when deciding which semantics to use at the phase of planning domain design.
5.1 Planner Description
Our planner is a Java application which takes SAS+ files as the input. We will refer to it as SasPlan. In order to translate PDDL files to SAS+ formalism we used Helmert’s Translator tool implemented in Python [10]. For each planning problem within our benchmark set it took at most a few seconds for the Translator to generate the SAS+ input file. Since our goal was to compare the efficiency of the two parallel plan semantics, for the final evaluation we only measured the runtime of our application, not including the translation time (which was also fast compared to the time necessary for solving).
SasPlan is basically a black-box planner which encodes SAS+ problems into satisfiability (SAT) problems using the SASE transition-based encoding [11]. The operation of the planner can be roughly summarized as follows. First we generate a SAT formula that is satisfiable if and only if there is a plan with timespan equal to one. If the formula is unsatisfiable, we increase the timespan by one. Ultimately, we generate a satisfiable formula, and then we can extract a valid parallel plan from its satisfying assignment.
To solve the generated SAT problems we used SAT4J – a Java library for satisfiability by Daniel Le Berre [13]. Although SAT4J is a few times slower than state-of-the-art solvers written in C/C++, it supports incremental solving and is very easy to use within a Java application. Employing a state-of-the-art C++ solver would considerably increase the overall performance of the planner, since most of the runtime is spent on solving the formulas. MiniSat 2.2 [14] would be an appropriate choice, as it is one of the fastest SAT solvers, also supporting the incremental solving technique. Nevertheless, in order to examine the impact of the different semantics on performance, the use of a Java-based SAT solver was sufficient.
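The makespan-increasing loop just described can be sketched as follows. Here `encode_step`, `goal_clauses`, `extract_plan` and the solver interface are hypothetical stand-ins for the SASE encoding and the SAT4J calls; passing the goal literals as solver assumptions is one common incremental-SAT idiom that plays the role of the authors' removal and re-addition of goal clauses.

```python
def plan(problem, solver, encode_step, goal_clauses, extract_plan, max_steps=200):
    for t in range(1, max_steps + 1):
        # Add clauses ensuring validity of step t (transitions, frame, mutexes).
        for clause in encode_step(problem, t):
            solver.add_clause(clause)
        # Satisfiable with the goal asserted at step t => a plan of timespan t.
        if solver.solve(assumptions=goal_clauses(problem, t)):
            return extract_plan(solver.model(), t)
    return None  # no plan found within max_steps
```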
Table 3. Total number of solved instances in the time limit of 30 minutes, per domain.
<table>
<thead>
<tr>
<th>Domain</th>
<th>Strict</th>
<th>Synchronized</th>
<th>Difference</th>
</tr>
</thead>
<tbody>
<tr>
<td>Airport</td>
<td>30</td>
<td>27</td>
<td>+3</td>
</tr>
<tr>
<td>Depots</td>
<td>13</td>
<td>12</td>
<td>+1</td>
</tr>
<tr>
<td>Driverlog</td>
<td>15</td>
<td>14</td>
<td>+1</td>
</tr>
<tr>
<td>Elevator</td>
<td>46</td>
<td>47</td>
<td>-1</td>
</tr>
<tr>
<td>FreeCell</td>
<td>4</td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>Openstacks</td>
<td>5</td>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>Rovers</td>
<td>31</td>
<td>32</td>
<td>-1</td>
</tr>
<tr>
<td>TPP</td>
<td>27</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>Zenotravel</td>
<td>15</td>
<td>14</td>
<td>+1</td>
</tr>
</tbody>
</table>
The way we use the support for incremental solving is very straightforward. When we need to generate and solve the formula for the next timespan, we first remove the clauses that require the goal conditions to hold at the end of the plan. Second, we add new variables for the next time step, together with all the clauses that ensure plan validity and the goal conditions. During the tests performed prior to the evaluation itself, we observed that the SAT solver did gain some benefit from being used incrementally, compared to getting a new, unknown formula at each time step. We did not, however, perform thorough experiments on a large data set to compare the incremental and non-incremental approaches in more detail, since in all the tests we made the incremental version was always faster. Therefore we decided to use the incremental solving approach as the default configuration for both versions of our planner.
As stated earlier, we created two versions of SasPlan that correspond to the two transition mutex definitions. During the implementation phase we noticed that the strict semantics is much easier to encode. Since according to the strict mutex definition all transitions of a state variable are mutex, it is sufficient to add one constraint for every state variable ensuring that at most one of the corresponding transitions can be selected at a given time step. In other words, the graph of transition mutex relations is a clique, which can be encoded into SAT clauses very efficiently – one of the reasons why our strict semantics is defined in a slightly stricter way than the definition using action independence. In fact, the interface of SAT4J provides a method for adding at-most-one constraints, so the transition mutex constraints can be added by just one method invocation per variable. On the other hand, the mutex graph for the synchronized semantics is not a clique, since not all transitions of a state variable are pairwise mutex; therefore these relations cannot be encoded as efficiently.
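The clique observation means one at-most-one constraint per state variable suffices under the strict semantics. As a sketch, here is the naive pairwise CNF encoding of such a constraint (SAT4J's built-in method would replace this):

```python
def at_most_one(literals):
    """Yield CNF clauses forbidding any two of `literals` being true together."""
    for i in range(len(literals)):
        for j in range(i + 1, len(literals)):
            yield [-literals[i], -literals[j]]

# e.g. the transitions of one state variable at one time step, as DIMACS variables:
print(list(at_most_one([3, 4, 5])))  # [[-3, -4], [-3, -5], [-4, -5]]
```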
5.2 Experiments
For the experiments we used a PC with 3.2GHz Intel Core i7 processor and 24GB of RAM. We limited the memory to 4GB and used a time limit of 30 minutes per instance. For the evaluation of the planner performance the following domains were selected from the available International Planning Competition (IPC) benchmark set: Airport, Depots, Driverlog, Elevator, FreeCell, Openstacks, Rovers, TPP and Zenotravel.
Before running the experiments it was difficult to predict their outcome, and therefore our expectations were not clear. The strict transition mutex definition constrains the problem more, and thus the resulting SAT formula has fewer or equally many solutions for a given makespan than with the other mutex definition. This could suggest that performance would decrease. On the other hand, more constraints help unit propagation to prune the search space more efficiently. For these reasons it was very hard to tell in general which effect was stronger and how the performance of the SAT solver would be influenced.
In Table 3 we provide the total count of solved instances within the time limit, per domain. The differences are not big, and for all domains except Elevator and Rovers the strict semantics solved a greater or equal number of instances. Figure 1 presents the runtime increase ratio \((\text{runtime}_{\text{sync}} - \text{runtime}_{\text{strict}})/\text{runtime}_{\text{strict}}\) when solving the problems using the synchronized semantics compared to the strict semantics. We only considered the running times for those problems that were solved by both versions of SasPlan.
Fig. 1. Box plot representation of the runtime increase ratio when solving the problems using the synchronized semantics compared to the strict semantics. Values above zero mean that the strict semantics is faster. For example, the y-value of 1 means that the runtime using the synchronized semantics was twice as long as the runtime using the strict semantics. The box plots represent the results for domains in the following (alphabetical) order: Airport, Depots, Driverlog, Elevator, FreeCell, Openstacks, Rovers, TPP and Zenotravel.
From the box plots we can see that even though the strict semantics is not a clear winner in all domains, for some problems its runtime was over three times shorter than with the synchronized semantics, while the slowdown experienced on other problems was less evident.
Overall, we can conclude that even though the difference between the efficiency of the presented semantics for parallel planning is not dramatic, the strict semantics exhibited better performance than the synchronized one. Since the strict semantics is also more natural and practical for real-world applications, we believe it should be preferred in the majority of cases.
6 Conclusion
Step-parallel planning is a paradigm useful both for dealing with some of the plan-permutation symmetry problems and for the practical execution of plans.
In this work we described two available semantics for parallel planning, for which implementations also exist: the strict semantics and the synchronized semantics. After formally describing the theoretical concepts behind the two semantics, we demonstrated that the planning domain designer's awareness of the selected parallel planning technology and its underlying semantics is a crucial component of a successful application of automated planning in real-world scenarios.
Finally, we provided an empirical evaluation of the performance of the two presented semantics by integrating them into our SAT-based planning system. The experiments, for which we used some of the traditional IPC benchmark domains, showed that the strict semantics provides better performance in terms of runtime, while also being more natural for practical application.
Acknowledgment
The research is supported by the Czech Science Foundation under the projects no. P103/10/1287, 201/09/H057, SVV project number 263 314 and by the Charles University Grant Agency under contracts no. 9710/2011 and 266111.
References
Ontology Alignment Technique for Improving Semantic Integration
Mohammad Mustafa Taye
Faculty of Information Technology
Philadelphia University, Jordan
mtaye@philadelphia.edu.jo
Nasser Alalwan
Software Technology Research Laboratory (STRL),
De Montfort University
Leicester, UK
nasser@dmu.ac.uk
Abstract—A new technique for ontology alignment has been built by integrating important features of matching to achieve high-quality results when searching for and exchanging information between ontologies. The system is semi-automatic and enables syntactic and semantic interoperability among ontologies. Moreover, it is a multi-strategy algorithm which can deal with and solve more than one critical problem; therefore, it is likely to be more conveniently applicable in different domains. We also improve a semantic matcher based on combining a lexical matcher with several rules and facts. Moreover, our technique illustrates how to solve the key issues related to heterogeneous ontologies, using combined matching strategies to execute the ontology-matching task; therefore, it can be used to discover matchings between ontologies. The main aim of the work is to introduce a method for finding semantic correspondences among heterogeneous ontologies, with the intention of supporting interoperability over given domains. Our goal is to achieve the highest number of accurate matches.
Keywords—Ontology; Semantic Interoperability; Heterogeneous; Ontology Alignment.
I. INTRODUCTION
Ontology [1] has been developed to offer a commonly agreed understanding of a domain that is required for knowledge representation, knowledge exchange and reuse across domains. An ontology organizes information into taxonomies of terms (i.e., concepts, attributes) and shows the relationships between them. In fact, it is considered helpful in reducing conceptual confusion for users who need to share applications of different kinds, so it is widely used to capture and organize knowledge in a given domain.
Although ontologies are considered to provide a solution to data heterogeneity, from another point of view, the available ontologies could themselves introduce heterogeneity problems.
In order to deal with these problems, ontologies must be available for sharing or reusing; therefore, semantic heterogeneity and structural differences need to be resolved among ontologies. This can be done, in some cases, by aligning or matching heterogeneous ontologies. Thus, establishing the relationships between terms in the different ontologies is needed throughout ontology alignment [4, 5, 7, 14].
Semantic interoperability can be established in ontology reconciliation. The original problem is called the “ontology alignment”. The alignment of ontologies is concerned with the identification of the semantic relationships (subsumption, equivalence, etc.) that hold between the constituent entities (which can be classes, properties, etc.) of two ontologies.
In this paper, an ontology alignment technique has been developed in order to facilitate communication and build a bridge between ontologies. An efficient mechanism has been developed to align entities from ontologies in different description languages (e.g., OWL, RDF) or in the same language. This approach tries to use all the features of ontologies (concepts, attributes, relations, structure, etc.) in order to obtain efficient, high-quality results. For this purpose, several matching techniques have been used, such as string, structure, heuristic and linguistic matching techniques with thesaurus support, as well as human intervention in certain cases. This paper is organized as follows: Section II gives an overview of our system; Section III describes our system in detail; Sections IV and V present the related work and the evaluation process; finally, Section VI concludes the paper.
II. SYSTEM ARCHITECTURE
The framework relies on well-established measures for comparing the entities of two ontologies, which are combined in a homogeneous way. Figure 1 shows the system components.
III. DETAILED SYSTEM
The system starts by loading two ontologies and extracting useful features such as class names, property names and subsumption relationships from them. In case an ontology does not exist, we use our algorithm from [22] to transform a relational database into an OWL ontology.
A. String Matching
In general, the name of a class (i.e., its label) is presented as a chain of characters without spaces. It is used to provide a human-readable description of the class. Therefore, a class name may be a single word or a combination of words. In fact, the name of each class should be unique in the ontology.
Terminological methods compare strings. Hence, these methods can be applied to the name, the label or the comments concerning entities to discover those which are similar. In general, it can be used for comparing class names and/or URIs.
A string matcher [2, 3, 7] usually takes as input the names of two concepts, then calculates the distance between them by distance functions that map a pair of strings to a real number. Consequently, the output will be a numeric value $c \in [0, 1]$ to represent the confidence of the similarity. The main reason for using such measures is the fact that similar entities have similar names and descriptions across different ontologies.
String similarities are based on the assumption that the names of concepts and properties representing semantic similarity will have similar syntactic features. Thus, a string matcher usually first normalizes the input string of names and/or descriptions via stemming and tokenization. In the simplest form, the equality of tokens will be obtained and combined to give a score of the equality for the whole string.
In general, two properties are used to identify terms: the label and the name. The label is a string, usually expressed in natural language, which describes the purpose of the field to humans, while the name can be any string that is constrained by some naming rules. These techniques are usually applied to names, labels, comments concerning entities and the URI. The scaled range for comparing strings is $[0, 1]$. To achieve high-quality results, and based on many experiments, the system disregards similarities smaller than a threshold of 0.65 and maps similarities greater than 0.65 onto the full range $[0, 1]$.
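A sketch of this thresholding, assuming a simple linear rescaling of the surviving range; the 0.65 value is the paper's, while the rescaling formula is our reading of the sentence above.

```python
THRESHOLD = 0.65  # value taken from the paper

def rescale(similarity: float) -> float:
    if similarity < THRESHOLD:
        return 0.0  # similarity is disregarded
    # Stretch the surviving range (0.65, 1] back onto [0, 1].
    return (similarity - THRESHOLD) / (1.0 - THRESHOLD)

print(rescale(0.50))   # 0.0
print(rescale(0.825))  # 0.5
print(rescale(1.00))   # 1.0
```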
B. Linguistic Matching
The terminology used for naming and labeling concepts and properties is an important aspect of ontologies and provides information on the similarity between the ontology elements. However, linguistic features are also important for deriving an initial set of alignments to be refined by exploiting other kinds of matching. In fact, names of classes or properties are considered to provide one of the most important clues as to whether two terms are equal or not; therefore, we try to find relations between terms from different ontologies based on the details of their names. Such linguistic matching relies on algorithms and the use of external lexicon-based resources such as dictionaries, which are typically used to find close relationships such as synonymy between two terms and to compute the semantic distance between them in order to decide if a relationship holds.
This process is based on linguistic analysis [10,16]. There are two general techniques for label matching, the first of which employs linguistic analysis steps, such as abbreviations, avoiding recurrence and particle-ending. The other is matching the labels to determine the relationship between them.
In general, the linguistic similarity between terms is computed by considering labels and descriptions. Knowledge-based matchers take as input two concept (or synset) identifiers defined in WordNet [12] and produce semantic relations by exploiting their structural properties. They are often based on either similarity or relatedness measures. If the value of the measure exceeds the given threshold, a certain semantic relation is produced. Otherwise, “Idk” (I don’t know) is returned. This technique is implemented by using thesauri and WordNet, following an approach which is essentially the structural congruence between labels based on the hidden meanings of the words that they represent. WordNet, which takes two concept (synset) identifiers as input and returns the semantic relation holding between them, is considered not only to provide synonyms, hypernyms and hyponyms, but also to exploit additional structure to detect relationships between concepts (dinner $\rightarrow$ meal). For example, it considers synonyms as equivalent and hyponyms as subsumed, finding Match and Alignment to be similar classes (car $\rightarrow$ automobile).
In using a WordNet-based matcher we have to translate the (lexical) relations provided by WordNet into logical relations [12], based on the following rules (a code sketch follows the list):
- $A \subseteq B$, if $A$ is a hyponym or meronym of $B$. For example, author is a hyponym of creator, therefore deducing that author $\subseteq$ creator.
- $A \supseteq B$, if $A$ is a hypernym or holonym of $B$. For example, Asia is a holonym of Jordan, therefore deducing that Asia $\supseteq$ Jordan.
- $A = B$, if $A$ and $B$ are connected by a synonymous relation or they belong to one synset. For example, quantity and amount are synonyms, therefore deducing that quantity $=$ amount.
- $A \perp B$, if $A$ and $B$ are connected by antonymy relations or are siblings in a part-of hierarchy. For example, Jordan and Syria are siblings in the WordNet part-of hierarchy, therefore deducing that Jordan $\perp$ Syria.
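These rules can be sketched with NLTK's WordNet interface (one concrete choice of library, not prescribed by the paper); for brevity only synonymy, hyponymy and hypernymy on first synsets are handled.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def relation(term_a: str, term_b: str) -> str:
    a, b = wn.synsets(term_a)[0], wn.synsets(term_b)[0]
    if a == b:
        return "="    # same synset: synonyms
    if b in a.closure(lambda s: s.hypernyms()):
        return "⊑"    # a is a hyponym of b
    if a in b.closure(lambda s: s.hypernyms()):
        return "⊒"    # a is a hypernym of b
    return "Idk"      # no relation found by these rules

print(relation("car", "automobile"))  # '='  (shared synset)
print(relation("dinner", "meal"))     # '⊑'  (dinner is a hyponym of meal)
```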
C. Structure Matching
The aim of structural matching is to link an element of the source taxonomy with an element of the target taxonomy. The mappings generated are mainly specialization matches, based on calculations of the similarity of the source element with all those under the target taxonomy. Indeed, this kind of matching depends on what are considered the most important features of ontology nodes (e.g., for a class: super-classes and sub-classes; for a property: super-properties and sub-properties). Therefore, similarity is based on the nodes of graphs.
The similarities between two concepts can be obtained from the language and from real attributes: not only from the similarities between the descriptions of their components, but also from similarities between the structures of the graphs representing them. The internal structural similarity can be obtained by calculating the number of similar attributes divided by the number of attributes of a class.
A taxonomy is generally represented by an acyclic graph whose nodes are concepts and whose arcs correspond to subclass links. Class inheritance analysis considers the hierarchical connections between classes in order to identify "is-a" relationships.
A taxonomy (C, HC) includes a set of concepts C and a subsumption hierarchy HC between the concepts. A concept is defined by its label and by the subclass relationships which connect it to other concepts. The label is a name (string) which describes the entity in natural language and can be an expression composed of several words. Subclass relations establish links between concepts.
The intuition behind the algorithm is that if two concepts lie in similar positions with respect to is-a or part-of hierarchies relative to concepts already aligned in the two ontologies, then they are likely to be similar as well. For each pair of concepts (C1, C2) in the original list of alignment relationships, the structural matcher augments the original similarity value for pairs of concepts (C’1, C’2), such that C’1 and C’2 are equivalent to, are in an is-a relationship with or participate in a part-of relationship with C1 and C2 respectively. The augmentation depends on both the relationship and the distance between the concepts in the is-a and part-of hierarchies. It is important to mention here two important rules that help to ensure correct matching. First, if the super-concepts of two classes are the same, then these two concepts may be similar to each other. The second rule is that if the sub-concepts of two classes are the same, we can say that the concepts are also similar.
Structural analysis identifies identical classes by looking at their attributes and related (linked) classes. The main idea is that two classes of two ontologies are similar or identical if they have the same attributes and the same neighbor classes. Hence, matching concepts are based on structural similarity with regard to class hierarchy.
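The first of the two rules above can be sketched as a similarity boost; the boost value and threshold are illustrative choices, not taken from the paper, and rule 2 is the symmetric variant using sub-concepts.

```python
def structural_boost(sim, parent1, parent2, boost=0.1, threshold=0.65):
    """sim: dict mapping (c1, c2) -> similarity; parentX: concept -> parent.

    Rule 1: if the super-concepts of two classes are already aligned with
    high similarity, increase the similarity of the classes themselves.
    """
    boosted = dict(sim)
    for (c1, c2), value in sim.items():
        p1, p2 = parent1.get(c1), parent2.get(c2)
        if p1 is not None and p2 is not None and sim.get((p1, p2), 0.0) >= threshold:
            boosted[(c1, c2)] = min(1.0, value + boost)
    return boosted
```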
D. Heuristic-based Strategies
This phase of our system uses several features of ontologies (i.e., their structure, definitions of concepts and instances of concepts) in order to find matches. Labeling is important and helps to align most of the entities, while the structure also provides valuable help in identifying alignments; we have developed this step based on these two elements.
It considers the entities and structure of an ontology, i.e., class (C), property (P), relation (R), instance (I) and super-class (S). The distances between the input structures are then expressed in a set of equations, as described in Figures 2 and 3.
\[
\text{TotSim}(c, c') = w \cdot \text{Sim}_c(c, c') + w \cdot \text{Sim}_p(c, c') + w \cdot \text{Sim}_r(c, c') + w \cdot \text{Sim}_i(c, c') + w \cdot \text{Sim}_s(c, c')
\]
where
- \( \text{Sim}_c(c, c') \) is the similarity between labels of classes,
- \( \text{Sim}_p(c, c') \) is the similarity between properties of classes,
- \( \text{Sim}_r(c, c') \) is the similarity between relations of classes,
- \( \text{Sim}_i(c, c') \) is the similarity between instances of classes,
- \( \text{Sim}_s(c, c') \) is the similarity between super-classes of classes,
- \( w \) is the weight, which is set at 1/5 in order to obtain results in the range [0, 1].
The output consists of one-to-one or one-to-many correspondences. This strategy is based on string similarity (i.e., the Monge-Elkan metric [3]), structure and instances.
The Monge-Elkan distance is a recursive matching scheme for comparing two long strings s and t, assuming that the strings are broken into substrings (tokens), i.e., s = s1 . . . sk and t = t1 . . . tl. The intuition behind this measure is the assumption that each si in s corresponds to the tj with which it has the highest similarity. The similarity between s and t equals the mean of these maximum scores.
\[
\text{MongeElkan}(s, t) = \frac{1}{k} \sum_{i=1}^{k} \max_{j=1}^{l} \text{match}(s_i, t_j)
\]
In heuristic matching, iteration is one of the most important steps in ontology alignment, taking into account the structure of the input ontologies. It enables the whole process to be repeated several times, propagating and updating the assessed similarities.
The sigmoid strategy combines multiple results using a sigmoid function, which is a smoothed threshold function emphasizing the importance of retaining high individual prediction values and removing low ones.
This technique starts after the application of the normalization process to the input elements, then compares class and property names in terms of edit distance and substring distance between entity names. The algorithm next creates a distance matrix in order to determine the alignment group from the distances.
This algorithm is used in order to cover most possible features of ontologies (i.e., terminological, structural and extensional). In addition, the following heuristic rules are applied:
- Any two concepts are probably the same if their labels are the same.
- Any two concepts are equal if their properties are equal, even if their labels are different.
- Two concepts that have the same instances are the same.
E. Aggregation
The results discussed here have been calculated using the string, linguistic, structure and heuristic matchers. Indeed, with several matching strategies/algorithms there are several similarity values for a candidate matching (e1, e2). Combining them is an effective way to achieve high accuracy over a larger variety of ontologies, so this step extracts the combined matching result from the individual strategy results stored in the similarity cube. For each combination of ontology entities, the strategy-specific similarity values are aggregated into a combined similarity value, e.g., by taking the average or maximum value.
The easiest way to proceed consists of selecting correspondences above a particular threshold. Such threshold-based filtering allows us to retain only the most similar entity pairs. For the combination of the match results, the average value has been computed and a selection has been made using a threshold, for example: \( \text{SemanticDistance}(C, C') \leq \text{Threshold} \).
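A sketch of this aggregation over the similarity cube, with average as the default combiner and a similarity threshold for selection; all names are illustrative.

```python
def aggregate(similarity_cube, combine=None, threshold=0.65):
    """similarity_cube: one dict per matcher mapping (e1, e2) -> similarity."""
    combine = combine or (lambda vals: sum(vals) / len(vals))  # average
    pairs = set().union(*similarity_cube)
    combined = {p: combine([m.get(p, 0.0) for m in similarity_cube]) for p in pairs}
    # Threshold-based filtering keeps only the most similar entity pairs.
    return {p: v for p, v in combined.items() if v >= threshold}

string_sim = {("Author", "Writer"): 0.2, ("Car", "Automobile"): 0.1}
wordnet_sim = {("Author", "Writer"): 1.0, ("Car", "Automobile"): 1.0}
print(aggregate([string_sim, wordnet_sim], combine=max))
```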
IV. RELATED WORK
The following literature offers several approaches to the alignment of ontologies, based on measures of similarity.
A. The Naive Ontology Mapping
This approach [17] is simple, constituting a straightforward baseline for later comparisons. It comprises six steps. Feature Engineering demands that the ontologies be represented in RDF. Search Step Selection compares all entities of the first ontology with all entities of the second. Similarity Computation computes the similarity between entities in different ontologies, using a wide range of similarity functions. In Similarity Aggregation, NOM highlights individually significant similarities by weighting individual similarity results and aggregating them. This, however, neglects individual similarities that are of less significance. Interpretation uses the individual or aggregated similarity values to derive mappings between entities. Finally, Iteration repeats the previous step several times. This gives the capacity to access the already computed pairs and use more sophisticated structural similarity measures, whereas neglecting this step provides only a comparison based on labels and string similarity. A new version has more features and heuristic combinations, such as Quick Ontology Mapping (QOM) [18].
**Advantage and Disadvantage:** this approach applies string matching, structure matching and instance matching, but it does not use auxiliary information. The ontology is defined in terms of concepts, properties, and instances. The input ontologies for this approach are in RDF format only. Moreover, it does not use a normalisation process. The selection of matching elements is threshold based.
**B. PROMPT**
PROMPT [21] is a tool for merging ontologies, developed by the Stanford University Knowledge Systems Laboratory. The knowledge model underlying PROMPT is frame-based and is compatible with the Open Knowledge Base Connectivity (OKBC) protocol. In general, this tool provides a semi-automatic approach to merging two ontologies; it initially establishes alignment relations, which must hold before a coherent ontology can be produced as output. More specifically, PROMPT performs some tasks automatically: it takes two ontologies as input and creates an initial list of matches based on class names. This list is then refined into a coherent merged ontology through the following cycle: (1) the user triggers an operation by either selecting one of PROMPT's suggestions from the list or by using an ontology-editing environment to specify the desired operation directly; and (2) PROMPT performs the operation, automatically executes additional changes based on the type of the operation, generates a list of suggestions for the user based on the structure of the ontology around the arguments of the last operation, and determines conflicts that the last operation introduced in the ontology, finding possible solutions for them. PROMPT then guides the user in performing other tasks for which intervention is required. Its top level contains Classes (collections of objects arranged into hierarchies), Slots (binary relations), Facets (ternary relations) and Instances (individual members of classes).
**Advantage and Disadvantage of PROMPT:**
It applies string matching and semantic matching but it does not provide instance or structure matching. The input ontologies for this approach can be in different formats, such as RDF(S), OWL Lite, and OWL DL. The output is a merged ontology. The ontology is defined in terms of concepts, properties and instances. It does not include a normalisation process. Matching elements are selected based on the highest value. This approach provides interactive suggestions to the users. It solves mismatches at the terminological and scope-of-concept levels, it helps alignment by providing possible edit points, and it supports repeatability. However, it is not automatic, which means every step requires user interaction.
**C. Chimaera**
Chimaera [19, 20] is a semi-automatic, interactive tool for merging ontologies. The engineer is in charge of making the decisions that affect the merging process. This tool starts by analysing the ontologies to be merged. It automatically merges terms for which it finds linguistic matches, and if it cannot find any matching terms, it gives the user control over any further action. In fact, it is similar to PROMPT, as both are embedded in ontology-editing environments and offer the user interactive suggestions.
**Advantage and Disadvantage of Chimaera:**
It uses string matching, semantic matching and structure matching but it does not provide instance matching. The input ontologies for this approach are OKBC ontologies and the output is a merged ontology. This approach analyses the ontologies to be merged; if linguistic matches are found then the merge is processed automatically; otherwise, it uses the subclass and superclass relationships. In fact, this approach solves mismatches at the terminological level in a very light way, and provides interactive suggestions to the users. It solves mismatches at the terminological and scope-of-concept levels and it helps alignment by providing possible edit points, but it does not support repeatability. Moreover, it is not automatic, which means everything requires user interaction (in this it is very similar to PROMPT).
**V. EVALUATION**
It can be argued that the most significant and crucial issue when suggesting a new approach is its evaluation. Therefore, this section presents many test cases which are used to evaluate the performance of our system in different scenarios, followed by the experimental methodology, test data sets and results.
The Ontology Alignment Evaluation Initiative (OAEI) is a coordinated international initiative to establish agreement for evaluating and improving the available ontology alignment techniques. The OAEI ontology matching campaign is a contest organised annually since 2004, comprising several kinds of tests, processes and measures for assessing the results.
The benchmark data tests were divided into five groups, as shown in Table 1.
**Table 1: Description of Benchmark Data Set**
<table>
<thead>
<tr>
<th>Test Sets</th>
<th>Ontology Description</th>
<th>Num of Ontologies</th>
</tr>
</thead>
<tbody>
<tr>
<td>101-104</td>
<td>Similar in both label description and hierarchy structure</td>
<td>4</td>
</tr>
<tr>
<td>201-210</td>
<td>Similar hierarchy structure</td>
<td>10</td>
</tr>
<tr>
<td>221-247</td>
<td>Similar in label description</td>
<td>18</td>
</tr>
<tr>
<td>248-266</td>
<td>Different in both label description and hierarchy structure</td>
<td>15</td>
</tr>
<tr>
<td>301-304</td>
<td>Real-world ontologies provided by different institutions</td>
<td>4</td>
</tr>
</tbody>
</table>
In order to assess the different approaches or evaluate the degree of compliance of the results of matching algorithms, standard information retrieval metrics are used, presenting four values which are widely used to estimate the quality of the alignment process and its results: precision, recall, overall and F-measure.
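For reference, the sketch below shows how these four metrics are typically computed from a produced alignment and a reference alignment; representing correspondences as plain strings is our simplification, not the paper's design.

```java
import java.util.HashSet;
import java.util.Set;

/** Illustrative computation of the standard alignment quality metrics. */
class AlignmentMetrics {

    private static int correct(Set<String> found, Set<String> reference) {
        Set<String> tp = new HashSet<>(found);
        tp.retainAll(reference);   // true positives
        return tp.size();
    }

    static double precision(Set<String> found, Set<String> reference) {
        return found.isEmpty() ? 0.0 : (double) correct(found, reference) / found.size();
    }

    static double recall(Set<String> found, Set<String> reference) {
        return reference.isEmpty() ? 0.0 : (double) correct(found, reference) / reference.size();
    }

    static double fMeasure(double p, double r) {
        return (p + r == 0.0) ? 0.0 : 2.0 * p * r / (p + r);
    }

    /** Overall, as commonly defined in matching evaluations: r * (2 - 1/p). */
    static double overall(double p, double r) {
        return (p == 0.0) ? 0.0 : r * (2.0 - 1.0 / p);
    }
}
```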
Currently, there are many ontology matching systems that have been developed based on different strategies for various purposes. In order to evaluate their performance and quality, we focus on the OAEI evaluation, which employs a systematic approach to evaluating ontology matching algorithms and identifying their strengths and weaknesses. We then chose the following tests to illustrate the evaluation:
**A. Tests 221 to 247**
In the third test set, the names, labels and comments had no special features that might confuse the alignment, but the structures of these ontologies were manipulated and some instances and properties were altered. In these ontologies, therefore, our algorithm performed very well with the string-, linguistic- and heuristic-based strategies in computing the similarity between features. This was due to the fact that the terms in these test cases had high string similarity; moreover, the heuristic matcher performed very well in these tests. On the other hand, where specific terms did not have similar names or comments, our algorithm was able to apply structural or semantic features of the ontologies in order to derive the remaining alignments.
The most important issues affecting each of these are briefly stated here. Ontologies 221 to 247 featured no specialisation (221), a flattened hierarchy (222), an expanded hierarchy (223), no instances (224), no restrictions (225), no datatypes (226), unit differences (227), no properties (228), classes vs. instances (229) and flattened classes (230); all of these were matched with very high recall and precision rates. In conclusion, our algorithm performed well on this group of tests, which can be attributed to the fact that we carried out both syntactic and semantic similarity assessments.
**Table 2: Result of Tests 221-247**
<table>
<thead>
<tr>
<th>Test ID</th>
<th>Precision</th>
<th>Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td>221</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>222</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>223</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>224</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>225</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>228</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>230</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>231</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>232</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>233</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>236</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>237</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>238</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>239</td>
<td>1.00</td>
<td>0.99</td>
</tr>
<tr>
<td>240</td>
<td>1.00</td>
<td>0.99</td>
</tr>
<tr>
<td>241</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>246</td>
<td>1.00</td>
<td>1.00</td>
</tr>
<tr>
<td>247</td>
<td>1.00</td>
<td>1.00</td>
</tr>
</tbody>
</table>
Although the structures of the candidate ontologies were changed, our algorithm found most correct alignments by using strings (label similarity, comment similarity), the linguistic perspective and heuristic matching. Therefore, both precision and recall were excellent.
While tests 221-247 shared the same names and comments, their structures differed. Instances were similar, but some ontologies did not contain them. The information given was sufficient to reach very good results. For most of these tests the structures were modified, which means that structural similarity was low, but the label similarity was very high. Because of this low structural similarity, the structure matcher did not work well for some tests; tests 221, 232, 233 and 241, by contrast, had high label and structural similarity factors, so both linguistic and structure-based strategies were employed, although the structure matcher made little contribution. Table 2 shows the results for tests 221-247: our scores are very high, nearly equal to 1, because our algorithms rely heavily on linguistic and string matching.
**B. Comparison with other existing approaches**
In order to evaluate our system, a comparison of the system results was made against the published results from the 2007 Ontology Alignment Evaluation Initiative.
For most of tests 221-247, the structures of ontologies were manipulated, so that structural similarity was low; however, names, labels and comments in these ontologies had no special features, so linguistic similarity was very high. The information given was sufficient to yield very good results. In this set of tests, where the ontologies had high similarity with the reference ontology on linguistic information, our system performed very well and was the best, with precision, recall and F-measure scores of 1.00, 0.999 and 0.999 respectively. Other systems, including Falcon, DSSim and RiMOM also performed very well, with results on the F-measure of 0.997, 0.996 and 0.997 respectively.
**VI. CONCLUSION**
We have developed a new ontology alignment technique that uses different matching strategies. This approach utilizes both linguistic and structural information from ontologies in order to solve ontology alignment problems. The system applies several matching algorithms: string matching, linguistic-based strategies, structural matching, and heuristic-based strategies.
Exploring the Potential of Large Language Models in Generating Code-Tracing Questions for Introductory Programming Courses
Aysa Xuemo Fan¹, Ranran Haoran Zhang², Luc Paquette¹, Rui Zhang²
¹ University of Illinois at Urbana-Champaign
² Penn State University
{xuemof2, lpaq}@illinois.edu
{hzz5361, rmz5227}@psu.edu
Abstract
In this paper, we explore the application of large language models (LLMs) for generating code-tracing questions in introductory programming courses. We designed targeted prompts for GPT4, guiding it to generate code-tracing questions based on code snippets and descriptions. We established a set of human evaluation metrics to assess the quality of questions produced by the model compared to those created by human experts. Our analysis provides insights into the capabilities and potential of LLMs in generating diverse code-tracing questions. Additionally, we present a unique dataset of human and LLM-generated tracing questions, serving as a valuable resource for both the education and NLP research communities. This work contributes to the ongoing dialogue on the potential uses of LLMs in educational settings¹.
1 Introduction and Background
The teaching of introductory programming courses continues to be a challenging endeavor, despite the global uptake and popularity of such courses. High enrollment rates often result in diverse student populations, with a wide range of programming experience from those just starting their journey to others with prior exposure (Lopez et al., 2008). Ensuring an effective learning experience that accommodates this wide disparity presents a daunting task, making the teaching of these courses complex.
One critical component in teaching introductory programming is the focus on code tracing, a skill identified as instrumental in enhancing code writing abilities (Lister et al., 2009; Venables et al., 2009; Kumar, 2013). Current educational methodologies encourage code tracing through a variety of means, such as practice questionnaires (Lehtinen et al., 2023), direct teaching strategies (Xie et al., 2018), and tracing quizzes (Sekiya and Yamaguchi, 2013). These strategies consistently utilize code-tracing questions aimed at fostering and developing a student’s understanding and skills.
However, the preparation of code-tracing questions poses challenges. Manual question creation by instructors (Sekiya and Yamaguchi, 2013; Hassan and Zilles, 2021) is time-consuming and lacks scalability. Automatic generation using program analysis saves time, yet is limited by the analyzer’s capabilities and lacks question diversity (Zavala and Mendoza, 2018; Thomas et al., 2019; Russell, 2021; Lehtinen et al., 2021; Stankov et al., 2023).
¹Our data and code are available at https://github.com/aysafanxm/llm_code_tracing_question_generation
In light of the increasing potential of Large Language Models (LLMs) in sectors like code summarization and explanation (Chen et al., 2021; Siddiq et al., 2023), the question arises: Can LLMs generate high-quality code-tracing questions? Our study explores this query using GPT4 (OpenAI, 2023), leveraging prompts to guide its question generation based on given code snippets and descriptions. To assess the LLM’s capability in this pivotal aspect of computer science education, we devised a set of human evaluation metrics. This allowed for an objective appraisal of the LLM-generated questions, and, through a comparative analysis with human-created counterparts, critical aspects such as question quality, diversity, discernibility between human and AI authors, and relative superiority in quality were explored (Figure 1). These analyses have enhanced our understanding of the potential roles of LLMs in computer science education.
This investigation provides a foundation for considering the potential inclusion of LLMs in learning platforms, which could offer new possibilities for enhancing the learning experience in introductory programming courses. Given these advancements, our study contributes to the field as follows:
• The curation of a high-quality dataset consisting of human and LLM-generated code tracing questions and associated code snippets.
• An exploration and evaluation of GPT4’s capability in question generation, including comparisons with both GPT3.5-turbo and human-authored questions, and an examination of few-shot and zero-shot scenarios.
• The introduction of a human evaluation methodology and a comprehensive assessment of the quality of LLM-generated questions, offering valuable insights into the potential of LLMs in educational contexts.
2 Related Work
Question Generation: Early Question Generation (QG) research primarily focused on multiple-choice questions (Mitkov et al., 2006; Agarwal and Mannem, 2011) and questions with specific interrogatives (Heilman and Smith, 2010). With the emergence of the SQuAD dataset (Rajpurkar et al., 2016), context-dependent QG gained momentum (Du et al., 2017; Yuan et al., 2017; Subramanian et al., 2018; Puri et al., 2020). This extended to complex tasks like generating unanswerable questions (Choi et al., 2018; Zhu et al., 2019; Reddy et al., 2019) and multi-hop reasoning (Pan et al., 2020, 2021; Shridhar et al., 2022). Our work, focusing on generating code tracing questions in CS education domain, addresses unique challenges around code, natural language, and pedagogical comprehension, inadequately covered by previous methods due to a lack of specialized datasets.
Code LLMs for CS Education: Recent advances in code large language models (LLMs) (Chen et al., 2021; Wang et al., 2021; Le et al., 2022; Wang et al., 2023) have enabled various downstream applications, including code completion, retrieval, summarization, explanation, and unit test generation (Lu et al.; Siddiq et al., 2023; Tian et al., 2023). Studies have showcased the LLMs’ ability to generate novice programming content comparable to humans (Finnie-Ansley et al., 2022; Piccolo et al., 2023). LLMs have been utilized in classroom environments (Kazemitabaar et al., 2023), to generate coding exercises and explanations (Sarsa et al., 2022), and to create counterfactual questions (Narayanan et al., 2023). Our study represents the first exploration of LLMs for code tracing question generation, a critical component of CS Education, thus underscoring the potential of these models for generating educational content.
3 Our Approach
3.1 Task Definition
In automatic tracing question generation, given an optional description $d \in D \cup \{\emptyset\}$ detailing the code context, and a code snippet $c \in C$ provided by an instructor or student, the aim is to generate a set of relevant questions $Q'$ for student practice. This task can be formally defined as a function:
$$f : (d, c) \mapsto Q'$$
where $D$ represents all possible descriptions, $C$ all possible code snippets, and $Q'$ is a subset of all possible questions $Q$.
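To make the task definition concrete, one way to encode $f$ is as a function from an optional description and a code snippet to a set of questions. The sketch below is our illustration, not the paper's code; we use Java since the study targets Java tracing questions.

```java
import java.util.List;
import java.util.Optional;

/** Illustrative encoding of f : (d, c) -> Q' from the task definition. */
@FunctionalInterface
interface TracingQuestionGenerator {
    /**
     * @param description optional natural-language context d (may be absent)
     * @param codeSnippet code snippet c from an instructor or student
     * @return generated tracing questions Q'
     */
    List<String> generate(Optional<String> description, String codeSnippet);
}
```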
3.2 Curating the Code-Tracing Question Dataset
For our experiment, we curated a unique dataset reflecting the range of tracing questions encountered by beginner programmers. We sourced 158 unique questions from CSAwesome\(^2\), a recognized online Java course aligned with the AP Computer Science A curriculum. To enhance diversity, we added 18 questions extracted from relevant YouTube videos. Other platforms and sources were also examined but didn’t fit due to a lack of explicit tracing questions. Our final dataset consists of 176 unique code snippets and question pairs, allowing a fair evaluation of LLMs’ ability to generate introductory programming tracing questions.
### 3.3 Prompt Engineering and Model Selection
In our iterative approach to prompt engineering and model selection, we first refined prompts and then generated tracing questions using GPT-3.5-turbo and GPT-4. Using BERTScore, we assessed question diversity and similarity. Based on these insights, we combined the optimized prompt with the chosen model to determine the most effective generation approach, be it few-shot or zero-shot. Our final prompt, refined iteratively from (Brown et al., 2020), positioned in Appendix B, adopts an expert instructor’s perspective, encourages deep understanding via code-tracing questions, and maximizes the inherent versatility of LLMs.
Next, we considered GPT-3.5-turbo and GPT-4 for model selection, and investigated the diversity of the generated tracing questions using BERTScore (Zhang* et al., 2020). For the automatic evaluation of diversity, we adopted the following methodology: for each code snippet, we utilize a single human-authored tracing question as the reference. Both GPT3.5-turbo and GPT4 are then tasked with generating multiple tracing questions for every snippet. Following this, we employ regular expressions in a post-processing step to segment the generated content, isolating individual tracing questions. Subsequently, for each generated prediction \(p\), its BERTScore is computed in relation to the reference, denoted as BERTScore(\(reference, p\)).
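The paper does not give the exact expressions used; the sketch below shows one plausible post-processing step that splits a model response on numbered list markers. All names and the pattern itself are our assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

/** Illustrative post-processing: split an LLM response into individual questions. */
class QuestionSegmenter {

    // Matches markers such as "1.", "2)" or "Tracing question 3:" at line start.
    private static final Pattern MARKER =
            Pattern.compile("(?m)^\\s*(?:Tracing question\\s*)?\\d+[.):]\\s*");

    static List<String> split(String llmOutput) {
        List<String> questions = new ArrayList<>();
        for (String part : MARKER.split(llmOutput)) {
            String q = part.strip();
            if (!q.isEmpty()) questions.add(q);
        }
        return questions;
    }
}
```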
The boxplot in Figure 2 displays the Precision, Recall, and F1 scores for both models. From the graph, it’s clear that GPT-3.5-turbo has a median Precision score around 0.45, Recall slightly above 0.6, and an F1 score hovering around 0.5. In comparison, GPT-4 shows a more balanced performance with a median Precision score close to 0.6, Recall near 0.55, and F1 just above 0.5. Notably, the variability in scores, particularly for GPT-4, highlights the diverse outcomes in its results. Based on our results, we chose GPT4 for subsequent evaluations. Enhanced performance examples from GPT4 are in Appendix C.
Next, we hypothesized that the few-shot question generation approach, which feeds the model with three tracing question examples and their respective code snippets, would yield higher-quality questions than the zero-shot generation that relies solely on the prompt. Contrary to our expectations, the experiment showed that the few-shot method introduced a significant bias towards the example questions, thus narrowing the diversity in the generated questions. Consequently, we opted for the zero-shot generation in our tests, which fostered a broader spectrum of question types. Detailed examples of outcomes from both the zero-shot and few-shot approaches are available in Section 4.4.
### 3.4 Human Evaluation
Next, we conducted a human evaluation comparing the quality of GPT4-generated and human-authored tracing questions. The expert evaluators were meticulously screened based on specific criteria: they had to be computer science graduate students with at least one year of programming teaching or tutoring experience. Four such experts, meeting these criteria, participated in the evaluation.
Each evaluator was assigned a set of 44 randomly selected code snippets from a pool of 176 human-authored tracing questions. For each snippet, evaluators received a pair of questions (one human-authored and one GPT4-generated) in a randomized order to mitigate potential ordering bias. Evaluators were kept unaware of the questions' authorship.
The evaluators rated each question based on five criteria shown in Table 1. They also guessed the question’s authorship and expressed their preference between the pair. Detailed evaluation criteria and labels can be found in Table 1.
### Table 1: Criteria used for expert evaluation.
<table>
<thead>
<tr>
<th>Criteria</th>
<th>Label</th>
</tr>
</thead>
<tbody>
<tr>
<td>Relevance to Learning Objectives</td>
<td>1-5</td>
</tr>
<tr>
<td>Tracing or not</td>
<td>Yes/No</td>
</tr>
<tr>
<td>Clarity of the Question</td>
<td>1-5</td>
</tr>
<tr>
<td>Difficulty Level</td>
<td>1-5</td>
</tr>
<tr>
<td>Relevance to the Given Code Snippet</td>
<td>1-5</td>
</tr>
<tr>
<td>Ability to Distinguish Source</td>
<td>Human-created / AI-generated</td>
</tr>
<tr>
<td>Preference for Better Question</td>
<td>Check preferred</td>
</tr>
</tbody>
</table>
#### Table 2: Comparative statistics for human and GPT4 generated questions.
<table>
<thead>
<tr>
<th>Criteria</th>
<th>Human Mean</th>
<th>Human Median</th>
<th>GPT4 Mean</th>
<th>GPT4 Median</th>
</tr>
</thead>
<tbody>
<tr>
<td>Relevance to Learning</td>
<td>4.78</td>
<td>5.00</td>
<td>4.62</td>
<td>5.00</td>
</tr>
<tr>
<td>Question Clarity</td>
<td>4.72</td>
<td>5.00</td>
<td>4.42</td>
<td>5.00</td>
</tr>
<tr>
<td>Appropriate Difficulty</td>
<td>4.75</td>
<td>5.00</td>
<td>4.43</td>
<td>5.00</td>
</tr>
<tr>
<td>Relevance to Code</td>
<td>4.72</td>
<td>5.00</td>
<td>4.64</td>
<td>5.00</td>
</tr>
</tbody>
</table>
### 4 Analyses and Results
This section details our analysis and highlights the results, encompassing quality ratings, expert perceptions, and textual similarities in question generation.
#### 4.1 Comparative Analysis of Quality Ratings
Table 3 below reports the Mann-Whitney U statistics and p-values comparing expert ratings of human-authored and GPT4-generated questions on each criterion; the differences are significant (p < 0.05) for every criterion except relevance to the given code snippet.
#### Table 3: Mann-Whitney U test results for the quality ratings.
<table>
<thead>
<tr>
<th>Criteria</th>
<th>U-val</th>
<th>p</th>
</tr>
</thead>
<tbody>
<tr>
<td>Relevance to learning objectives</td>
<td>3688.0</td>
<td>0.047</td>
</tr>
<tr>
<td>Question Clarity</td>
<td>3392.0</td>
<td>0.011</td>
</tr>
<tr>
<td>Difficulty Appropriateness</td>
<td>3540.5</td>
<td>0.015</td>
</tr>
<tr>
<td>Relevance to the given code snippet</td>
<td>3918.5</td>
<td>0.595</td>
</tr>
</tbody>
</table>
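The U values above come from the Mann-Whitney test cited in the references. For the curious reader, the statistic itself can be computed directly from the two rating samples, as in this minimal sketch (ours; it omits the tie-corrected p-value computation).

```java
/** Minimal Mann-Whitney U statistic for two samples of ratings. */
class MannWhitney {
    static double uStatistic(double[] x, double[] y) {
        double u = 0.0;
        for (double xi : x) {
            for (double yj : y) {
                if (xi > yj) u += 1.0;        // x wins the pairwise comparison
                else if (xi == yj) u += 0.5;  // ties contribute half
            }
        }
        return u;
    }
}
```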
#### 4.2 Expert Perception of Question Authorship
We further evaluated the discernibility of LLM-generated questions from human-authored ones using a Confusion Matrix (Table 5). Approximately 56% (99 out of 176) of GPT4-generated questions were mistakenly identified by experts as human-generated, and about 20% (35 out of 176) of human-authored questions were misattributed to the GPT4. This overlap signifies the high quality of the generated questions and GPT4’s proficiency in producing pedagogically relevant tracing questions. Moreover, the matrix reveals an evaluator bias toward attributing higher-quality questions to human authorship.
#### 4.3 Textual Similarity between Questions
Table 6 presents BLEU (Post, 2018), ROUGE-1/2/L (Lin, 2004), and BERTScores (Zhang* et al., 2020), comparing the similarity between the randomly selected GPT4 generated questions and corresponding human-authored questions. The low BLEU and ROUGE scores suggest that GPT4 is generating distinct, non-verbatim questions compared to human-authored questions. A moderate BERTScore, reflecting semantic similarity, suggests that GPT4-generated questions align with the context of human-authored ones. This further underscores GPT4’s capability to independently generate relevant and diverse code-tracing questions, distinct from those crafted by humans.
### 4.4 Few-shot vs Zero-shot Generation Results
Few-shot generation biased our model towards the provided examples, largely reducing question diversity. In contrast, zero-shot generation yielded more diverse questions, prompting us to favor it for broader question variety in our experiment. Detailed examples of the generated results for both 0-shot and few-shot methods can be found in Appendix D.
Table 4 provides a side-by-side comparison of GPT-4’s performance in few-shot and zero-shot settings. The zero-shot results exhibit a broader range of question types, while the few-shot results seem to be more templated, reflecting the bias introduced by the provided examples.
Possible reasons for these observations include the influence of training data and model design in zero-shot scenarios, allowing GPT-4 to tap into its vast training experience. In contrast, in few-shot scenarios, the model might overly adhere to the provided examples, interpreting them as stringent templates, which can compromise output diversity. The balance between the nature of the task and the examples becomes pivotal in few-shot settings, potentially leading to outputs that may sacrifice accuracy or diversity. These hypotheses warrant further investigation in future work.
### 5 Conclusion
This study explored the capability of GPT-4 in generating code-tracing questions that rival the quality of those crafted by human educators. The findings illuminate the potential of LLMs to bolster programming education, marking a significant stride in the domain of code-tracing question generation and LLM application. This sheds light on scalable, high-quality automated question generation.
Limitations and Future Work
This study marks a step closer in evaluating LLMs for code tracing question generation, but it is not without its limitations. Our research was primarily anchored to GPT-4, raising concerns about the generalizability of our findings to other LLMs, such as CodeT5+. Moreover, the study did not delve into the personalization of tracing questions based on individual student submissions, a facet that could greatly enhance the learning experience. Furthermore, the real-world educational efficacy of the LLM-generated questions remains an open question, given that our study did not involve actual students.
Several avenues beckon for further exploration. Evaluations with a broader range of models will offer a more comprehensive perspective on LLM capabilities. While our study centered on introductory Java tracing questions, assessing LLM versatility across different programming domains is imperative. The potential of LLMs extends beyond mere question generation; by tailoring questions to student needs, we can amplify the educational relevance. Our roadmap includes the development of an educational platform integrated with LLM questions, followed by classroom experiments and usability testing. To ensure broader applicability, expanding our dataset is crucial. Lastly, our findings on few-shot and zero-shot learning necessitate further investigation into model adaptability, biases in question generation, and the potential of intermediate-shot learning.
These directions not only underscore the transformative potential of LLMs in AI-driven education but also emphasize the importance of comprehensive evaluations.
Ethical Statement
Our exploration of Large Language Models (LLMs) in introductory programming education was conducted ethically. We sourced public data and maintained evaluator anonymity and data confidentiality through secure storage. Evaluators were informed of the objectives and participated voluntarily. All evaluation results, as committed in the IRB forms, are securely stored. We strived for educational fairness by properly compensating the educators involved in our evaluation. We are mindful of the societal impacts of LLM integration in education. While acknowledging their promise, we believe careful consideration of pedagogical goals within the educational ecosystem is vital. Our future work will be guided by these ethical principles of privacy, informed consent, secure data handling, inclusivity, and conscientious progress focused on students’ best interests.
References
Henry B. Mann and Donald R. Whitney. 1947. On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics, 18(1):50–60.
A Questionnaire
Description: (blank)
Code:
```java
int[][] m = {{1,1,1,1},{1,2,3,4},{2,2,2,2},{2,4,6,8}};
int sum = 0;
for (int k = 0; k < m.length; k++) {
sum = sum + m[m.length-1-k][1];
}
```
Question 1: Given the following code segment, what is the value of sum after this code executes?
Question 2: What is the role of the expression ‘m[m.length-1-k][1]’ in the code?
Annotation
1. Relevance to Learning Objectives: The question is relevant to the learning objectives of an introductory programming course. (Label: 1-5)
2. Tracing or not: Is this a tracing question? (Label: Yes or No)
3. Clarity of the Question: The question presented is clear and the language used in the question is easy to understand. (Label: 1-5)
4. Difficulty Level: The difficulty level of the question is appropriate for an introductory programming course. (Label: 1-5)
5. Relevance to the Given Code Snippet: The question is appropriately related to the code snippet provided in the question. (Label: 1-5)
6. Ability to Distinguish Human Authored from Automatically Generated Questions: Can you tell if the question is human-authored or automatically generated? (Label: Human-created or AI-generated)
7. I think this is a better tracing question. (Check the box under the better question)
This is an example of our questionnaire sent to annotators.
B Prompts
The final prompt we employed offers LLMs a detailed context: it requests the generation of questions from an expert instructor’s perspective within a defined pedagogical setting. It outlines the merits of code-tracing questions, gives an insight into their typical structure, and highlights their educational aim, specifically, promoting in-depth understanding rather than just assessing knowledge. Unlike the data collection process, where each code snippet is linked to a single question, the prompt is designed to produce multiple valid tracing questions for the same snippet, which leverages the inherent diversity and breadth potential of LLMs.
**User Prompt:** In your role as an education expert in an introductory Java programming course, you are equipped with a deep understanding of Java and teaching methodologies. Your aim is to shape the minds of young learners, paving their path to becoming proficient programmers. One of your potent teaching techniques involves providing students with real-world coding practice requirements and letting them craft working code snippets.
Consider a scenario where you have presented your students with a requirement to develop a basic calculator program in Java, performing fundamental arithmetic operations like addition, subtraction, multiplication, and division. Enthusiastically, the students pour their effort into crafting varied solutions, reflecting their unique approach to the problem and their understanding of Java programming.
But your task doesn’t end with the generation of code. It’s now time for the students to delve deeper, and here comes the importance of tracing in programming. Code tracing is the process of manually going through the execution of code, step-by-step, to understand its flow and logic. It’s a fundamental practice in learning programming, aiding students in debugging their programs and developing a stronger understanding of code behavior.
One of the best ways to encourage code tracing is by generating tracing questions. Such questions stimulate the students to analyze their code at a granular level, fostering their problem-solving skills and enhancing their understanding of the Java language.
Remember, your questions should stem from two sources - the initial Requirement and the Code the students have produced. For example, given a code snippet, you might ask 'What is the output of this code if the user inputs two and three and chooses to multiply?' or 'Can this code handle the division by zero scenario, according to the requirement?'.
Your goal is not just to evaluate their coding skills, but to instill a deep, lasting understanding of Java programming in them. By asking relevant tracing questions, you’re helping to nurture a new generation of critical thinkers, problem solvers, and proficient programmers.
You will ask many relevant tracing questions given 1. Requirement, 2. Code, as below:
**Requirement:** (blank)
```java
int x = 3;
int y = 2;
if (x > 2)
x++;
if (y > 1)
y++;
if (x > 2)
System.out.print("first");
if (y < 3)
System.out.print("second");
System.out.print("third");
```
This is an example of our prompt.
C GPT3.5 vs GPT4
```java
int x = 3;
int y = 2;
if (x > 2)
    x++;
if (y > 1)
    y++;
if (x > 2)
    System.out.print("first ");
if (y < 3)
    System.out.print("second ");
System.out.print("third");
```
GPT-3.5 Result:
Tracing Questions:
1. What are the initial values of x and y in this code?
2. Does the if statement on line 3 execute? Why or why not?
3. Does the if statement on line 5 execute? Why or why not?
4. What is the value of x after line 4 has executed?
5. What is the value of y after line 6 has executed?
6. Which System.out.print statement(s) will execute in this code?
7. What is the output of this code?
8. What happens if the initial value of x is 1? How does the output change?
9. What happens if the initial value of y is 3? How does the output change?
10. Can this code handle other arithmetic operations besides addition and subtraction, according to the requirement?
GPT-4 Result:
1. What will be the values of x and y after executing the first two if statements?
2. What is the output of the code snippet as it is?
3. If we change the initial value of x to 1, what will be the output of the code snippet?
4. If we change the initial value of y to 0, what will be the output of the code snippet?
5. What will happen if both x and y are initialized with values less than or equal to 2? What will be the output in that case?
6. How would you modify the code to include an additional condition where it checks if x is equal to y and prints "equal" if it’s true?
7. If we swap the positions of the second and third if statements, would the output change? Why or why not?
8. Can you explain the flow of the code execution given the initial values of x and y? What are the key decision points in the code?
9. What happens if we change the second if statement to "if (y == 1)"? How does that change the output of the code snippet?
10. How would you modify the code to print "none" if none of the conditions are met in the if statements?
This is an example of GPT3.5-turbo v.s. GPT4 for tracing question generation on a simple java problem.
```java
public static int redo(int i, int j) {
    if (i==0)
        return 0;
    else
        return redo(i/j, j)+1;
}
```
GPT-3.5 Result:
1. What is the purpose of the redo() method?
2. What is the return type of the redo() method?
3. What is the initial value of i and j when the redo() method is called?
4. What happens if the value of i is 0?
5. What happens if the value of j is 0?
6. What happens if the value of i is negative?
7. What happens if the value of j is negative?
8. What happens if the value of i is greater than j?
9. What is the output of the redo() method if the values of i and j are 8 and 2 respectively?
10. What is the output of the redo() method if the values of i and j are 10 and 3 respectively?
GPT-4 Result:
1. What is the base case for this recursive function, and what does it return?
2. How is the recursion progressing towards the base case in this code?
3. What will be the output of this function if the inputs are i=16 and j=2?
4. What will be the output of this function if the inputs are i=81 and j=3?
5. How will the function behave if both i and j are equal to 0? Will it cause any issues?
6. How will the function behave if i is less than j? Is the output as expected according to the requirement?
7. How does the function handle the scenario when j is 1? Is the output as expected according to the requirement?
8. Can this function handle negative inputs for i and j? If not, how can it be modified to handle such cases?
9. What is the maximum recursion depth for this function, and how does it depend on the input values i and j?
10. Are there any edge cases or scenarios that this function might not handle well? If so, can you suggest any modifications to improve the code?
This is an example of GPT3.5-turbo v.s. GPT4 for tracing question generation on a recursion java problem.
D Zero-Shot vs Few-Shot in GPT4
Requirement: (blank)
Code:
```java
public static int redo(int i, int j)
{
if (i==0)
{
return 0;
}
else
{
return redo(i/j, j)+1;
}
}
```
**Few-Shot Prompt** ... Here are some more examples of tracing questions based on code snippet:
1. Requirement: (blank)
Code:
```java
for (int i = 3; i <= 9; i++)
{
System.out.print("*");
}
```
Tracing question:
How many times does the following method print a *?
2. Requirement: (blank)
Code:
```java
//precondition: x >=0
public static void mystery (int x)
{
System.out.print(x % 10);
if ((x / 10) != 0)
{
mystery(x / 10);
}
System.out.print(x % 10);
}
```
Tracing question:
Given the following method declaration, which of the following is
printed as the result of the call mystery(1234)?
3. Requirement: (blank)
Code:
```java
public static int product(int n)
{
if (n <= 1)
return 1;
else
return n * product(n - 2);
}
```
Tracing question:
Given the following method declaration, what value is returned as the result of the call product(5)?
Your goal is not just to evaluate their coding skills, but to instill a deep, lasting understanding of Java programming in them. By asking relevant tracing questions, you're helping to nurture a new generation of critical thinkers, problem solvers, and proficient programmers. You will ask many relevant tracing questions given 1. Requirement, 2. Code, as below. You don’t have to strictly follow the examples, and we encourage you to be creative when thinking about the tracing questions:
Requirement: (blank)
Code:
```java
public static int redo(int i, int j)
{
if (i==0)
return 0;
else
return redo(i/j, j)+1;
}
```
Zero-Shot Result:
1. What is the purpose of this function? What does it aim to accomplish?
2. What is the base case for this recursive function? What happens when the base case is reached?
3. If the input values are i = 8 and j = 2, what will be the output of the function? Please trace through the code step-by-step and explain your reasoning.
4. If the input values are i = 27 and j = 3, what will be the output of the function? Please trace through the code step-by-step and explain your reasoning.
5. What happens if the input values are i = 0 and j = 5? Does the function handle this case correctly? Explain your reasoning.
6. What happens if the input values are i = 5 and j = 0? Does the function handle this case correctly according to the requirement? Explain your reasoning.
7. What happens if both input values are negative, such as i = -8 and j = -2? Does the function handle this case correctly according to the requirement? Explain your reasoning.
8. Are there any edge cases or potential issues with the current implementation of the function? If so, what are they and how can they be addressed?
Few-Shot Result:
Tracing question 1: What is the output of the method redo(9, 3)?
Tracing question 2: What is the output of the method redo(0, 5)?
Tracing question 3: What is the output of the method redo(27, 3)?
Tracing question 4: How many times does the method redo call itself when i = 16 and j = 2?
Tracing question 5: Can this method handle cases where either i or j is a negative number? If so, what would be the output for redo(-6, 2)?
This is an example GPT4 in Few-Shot setting v.s. Zero-Shot Setting.
State Space Reduction by Proving Confluence
Stefan Blom and Jaco van de Pol
CWI, P.O.-box 94.079, 1090 GB Amsterdam, The Netherlands
Email: {sccblom, vdpol}@cwi.nl
Abstract We present a modular method for on-the-fly state space reduction. The theoretical foundation of the method is a new confluence notion for labeled transition systems. The method works by adding confluence information to the symbolic representation of the state space. We present algorithms for on-the-fly exploration of the reduced state space, for generating confluence information and for a symbolic reduction, called prioritization. The latter two algorithms rely on an automated theorem prover to derive the necessary information. We also present some case studies in which tools that implement these algorithms were used.
Keywords: Labeled transition systems, state space reduction, confluence, on-the-fly reduction, theorem proving, symbolic transformation, branching bisimulation, $\mu$CRL tool set
1 Introduction
A popular approach to the verification of distributed systems is based on an exhaustive state space exploration. This approach suffers from the well-known state space explosion problem. Much research is devoted to algorithms that generate a reduced, but essentially equivalent, state space. Collectively, these methods are called partial-order reduction methods. In this paper we introduce a new method for generating a reduced state space that is branching bisimilar with the original one.
The method is based on a subtle variation on the confluence notion for labeled transition systems (LTS). Invisible \( \tau \) steps in the LTS may be confluent or not. All states in a subset connected by confluent steps are branching bisimilar. By virtue of our new confluence notion, this subset may be replaced by a particular representative state, and only transitions outgoing from this representative need to be explored. This is explained in Section 2.
In order to apply confluence for the immediate generation of the reduced state space, the confluent transitions must be detected before generating the LTS. This is solved in Section 3 by representing the system specification in an intermediate format, called linear process. A specification in this format consists of a finite number of symbolic transitions. The confluence property of each symbolic transition (or in fact a stronger approximation) can be expressed as a Boolean formula over the data types that occur in the specification. This formula is solved by a separate automated theorem prover. If the formula can be proved, the transition is marked as confluent, allowing for some reduction of the corresponding state space.
In some cases it is even possible to feed the information on confluent symbolic transitions back to the symbolic level. This results in a transformation on linear
processes, which we call *symbolic prioritization* (Section 4). In Section 5 we show a number of applications, to which we applied our techniques to reduce the state space. Some of these examples go beyond partial-order reduction methods based on super-determinism. Furthermore, confluence detection and symbolic prioritization can be applied to infinite state spaces as well.
Our ideas are implemented in the context of the \(\mu\)CRL tool set [2]. The basic functionality of this tool set is to generate a state space (LTS) out of a \(\mu\)CRL specification. To this end *linear processes* are used as an intermediate representation. This contributes to the modularity of the tool set. In particular, several optimizations are implemented as separate tools that transform a linear process, aiming at a reduction of the state space to be generated.
To this tool set, we added symbolic prioritization as yet another optimizer on linear processes. Moreover, the on-the-fly reduction algorithm has been integrated in the state space generator of the tool set.
With the approach in this paper we further contribute to modularity. In particular, we defined a notion of confluence, which is quite liberal, but nevertheless sufficient to ensure correctness of the on-the-fly reduction algorithm. Finding confluent transitions is a separate task. In fact, while the maximal set of confluent transitions is hard to detect, it is perfectly acceptable if actual confluence detection algorithms only find a subset.
We propose to use an automated theorem prover to find a reasonable subset of confluent transitions, but an alternative approach could be to prove confluence by hand, or with interactive theorem provers. In cases where the specification is automatically generated from source code, it is sometimes even possible to know a priori that certain transitions are confluent.
### 1.1 Related Work
Several *partial order reduction* algorithms that preserve branching bisimilarity have been proposed in the literature [18,14,13]. These approaches also allow the reduction to a representative subset of all states. Some of these approaches restrict attention to deterministic transitions. All these approaches involve some notion of *determinacy*.
For instance, criterion 88 [18] allows the selection of one outgoing transition from a state provided it is an invisible *super-deterministic* transition. In our approach, such a transition need not be deterministic, but only confluent, which means that *eventually* the computation paths reach the same state. It can be proved that the set of super-deterministic transitions forms a confluent set of \(\tau\)-transitions, but not vice versa.
Another criterion for obtaining a branching bisimilar state space is 85, which prevents that a transition is postponed forever. This is implemented in [13] by the algorithm Twophase. In phase one, a state is expanded by repeatedly applying *deterministic* invisible transitions, which is then fully expanded in phase two. Our algorithm can be seen as a modification: we take confluent transitions...
until a terminal strongly connected component (SCC) is reached, instead of
deterministic transitions only.
Confluence is a property of sets of invisible transitions in an LTS. Several
confluence notions have been studied in the setting of branching bisimi-
lation [11,19,8]. In [1] these notions are compared systematically. In summary,
the notions in [11,19] only deal with global confluence, whereas we deal with
partial confluence, i.e. it suffices to identify a subset of the τ-transitions which is
confluent. This is essential for practical applications, in which it is seldom the
case that all τ-transitions are confluent.
The confluence notion in [11] was adapted to partial confluence in [8]. In order to apply it for state space reduction, this notion required the absence of τ-loops. [8] introduced an algorithm to reduce concrete state spaces as follows. First, all strongly connected τ-components are collapsed, in order to get rid of τ-cycles. Then, the maximal set of strongly confluent τ-steps is computed, and strongly confluent τ-transitions are given priority over other transitions. We note that these steps can only be applied after generating the unreduced state space. In particular, the required absence of τ-loops is a severe restriction for on-the-fly generation.
In our paper, we use theorem proving to find (not necessarily all) confluent transitions in the specification, which allows the immediate generation of the reduced state space. So we provide a truly on-the-fly reduction method. Our new method even allows us to perform some optimization at the specification level already.
Finally, also [19] introduces a notion of confluence. That notion has the nice theoretical property that the τ-transition relation is confluent if and only if it only connects states that are branching bisimilar. For reduction purposes that definition is not convenient, because it would require considering all outgoing transitions in each terminal SCC. Our notion of confluence is slightly stronger, and as a consequence we only have to take all outgoing transitions from one member of each terminal SCC.
2 Confluence and Reduction On-the-fly
In this section we present the confluence property, a state space reduction method based on this property, and an algorithm that computes the reduced state spaces "on-the-fly".
We use labeled transition systems (LTSs) to model state spaces. Confluence is a property of sets of invisible transitions in an LTS. Any set of transitions induces an equivalence relation on the set of states, which identifies states in the same component. If a set of invisible transitions is confluent then the induced equivalence relation is a branching bisimulation. Moreover, each finite equivalence class has a representative, whose transitions are the same as those of the whole equivalence class. Because of these representatives, we can give an algorithm that computes the reduced state space without computing the whole original state space.
Because the reduction preserves branching bisimilarity, the reduced state space can be used to check all properties of the original system that are expressible in action based CTL*-X (computation tree logic without next-time) or HML (Hennessy-Milner logic).
2.1 Confluence
The labels of our LTSs will be taken from a given set Act. We assume that Act contains a special element \( \tau \), representing an invisible action.
**Definition 1 (LTS).** A labeled transition system is a triple \( (S, \rightarrow, s_0) \), consisting of a set of states \( S \), transitions \( \rightarrow \subseteq S \times \text{Act} \times S \) and an initial state \( s_0 \in S \).
We write \( s \xrightarrow{a} t \) for \( (s, a, t) \in \rightarrow \), and \( s \xrightarrow{(a)} t \) if either \( s \xrightarrow{a} t \), or \( a = \tau \) and \( s = t \). Given a subset of \( \tau \)-transitions \( c \subseteq \rightarrow \), we write \( s \xrightarrow{\tau_c} t \) for \( (s, \tau, t) \in c \), and \( \xrightarrow{\tau_c}{}^{*} \) for the transitive reflexive closure of \( \xrightarrow{\tau_c} \). Finally, we write \( S_1 \mathbin{\underline{\leftrightarrow}_b} S_2 \) to denote that the LTSs \( S_1 \) and \( S_2 \) are branching bisimilar [5].
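To make the definitions below concrete, here is a minimal sketch (ours, not part of the μCRL tool set) of an explicit LTS in Python; the state names and the `TAU` marker for the invisible action are illustrative assumptions:

```python
from collections import defaultdict

TAU = "tau"  # stands for the invisible action

class LTS:
    """Explicit labeled transition system (S, ->, s0)."""
    def __init__(self, init):
        self.init = init
        self.trans = defaultdict(set)  # state -> set of (action, target)

    def add(self, s, a, t):
        self.trans[s].add((a, t))

    def next(self, s):
        """All outgoing transitions (a, t) of state s."""
        return self.trans[s]

# toy example: s0 --tau--> s1 and s0 --a--> s2
lts = LTS("s0")
lts.add("s0", TAU, "s1")
lts.add("s0", "a", "s2")
```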
The idea is that a subset \( c \) of the set of invisible transitions is confluent if the steps in \( c \) cannot make real choices. This is formalized with two conditions. First, if in a certain state two different sequences of \( c \)-steps are possible, then these sequences can be extended with more \( c \)-steps to sequences that end in the same state. Second, if in a state both a step from \( c \) and an \( a \)-step are possible, then after doing the step from \( c \) another \( a \)-step is possible, and the results of the two \( a \)-steps are in the same \( c \)-equivalence class. These two requirements are expressed in the following definition:
**Definition 2 (confluence).** Let \( c \) be a subset of \( \rightarrow \). Then \( c \) is confluent iff, for all states \( s, t, u \):

1. whenever \( s \xrightarrow{\tau_c}{}^{*} t \) and \( s \xrightarrow{\tau_c}{}^{*} u \), there exists a state \( v \) such that \( t \xrightarrow{\tau_c}{}^{*} v \) and \( u \xrightarrow{\tau_c}{}^{*} v \);
2. whenever \( s \xrightarrow{\tau_c} t \) and \( s \xrightarrow{a} u \), there exists a state \( v \) such that \( t \xrightarrow{(a)} v \), and \( u \) and \( v \) are in the same \( c \)-equivalence class.
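For a finite, explicitly given LTS, both conditions can be checked by brute force. The sketch below (an illustration, not the tool set's algorithm; it reuses `LTS` and `TAU` from the sketch above and represents \( c \) as a set of state pairs) uses reachability over \( c \) for the first condition and \( c \)-equivalence classes for the second:

```python
from itertools import product

def c_reach(edges, s):
    """States reachable from s via zero or more steps in `edges`."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for (x, y) in edges:
            if x == u and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def c_class(c, s):
    """c-equivalence class of s: component of the symmetric closure."""
    return c_reach(c | {(y, x) for (x, y) in c}, s)

def is_confluent(lts, c, states):
    for s in states:
        # condition 1: divergent c-sequences can always be joined again
        for t, u in product(c_reach(c, s), repeat=2):
            if not (c_reach(c, t) & c_reach(c, u)):
                return False
        # condition 2: c-steps commute with every step out of s;
        # the (a)-option allows staying in t when a is invisible
        for (x, t) in c:
            if x != s:
                continue
            for (a, u) in lts.next(s):
                closed = (a == TAU and t in c_class(c, u)) or any(
                    v in c_class(c, u) for (b, v) in lts.next(t) if b == a)
                if not closed:
                    return False
    return True
```

For the toy LTS above, `is_confluent(lts, {("s0", "s1")}, {"s0", "s1", "s2"})` returns `False`, because the τ-step to s1 discards the a-option.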
2.2 Reduction
As mentioned before, the equivalence relation induced by a confluent set is a branching bisimulation. By taking the original state space modulo this equivalence one can reduce the state space. An effective way of computing the transitions of the reduced state space is to find a representative of each equivalence class, whose transitions are precisely the transitions of the equivalence class. These representatives are guaranteed to exist if the equivalence class (seen as a directed graph) has a Terminal Strongly Connected Component (TSCC). In that case every element of the TSCC can be chosen as a representative. Because of confluence there is at most one TSCC. The notion of representation map is based on this idea. The first condition forces every element in an equivalence class to have the same representative. The second condition forces this representative to be in the TSCC.
Definition 3 (representation map). Given an LTS $S = (S, \rightarrow, s_0)$ with a confluent subset of $\tau$-steps labeled $c$, a map $\phi : S \rightarrow S$ is called a representation map if $\forall s, t \in S : s \xrightarrow{\tau_c} t \Rightarrow \phi(s) = \phi(t)$ and $\forall s \in S : s \xrightarrow{\tau_c}{}^{*} \phi(s)$.
A representation map exists if and only if every equivalence class has a TSCC. Because we usually work with finite LTSs, the existence of TSCCs and hence of representation maps is guaranteed. Based on the notion of representation map we can define a reduced LTS. The set of states of the reduced LTS is the set of representatives. For every transition from a representative to a destination state in the original LTS, we include a transition from that representative to the representative of that destination in the reduced LTS. Finally, the new initial state is the representative of the old initial state. In [1] it is proven that the reduced LTS is branching bisimilar to the original LTS.
Definition 4 (LTS modulo $\phi$). Given a representation map $\phi$, we define $S/\phi = (\phi(S), \rightarrow_{\phi}, \phi(s_0))$, where $s \xrightarrow{a}_{\phi} t$ if $a \neq \tau_c$ and $\exists t' : s \xrightarrow{a} t'$ and $\phi(t') = t$. As usual, $\phi(S) = \{\phi(s) \mid s \in S\}$.
Theorem 5. Given a transition system $S$ with a confluent subset of $\tau$-steps labeled $c$ and a representation map $\phi$, we have that $S \mathbin{\underline{\leftrightarrow}_b} S/\phi$.
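Given a representation map, Definition 4 translates directly into code; a small sketch in the same style as above (with `phi` assumed to be a valid representation map and `tau_c` the set of confluent step pairs):

```python
def lts_modulo(lts, phi, states, tau_c):
    """Build S/phi (Definition 4): keep only representatives and
    redirect each remaining transition to a representative."""
    reduced = LTS(phi(lts.init))
    for s in states:
        if phi(s) != s:
            continue  # only representatives survive
        for (a, t) in lts.next(s):
            if a == TAU and (s, t) in tau_c:
                continue  # confluent tau-steps are dropped
            reduced.add(s, a, phi(t))
    return reduced
```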
2.3 Algorithm for Reduction On-the-fly
The essential functions in an “on-the-fly” interface for an LTS are a function that yields the initial state and a function that computes outgoing transitions of a given state. Given an on-the-fly representation of an LTS and the label for confluent transitions, the key to providing an on-the-fly representation of the reduced LTS is a function that computes a representation map. (See Figure 1.)
Such a function must return a representative from the terminal strongly connected component of the $\tau_c$-graph. Moreover, this representative must be unique for all elements of an equivalence class. We implemented the latter requirement simply by maintaining a table of the representatives that we have computed so far. To compute a representative that is not yet in the table, we use a variation on Tarjan's algorithm for finding strongly connected components [17]. More precisely, we perform a depth-first search of the graph of confluent transitions, until we either find a state with a known representative, or backtrack from a node where we entered a strongly connected component; the first component completed in this way is the TSCC. In the latter case that node is chosen as the representative, and stored in the table.
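A compact and deliberately simplified Python sketch of this computation (recursive, so only suitable for small search depths; `c_next(s)` is an assumed helper yielding the $\tau_c$-successors of $s$):

```python
def make_representative(c_next):
    table = {}  # state -> representative, filled lazily

    def representative(s):
        if s in table:
            return table[s]
        index, low, counter = {}, {}, [0]

        def dfs(v):
            # Tarjan-style DFS over the graph of confluent transitions
            index[v] = low[v] = counter[0]
            counter[0] += 1
            for w in c_next(v):
                if w in table:            # known representative: reuse it
                    return table[w]
                if w not in index:
                    r = dfs(w)
                    if r is not None:
                        return r
                low[v] = min(low[v], low[w])
            if low[v] == index[v]:
                # backtracking from the root of the first completed SCC;
                # that component is terminal, so v can represent the class
                return v
            return None

        rep = dfs(s)
        for v in index:   # all visited states are in the same class
            table[v] = rep
        return rep

    return representative
```

With $c$ stored as a set of pairs, `c_next` could be `lambda s: [t for (x, t) in c if x == s]`.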
The table consumes a significant amount of memory. If this is unacceptable and a total order on the set of states is available then one can choose the least state in the TSCC as the representative and recompute the TSCC each time rather than storing it.
3 Confluence Detection by Theorem Proving
In the previous section we discussed a state space generation algorithm, which applies reduction on-the-fly, based on some information on confluent transitions.
ReducedInit()
    return Representative(Init())

ReducedNext(state)
    return { (a, Representative(s)) | a ≠ τ_c, (a, s) ∈ Next(state) }

Figure 1.
In this section we show how this information is obtained. In particular, by exploiting a special format for the specification, we show that (stronger approximations of) confluence properties can be expressed by quantifier-free first order formulae. These formulae can be solved by a separate automated theorem prover.
3.1 Symbolic specifications in $\mu$CRL
We implemented our ideas in the setting of the $\mu$CRL formalism. A $\mu$CRL specification [10,2] consists of an algebraic specification, defining the data sorts, and a process specification, defining the system under scrutiny as the parallel composition of several components, each specified as a process algebra expression. We utilize the result in [9], that such specifications can be transformed to a linear process without a considerable blow-up in size.
A linear process consists of a vector of global state variables (each with a particular sort from the algebraic data specification), an initial state vector (terms whose sorts match the state variables), and a set of program rules, traditionally called summands. These summands define the process behaviour in a condition/action/effect style, similar to I/O-automata or UNITY programs. These summands have the following form:\footnote{We focus on the essential ingredients, rather than concrete $\mu$CRL syntax.}
$$\sum_{e_i} [b_i(d, e_i)] \Rightarrow a_i(d, e_i);\; d := g_i(d, e_i)$$
We assume that the data algebra contains the special sorts $\text{Bool}$ for booleans and $\text{Act}$ for external actions. In the summands above, we have that:
- $e_i$ is a vector of local variables used for instance to model arbitrary input of this summand.
- $b_i(d, e_i)$ is a term of type $\text{Bool}$, with variables among $d$ and $e_i$, denoting the condition or guard of the summand.
- $a_i(d, e_i)$ is a term of type $\text{Act}$, with variables among $d$ and $e_i$, denoting the action executed by this summand.
- $g_i(d, e_i)$ is a vector of terms, whose sorts match the vector of global state variables.
Each summand specifies a structural transition \( \xrightarrow{}_i \) as follows:
\[
d \xrightarrow{\alpha}_i d' \iff \exists e_i \,.\; b_i(d, e_i) \wedge d' = g_i(d, e_i) \wedge \alpha = a_i(d, e_i)
\]
Together, a linear process specifies a *structural* labeled transition system \( (\Sigma, (\xrightarrow{}_i)_{i \in I}, s_0) \), from which the ordinary LTS can be obtained by taking the union of all structural transitions. Here a state in \( \Sigma \) is a vector of data values for the global variables; \( s_0 \) is the initial state vector; \( I \) is the (index) set of the summands; \( \xrightarrow{}_i \) is the structural transition relation generated by summand \( i \), which is a subset of the transition relation of the whole LTS.
Note that a structural transition defined in this way is partial (due to the enabling condition \( b_i \)) and non-deterministic (due to the choice involved in \( \exists e_i \)). Also note that one summand may generate transitions with various labels. A similar decoupling of action labels and structural transitions occurs in [18].
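To illustrate this semantics, the following sketch (ours; it assumes finite domains for the local variables \( e_i \) so that they can be enumerated) computes the structural transitions out of a given state vector:

```python
from itertools import product

class Summand:
    """One condition/action/effect rule of a linear process."""
    def __init__(self, domains, guard, action, effect):
        self.domains = domains  # finite domains of the local variables e_i
        self.guard = guard      # b_i(d, e) -> bool
        self.action = action    # a_i(d, e) -> action label
        self.effect = effect    # g_i(d, e) -> next state vector

def structural_transitions(summands, d):
    """Yield (i, alpha, d') for every enabled instance of every summand."""
    for i, sm in enumerate(summands):
        for e in product(*sm.domains):
            if sm.guard(d, e):
                yield (i, sm.action(d, e), sm.effect(d, e))

# toy summand: sum_{e in {0,1}} [e <= d0] => step(e); d := (d0 + e)
toy = Summand([range(2)],
              lambda d, e: e[0] <= d[0],
              lambda d, e: "step(%d)" % e[0],
              lambda d, e: (d[0] + e[0],))
print(list(structural_transitions([toy], (1,))))
```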
### 3.2 Generation of Confluence Formulae
Owing to the format of linear processes, commutation formulae can be generated. In order to facilitate automated theorem proving, we try to avoid quantifiers. The generated formulae will be Boolean expressions over the user defined abstract data types, with implicit universal quantification.
To get the formulae in this form, we only consider a special case, which occurs frequently in practice. So in fact we detect a stronger approximation of confluence. Consider two divergent steps of summands \(i\) and \(j\):
\[
\sum_{e_i} [b_i(d, e_i)] \Rightarrow a_i(d, e_i); d := g_i(d, e_i)
\]
\[
\sum_{e_j} [b_j(d, e_j)] \Rightarrow \tau; d := g_j(d, e_j)
\]
The first simplification is that we only consider a closing of this diagram in \textit{one step} (strong confluence). Furthermore, we assume that the diagram is closed by using the same summands \(j\) and \(i\) again, and moreover we only try the same instance of summand \(i\) and \(j\). This situation is depicted in the following picture (we left out the enabling conditions).
\[
\begin{array}{ccc}
d & \xrightarrow{\ \tau\ } & g_j(d, e_j) \\
{\scriptstyle a_i(d, e_i)} \downarrow & & \downarrow {\scriptstyle a_i(g_j(d, e_j), e_i)} \\
g_i(d, e_i) & \xrightarrow{\ \tau\ } & g_j(g_i(d, e_i), e_j) \,=\, g_i(g_j(d, e_j), e_i)
\end{array}
\]
Commutation of $\tau$-summand $j$ with summand $i$ can be expressed by the following Boolean expression over the algebraic data theory:
$$b_i(d, e_i) \land b_j(d, e_j) \rightarrow \left( \begin{array}{l}
b_i(g_j(d, e_j), e_i) \\
\;\land\; b_j(g_i(d, e_i), e_j) \\
\;\land\; a_i(d, e_i) = a_i(g_j(d, e_j), e_i) \\
\;\land\; g_i(g_j(d, e_j), e_i) = g_j(g_i(d, e_i), e_j)
\end{array} \right)$$
If $\tau$-summand $j$ commutes with all summands $i$ (including $j$ itself), it can be safely marked as a confluent $\tau$-summand. In fact, in case also $a_i(d, e_i) = \tau$, we additionally have to allow that $g_i(d, e_i) = g_j(d, e_j)$. As strong confluence implies confluence, the transitions generated by $\tau$-summand $j$ will form a confluent subset in the sense of Definition 2. Because the union of two confluent subsets is again a confluent subset, it is safe to label multiple summands in the same linear process.
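Generating these proof obligations is purely mechanical once the summands are available in symbolic form. A sketch that renders the obligation as text (the term names `b`, `a`, `g` are simply naming conventions here, not μCRL syntax):

```python
def commutation_formula(i, j):
    """Strong-confluence obligation: tau-summand j commutes with summand i."""
    gi, gj = f"g{i}(d, e{i})", f"g{j}(d, e{j})"
    conclusion = " and ".join([
        f"b{i}({gj}, e{i})",
        f"b{j}({gi}, e{j})",
        f"a{i}(d, e{i}) = a{i}({gj}, e{i})",
        f"g{i}({gj}, e{i}) = g{j}({gi}, e{j})",
    ])
    return f"(b{i}(d, e{i}) and b{j}(d, e{j})) -> ({conclusion})"

print(commutation_formula(1, 2))
```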
### 3.3 Automated Theorem Prover
In order to prove formulae of the above kind, we have built a theorem prover for Boolean combinations over a user-defined algebraic data type. In [16] we show how an extension of binary decision diagrams (BDD) enhanced with term rewriting can be applied to these formulae. This is along the lines of the BDDs extended with equality developed in [7]. Given a formula, the prover returns an equivalent but ordered BDD. If this BDD equals TRUE, the pair $(i, j)$ commutes.
If the resulting BDD does not equal TRUE, then the formula could not be proved, and $\tau$-summand $j$ cannot be marked as confluent. Note that this may be because it is not confluent, or because the prover is inherently incomplete (simple equalities over an abstract data type are already undecidable, let alone arbitrary Boolean expressions). In this case, the prover provides some diagnostics, on the basis of which user interaction is possible.
The user can add equations to the data specification, or provide an invariant. It is possible to add new equations, provided they hold in the initial model. Proving correctness of the new equations requires induction, which is beyond our theorem prover. The new equations could be proved either manually, or using a separate interactive theorem prover.
In some cases, the formula is not valid in the initial model, but it would hold for reachable states $d$. In this case, the user may supply an invariant $Inv$ and the confluence formulae are proved under the assumption $Inv(d)$. Of course such an invariant must be checked separately. This can be done with the same prover, by checking that $Inv$ holds initially, and is preserved by all summands $i$:
$$b_i(d, e_i) \land Inv(d) \rightarrow Inv(g_i(d, e_i))$$
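The same mechanical generation applies to invariants; a sketch in the style of the previous one:

```python
def invariant_obligations(num_summands):
    """Inv must hold initially and be preserved by every summand."""
    yield "Inv(d0)"  # d0: the initial state vector
    for i in range(1, num_summands + 1):
        yield f"(b{i}(d, e{i}) and Inv(d)) -> Inv(g{i}(d, e{i}))"
```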
### 4 Optimization by Symbolic Prioritization
Combining the previous sections, we can now mark certain transitions as confluent by using an automated theorem prover, and subsequently generate a reduced state space by the on-the-fly reduction algorithm. However, the confluence marks can also be used to apply an optimization to the specification, i.e. on the symbolic level. Let summand $j$ be a deterministic (i.e. without local variables) transition which is marked as confluent:
$$[b_j(d)] \Rightarrow \tau_c; d := g_j(d)$$
Now because summand $j$ is confluent, it may be given priority over other summands, as long as loops are avoided. To avoid loops, we will only give summand $j$ priority just after a non-marked (visible or invisible) transition. So let another summand $i$ be given, which is not marked as confluent:
$$\sum_{e_i} [b_i(d, e_i)] \Rightarrow a_i(d, e_i); d := g_i(d, e_i)$$
Now, if we can prove that $j$ is always enabled after the $i$-transition, we can combine the $i$ and $j$ summands into one step. Enabledness of $j$ after $i$ can be represented by the formula $b_i(d, e_i) \rightarrow b_j(g_i(d, e_i))$. This formula is sent to the prover, and upon success, we modify summand $i$ to become:
$$\sum_{e_i} [b_i(d, e_i)] \Rightarrow a_i(d, e_i); d := g_j(g_i(d, e_i))$$
We call this transformation symbolic prioritization. One advantage of this symbolic optimization is that the intermediate state $g_i(d, e_i)$ need not be explored during state space generation. Another advantage is that this optimization often gives rise to a cascade of other possible optimizations, such as elimination of unused variables and removal of dead code, based on reachability analysis. In many cases the confluent $j$-summand becomes unreachable (in the sense that in all reachable states $d$, $b_j(d)$ is false), in which case it can be removed from the specification. This in turn can give rise to elimination of state variables that were only used in summand $j$.
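In terms of the summand representation sketched in Section 3.1, the transformation itself is a one-line rewrite of the effect function; a sketch (assuming the prover has already discharged the enabledness obligation, and that $g_j$ takes no local variables):

```python
def prioritize(summand_i, effect_j):
    """Compose the confluent, deterministic effect g_j after g_i so the
    intermediate state g_i(d, e_i) is never generated."""
    old_effect = summand_i.effect
    summand_i.effect = lambda d, e: effect_j(old_effect(d, e))
    return summand_i
```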
A very interesting effect is that we can now possibly mark more transitions as confluent. Recall that we only mark “strong confluence”, where a diverging pair is closed in one step. After symbolic prioritization, we might detect confluence also when the diverging pair can be closed in two steps as illustrated in Figure 2. Of course, this process can be iterated.
In the system on the left, automatic confluence marking will not detect the confluence of τ-summand (2), because the divergence with a-summand (1) cannot be closed in one step. However, typically τ-summand (3) will be detected to be confluent, because no other summands are enabled in its source state. The marking of summand (3) is denoted by the $\tau_c$-label. Note that after a-summand (1) it is always possible to perform the marked $\tau_c$-summand (3). Hence symbolic prioritization can be applied, and we obtain the system on the right. In the new situation summand (2) becomes strongly confluent, so it will be detected by a second application of automatic confluence detection. Due to the confluence of summand (2), the state space generation algorithm will now visit a single path through this graph.
5 Applications
We applied our method to several distributed algorithms, protocols and industrial case studies. A number of experiments are described in detail in [16]. Figure 3 shows the reduction obtained by confluence detection and on-the-fly reduction. For each system, we list the size of the original and the reduced state space, and also – in order to allow fair comparisons – the total costs including the number of nodes and transitions that are visited during the TSCC-computation.
The first rows refer to the alternating bit protocol, the bounded retransmission protocol, and a mutual exclusion algorithm. Furthermore, DKR\((n)\) refers to the DKR leader election protocol with \(n\) parties; Firewire\((n)\) to the Firewire Tree Identify protocol for \(n\) components from the IEEE 1394 bus standard; and the Lift entries refer to a case study with distributed lifts [6], used for lifting car trucks by several lift legs.
As a conclusion, we note that the contribution of confluence reduction to toy examples is rather modest. However, on the industrial case studies (Firewire, Lift) the reduction is notable. On the DKR protocol the reduction is even dramatic (the number of visited states goes from exponential down to $n \log n$). We now discuss two experiments in more detail.
5.1 Leader election protocol
In the DKR (Dolev-Klawe-Rodeh) leader election protocol [4], $n$ parties are connected in a ring by $n$ channels, modeled as unbounded queues. These parties exchange messages, which are not visible for the outside world. After a finite number of messages, the party with the highest identification performs the action “I’m the leader”.
This algorithm allows for a remarkable state space reduction, viz. from exponential to linear in the number of parties. The theorem prover detects that all $\tau$-summands are confluent, even when $n$ is unknown. Given a concrete number of parties, the generation algorithm finds a completely deterministic path representing the whole state space. So the state space is immediately reduced to a single transition, labeled “I’m the leader”. We remark that also the traditional partial order reduction can be applied to this example.
5.2 Shared data space systems
We also studied distributed systems based on shared data space architectures, such as Splice [3]. A Splice system consists of a number of application processes, that coordinate through agents, which are coupled via some network. The agents locally maintain multi-sets of data items (the distributed data space), into which applications can write new items, and from which applications can read items. The agents distribute their items by asynchronously sending messages to each other over the network.
Figure 4 depicts a simple Splice system, with a producer and a consumer. In between, several workers independently take items from their local storage, perform some computation, and write results back in the space. We want to prove transparency of the number of workers. See [12] for the full case study, which heavily relies on using our confluence reduction.
| System | original space (# states) | reduced state space (# states) |
|---|---|---|
| Splice(1,2) | 85362 | 15 |
| Splice(2,2) | 18140058 | 69 |
| Splice(3,2) | ?? | 297 |
| Splice(1,4) | ?? | 83 |
| Splice(2,4) | ?? | 1661 |
| Splice(3,4) | ?? | 31001 |
| Splice(1,6) | ?? | 317 |
| Splice(2,6) | ?? | 14387 |
| Splice(3,6) | ?? | ?? |
**Figure 5.** Splice benchmarks with symbolic prioritization
This communication mechanism is asynchronous and leads to much non-determinism: messages from one agent are sent to the others in any order. Consequently, the agents receive messages in various orders, even when they originate from the same agent. By proving confluence, it is detected that all these different orders are equivalent. In fact, the on-the-fly reduction algorithm computes a reduced state space, as if there were only one global multi-set of data items.
Another reduction is possible within the workers. They read any message from their agent, and write some computed result back. Note that such transactions cannot be represented by super-deterministic transitions, because a worker can start with any message in the current set of its agent. Therefore, traditional partial-order reduction methods, which are based on super-determinism, fail on this example. However, several such transactions commute, basically because \((A \cup \{a\}) \cup \{b\} = (A \cup \{b\}) \cup \{a\}\). Using confluence reduction, only a fixed transaction order is explored.
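This commutation argument is elementary to check; for instance, with Python's `collections.Counter` as the multi-set:

```python
from collections import Counter

A = Counter({"x": 2})
left = A + Counter(["a"]) + Counter(["b"])
right = A + Counter(["b"]) + Counter(["a"])
assert left == right  # (A u {a}) u {b} == (A u {b}) u {a}
```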
For this example, we also needed symbolic prioritization: although the transactions commute, the corresponding diagrams can only be closed in multiple steps. This corresponds to the situation in Figure 2.
In Figure 5 we applied our reductions to the Splice(m,n) benchmarks having \(m\) workers and processing \(n\) input values. For each configuration we measured the number of generated states, as well as the total number of visited states, including those used in the TSCC search. The size of the original state space could only be computed in a few cases, and even there we used a parallel machine. After one application of confluence detection and on-the-fly reduction, the state space could be generated in most cases, but this approach does not scale well. After symbolic prioritization more transitions could be proven confluent, and running on-the-fly reduction again results in quite small state spaces.
As a final remark, we note that the size of the reduced space doesn’t depend on the number of workers anymore. So this example has been solved nearly symbolically in the number of workers.
References
syplan — Our Solution for Integrated, Continuous, Driver-based Planning
Contents
1. About syconomic
2. Our Approach and Solution
   2.1. Main Components of Our Solution
   2.2. Analytica — the Powerful Modeling Platform
   2.3. syFMF — Our Framework for Integrated Planning
   2.4. Cubeplan — Enterprise-wide Driver-based Planning
3. Applications and Use Cases
   3.1. Strategic Planning
   3.2. Enterprise Risk Management
   3.3. Integrated, Driver-based Planning
4. Typical Implementation Process
   4.1. Specification of Model Design
   4.2. Development of Pilot Model
   4.3. Implementation and Training
   4.4. Ongoing Support
5. Software Licenses and Hardware Requirements
   5.1. Software Licensing
   5.2. Application Servers
   5.3. Database Server
   5.4. End-User Computers
1. **About syconomic**
syconomic is a boutique consulting firm specializing in building planning and decision support models. We help our clients translate their domain-specific knowledge into quantitative models and bespoke financial tools using Analytica, a visual modeling and simulation platform. Since 2015 we have been the official European reseller and solution provider for Analytica and Cubeplan (http://cubeplan.com), the enterprise version of Analytica.
**Torsten Röhner**, founder and Managing Director of syconomic, has more than 15 years of experience in financial modeling, corporate planning, probabilistic risk modeling and analysis, and corporate decision-making. He has worked for clients in various industries, e.g. ALBA, Altana Pharma (now Takeda), Bayer Material Science (now Covestro), FMO Dutch Development Bank, Knorr-Bremse or Volkswagen. Torsten is a thought leader in enterprise-wide risk management and driver-based business planning. Torsten has implemented software solutions in treasury management, energy and commodity trading, portfolio management, and financial risk management. You can check out his profile on LinkedIn: https://www.linkedin.com/in/torstenroehner/
2. **Our Approach and Solution**
The core of our **syplan** solution is Analytica, a visual modeling platform for building and analyzing complex financial models and tools. It is designed to overcome many of the shortcomings of Excel and other spreadsheet software. Analytica has a wide and mature range of modeling and statistical analysis capabilities and allows for rapid, agile model development.
You can check out Analytica’s features and capabilities on our website [beyondexcelmodeling.com](http://beyondexcelmodeling.com).
Cubeplan, the enterprise version of Analytica, offers, on top of this, browser-based access for multiple users and individual, interactive dashboards.
Cubeplan has been recognized as the **“Best New Vendor of the Year 2016”** by North American BPM Partners, the leading independent authority on business performance management.
After significant success of Cubeplan in Latin America ([http://cubeplan.com/cubeplan/case-studies](http://cubeplan.com/cubeplan/case-studies)) and entering the North American market in 2015, syconomic is now introducing Cubeplan as an exclusive partner in Europe.
Since partnering with Cubeplan, syconomic has, in 2017, developed and implemented the first Cubeplan planning solution in Europe for the Dutch Development Bank (FMO) in The Hague, Netherlands.
2.1. Main Components of Our Solution
Our solution for an integrated, continuous, driver-based planning consists of these three components:
- **Analytica** — for visual model building with multi-dimensional data and powerful risk and scenario analysis
- **syconomic Financial Modeling Framework (syFMF)** — for generating robust, integrated 3-way financial projections
- **Cubeplan** — for deploying planning models across the enterprise and providing individual, interactive reports and dashboards
2.2. Analytica — the Powerful Modeling Platform
We use Analytica as the modeling platform for driver-based planning models because of the following benefits.
"Easiest way we have ever seen to create a model." — BPM Partners, the leading independent authority on business performance management
Transparency
The credibility of financial models depends on their transparency: how clearly users and executives can understand their structure, assumptions, and calculations. Analytica provides visual influence diagrams to represent and navigate the business logic encoded in the models. This graphical representation of the variables and their influences provides an intuitive way to understand and communicate key assumptions and model structure. Complex models are constructed as a hierarchy of modules, each displayed as a node in its parent diagram.
Plain Language Formulation
With Analytica, power users are able to change the logic of the model without knowledge of specific programming languages, i.e. formulas are written much as in spreadsheets but use individual, meaningful variable names instead of cryptic cell references. Analytica comes with extensive, comprehensive function libraries (including standard financial functions), and offers the opportunity to develop individual, user-defined functions and libraries.
Integrated Documentation
The graphical representation of the model in the form of influence diagrams provides an automatic high-level documentation of the model logic and information flow within the tool. Additionally, each object in the model includes units, description, inputs, outputs, and more. On top of this, Cubeplan provides an integrated, interactive Knowledge Base to further describe business processes with visual and text documentation.
Multidimensional Data and Calculations
Analytica uses multidimensional data cubes for handling, calculating and reporting data, powered by proprietary technology called Intelligent Arrays™. Formulas generalize automatically when existing dimensions are changed or new dimensions added. This feature allows for creating compact formulas which favors development of large-scale models.
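The effect is loosely comparable to array broadcasting in numerical languages. A rough NumPy analogy (not Analytica syntax; Intelligent Arrays additionally keep named, self-documenting dimensions):

```python
import numpy as np

price = np.array([10.0, 12.0])               # dimension: product
volume = np.array([[100, 200], [150, 250]])  # dimensions: region x product

# the same formula keeps working after the region dimension is introduced
revenue = price * volume
print(revenue)  # revenue per region and product
```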
Managing Scenarios, Risk and Uncertainty
A key part of financial models is dealing with a wide variety of risks and uncertainties. Analytica was designed from the ground up to support sensitivity analysis to identify the relative importance of alternative assumptions and policies, scenarios to explore the medium and longer-range implications of these assumptions, and risk analysis. It provides state-of-the-art methods for rapid calculation of multiple samples to represent volatility and uncertainty about market rates and other risk factors. It supports Monte Carlo simulation and related methods to estimate Value at Risk and other insightful measures of portfolio risk.
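As a generic illustration of this kind of analysis (plain Python/NumPy here, not Analytica): a driver-based margin under two uncertain drivers, evaluated by Monte Carlo simulation, with all figures invented for the example:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000
volume = rng.normal(1_000, 100, n)           # uncertain sales volume
price = rng.triangular(9.0, 10.0, 12.0, n)   # uncertain unit price
fixed_costs = 4_000

margin = volume * price - fixed_costs
print("expected margin :", round(margin.mean()))
print("5th percentile  :", round(np.percentile(margin, 5)))
```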
2.3. syFMF — Our Framework for Integrated Planning
Our syconomic Financial Modeling Framework consists of a pre-configured financial model which maintains the relationships between the three financial statements (P&L, balance sheet, and cash flow statement). It automatically calculates income taxes, dividend distribution, and interest on liquid funds. A standardized logic allows populating the statements with data generated by the operational, driver-based model or from other sources (e.g. imported actuals) and ensures that the statements are always consistent. (A toy sketch of this linkage follows the feature list below.)
“The idea behind the syconomic Financial Modeling Framework is to separate the ‘financial model’ from the ‘operational model’ and link the two via a simple, standardized connector.”
—Torsten Röhner
Managing Partner of syconomic
Features of syFMF are:
- Pre-configured integrated 3-way financial statements
- Standard connector for linking any business logic
- Flexible planning horizon
- Continuous or rolling forecast
- Blending actual and forecast numbers
- Monthly, quarterly, or yearly aggregation
- Define and compare strategies and scenarios
- Sensitivity “Tornado” analysis
- Built-in functions for loans, investments, projects, etc.
- Flexible period deviation analysis
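As a toy illustration of what such a linkage means (a deliberately minimal sketch of the concept, not syFMF itself; the tax rate and figures are invented):

```python
def three_way(revenue, expenses, opening_cash, opening_equity, tax_rate=0.25):
    # P&L
    pretax = revenue - expenses
    tax = max(pretax, 0) * tax_rate
    net_income = pretax - tax
    # cash flow statement (everything is cash-effective in this toy model)
    cash_flow = net_income
    closing_cash = opening_cash + cash_flow
    # balance sheet: retained earnings keep the statements consistent
    closing_equity = opening_equity + net_income
    assert abs((closing_cash - opening_cash) - cash_flow) < 1e-9
    return net_income, closing_cash, closing_equity

print(three_way(1000, 700, opening_cash=500, opening_equity=500))
```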
2.4. Cubeplan — Enterprise-wide Driver-based Planning
Cubeplan is the enterprise version of Analytica. It empowers collaborative planning throughout an organization and provides a unique visual environment with influence diagrams and dashboards to provide a comprehensive model of the entire business across various business units, legal entities, etc.
Web-based Collaboration
Cubeplan makes models and tools available via secure web browsers to enable collaboration with teams of executives, modelers, and analysts. It provides role-based access and security, allowing each user to view results and change assumptions for selected modules, depending on what type and level of access has been authorized for them.
Interactive Dashboards
Cubeplan makes it easy to create tailor-made dashboards for visualizing in a single screen key inputs and outputs and for performing interactive analysis by changing inputs and immediately assessing their effects.
Data Integration
Cubeplan offers seamless integration with existing information and database systems. You can integrate it with spreadsheets, databases, and other data sources via standard APIs. Cubeplan is often integrated with business intelligence and ERP systems for importing data and exchanging projections. CubeCalc is an Excel add-in that allows interacting with Cubeplan directly from Excel. With CubeCalc you can easily pull information from model variables and push data back, send data to a particular model, or even send an entire Excel file.
Calculation Power and Performance
Cubeplan uses in-memory computation for model calculations, avoiding extensive access to databases on hard disks or other slower memory devices. This allows rapid recalculation even in large models including multiple scenarios and risk analysis.
3. Applications and Use Cases
Our solution allows for many different applications and use case scenarios. Technically, planning models can either be deployed and shared throughout an organization using Cubeplan, or just used as a desktop application with standalone Analytica.
The following examples are typical use cases of our solution.
3.1. Strategic Planning
You can use our solution for strategic planning and scenario analysis. This does not necessarily require modeling the entire business logic of your organization. Instead, you can import an existing planning as a base case and model different strategic options or initiatives (e.g. M&A transactions, organizational initiatives, product launches, financing transactions, etc.). These elements can then be combined into scenarios and compared against the base case and each other, showing the impact on any financial figure, ratio, KPI, etc.
3.2. Enterprise Risk Management
As with strategic planning, you can use our solution for enterprise-wide risk analysis and management. In this case, you model each individual risk and its effects on P&L, balance sheet, and cash flow statement. Analytica's powerful built-in Monte Carlo simulation allows modeling all sorts of uncertainties, even very complex market price simulations with correlations, for example. Combining the base case with the modeled risks, you can analyze and display the aggregated risk for your organization. In the same way you can model potential risk mitigation measures and combine them into risk management strategies to choose the most effective one.
3.3. Integrated, Driver-based Planning
The most advanced way to use our solution is, of course, to model the entire business logic of your organization. With such a model, you can analyze the impact and relative importance of each driver of your business.
Developing such a driver model can be done gradually, meaning you can start with some parts of your business and expand the model step by step as you need to improve the quality of your planning. An integrated business planning like this can be combined with the other elements of the Strategic Planning or Risk Management use cases as well.
Since Analytica and syFMF are completely flexible and modular, it is possible to implement any level of model complexity, and to expand and improve models at any time in the future if necessary.
4. **Typical Implementation Process**
Unlike other ERP or standard planning software, Cubeplan allows and requires developing individual driver-based planning models. It doesn’t force organizations to use a pre-defined planning logic developed by the software vendor, but enables them to capture their individual, specific business logic.
Developing this individual business logic cannot practically be done by the customer alone without reasonable training and experience. We therefore use the following typical implementation process, which ensures the fastest, most effective way to develop a working planning solution.
The typical implementation process consists of four phases with these main activities, outcomes, and time effort estimates.
| | Specification of Model Design | Development of Pilot Model | Implementation and Training | Ongoing Support |
|---|---|---|---|---|
| Main activities | 1-/2-day workshop | Customizing syFMF and developing individual model logic | Software implementation, training, model modification and enhancement | Web-based or telephone support |
| Outcomes | Define scope and goal | Fully functional planning solution | Productive solution, trained users | Solving issues using the model, adapt and change model |
| Effort estimate | 3 to 4 person-days | 5 to 15 person-days | 5 to 35 person-days | Effort-based per hour |
4.1. **Specification of Model Design**
A project typically starts with an initial 1- or 2-day in-person workshop (on-site) where we work through strategy, business model and financial model questions, constraints, requirements, practical challenges, etc.
In preparation for this workshop, the customer provides us with information about the business model, operational process flows, and any other supporting documentation which might be helpful to better understand the requirements.
During the workshop we specify and refine the objectives and requirements for the desired planning solution.
As opposed to the traditional "waterfall" approach, where the end product is carefully thought through, designed, and documented in great detail, we try to identify and agree on the key objectives, functionalities, and deliverables of the planning system. In our experience, the concrete requirements and possible solutions usually unfold in parallel with the model's development and should not be anticipated at this early stage, when the least information is available.
**Outcome and deliverables**
After the workshop we have a shared understanding of the scope and requirements, and agreement on the design of the Pilot version.
We create a brief documentation for further reference and guidance of the subsequent model development. A key part of this documentation is the "Application Manifesto." This is a short paragraph outlining in a very focused and concise way the overall goal of the project. After go-live, the customer should be able to read the manifesto and agree that the model built aligns completely with it. It also serves as a constant reminder during the project to stay on course.
**Effort estimate**
The time required for this task is 3 to 4 person-days on our side: one or two days for conducting the workshop, and one day each for preparation and for documenting the outcome.
4.2. Development of Pilot Model
Based on the specification of the model design, we build a Pilot version of the planning solution. This Pilot version is a fully functional model developed in Analytica and deployed in Cubeplan which fulfills the initial set of challenges and goals. We usually set up Cubeplan on a Cubeplan test server for the development of the Pilot version, and use test or demo data for privacy reasons.
Part of the Pilot model development is customizing the pre-configured financial statements of syFMF according to the reporting requirements and the organizational structure of the customer's organization. syFMF is an Analytica add-in we license to the customer free of charge; we only charge for the days required for customization. (The individual operational business model is delivered as open source for the customer to adapt and scale as their own.)
Analytica allows for rapid model building with short, iterative development cycles. We therefore use an agile modeling approach with weekly or bi-weekly development "sprints." A sprint can be seen as a small, self-contained project, covering design, build, integrate, test and demo activities. The outcome is always a new functional increment of the solution and can be reviewed via web meeting or on-site.
After a series of sprints, we present and test the Pilot version on-site together with stakeholders of the customer.
Outcome and deliverables
The main deliverable of this phase is a fully functional Pilot version of the planning solution. This version will not comprise each and every feature of a potential final solution but will provide the customer with the opportunity to better understand the possibilities and options of Analytica and Cubeplan.
The Pilot model is the basis for further adaption and enhancement of the planning solution, which customer can do either internally or using our expertise. Based on the experiences and lessons learned from developing the Pilot model, we can also better determine other requirements and additional features of the solution.
Effort estimate
The time required to develop the Pilot model is very much dependent on the scale and complexity of the model logic. From our experience, the range of person-days is between 5 and 15. After the specification workshop we are able to narrow down the effort and provide a fixed price for this phase if desired.
4.3. Implementation and Training
After initial testing and sign off by customer, we install the planning model either on customer's own servers or provide it as a hosted SaaS (software as a service) solution.
At this point, we provide intensive user training. The training usually takes 3 to 5 days and provides thorough insight into the functioning of Analytica, the syFMF framework, and Cubeplan. The specific content and focus of the training is typically tailored for two types of users—regular and power users. We can rely on a set of standard building blocks which we have used and tested in various training formats.
Based on the pilot model, the customer decides how to continue to use and expand the tool—either on their own or using our help. We are happy to continue supporting customers with our analytical and modeling expertise and help implementing additional requirements and ideas faster and more effectively.
In this case, we define the functionalities and features of the final planning solution together with the customer, based on the experiences and lessons learned from developing the Pilot model.
Outcome and deliverables
After this phase, customer’s staff will be well trained in using Cubeplan for creating reports and interactive dashboards as well as navigating and understanding the model logic. Power users will be able to modify, adapt, and extend the operational driver model or build their own models.
The main outcome of this phase is a productive planning solution ready for going live. This solution should fulfill the overall goal of the project as defined in the “Application Manifesto” (see chapter 4.1).
The documentation of the planning model should be integrated in the model itself using Cubeplan’s built-in documentation functions and Knowledge Base. In case we are developing the final model for the customer we will create comprehensive documentation of the planning model.
Effort estimate
The time required for this phase comprises at least the days of the user training incl. preparation. We calculate approximately a half day of preparation for each training day. For a 4-day training session this results in 6 person-days.
The time required to develop the final model is, as for the Pilot version, very much dependent on the scale and complexity of the model logic. Therefore, we can only come up with a relatively wide range of 10 to 30 person-days. To reduce the uncertainty and provide customers a higher level of confidence and protection, we usually grant customers a unilateral, "no quibble" veto over the hours we charge at any stage of the project. This way you decide what you think is reasonable, and pay only that.
4.4. Ongoing Support
After going live with the planning solution we provide support for answering questions about model use or adapting the model. We are available via telephone, email, web meeting, or on-site if required.
Lumina, the Cubeplan and Analytica software vendor, will provide technical support, patches, and upgrades for Cubeplan according to a separate “Cubeplan Service Level Agreement.”
Effort estimate
We charge customer for our ongoing support effort-based per hour.
The cost for the “Cubeplan Service Level Agreement” is included in the yearly license fee for the software.
5. **Software Licenses and Hardware Requirements**
Cubeplan can be provided as a SaaS (software as a service) solution or installed on the customer's own servers. A basic Cubeplan license allows installing Cubeplan on two server computers, one for development and testing and one for production use. The license includes up to 16 users from within the organization. The computer hardware and remaining software described below must be acquired separately by the customer if necessary.
Typically, customers need at least one Analytica license to enable power users to modify and extend the model, e.g. to change or extend the model logic or add new result formats, because Analytica is much more convenient for heavy modeling than Cubeplan alone.
5.1. **Software Licensing**
The Cubeplan license includes the installation of the software on the customer's server(s) by dedicated Cubeplan IT experts at no additional cost. The installation and initial setup can be done remotely and takes about two to three hours.
The proposed Analytica license is only necessary if power users are expected to modify and extend the planning model, e.g. to change or extend the model logic or add new result formats.
The following license prices are indications based on current pricing by the software vendor and might be subject to change. Both Cubeplan and Analytica licenses are subscription-based with yearly renewal.
| Software Licenses | First year (USD) | Renewal (USD) |
|---|---|---|
| Cubeplan — including installation, support and maintenance, and upgrades | 25,000 | 25,000 |
| Analytica Enterprise 5.0 (per user) — including technical support and upgrades | 2,795 | 1,395 |
| **Total** | **27,795** | **26,395** |
5.2. Application Servers
Application servers include one server for development and testing, and one server for production. Cubeplan can also run on virtual servers with a similar configuration; however, a shared environment may affect performance depending on the other applications on the server.
Hardware sizing will be directly related to the amount of data, number of users, and complexity of the model. A standard configuration, based on 16 users can be defined as follows:
Hardware
- Processor: Intel Quad Core 3.3 GHz E3 or faster
- Memory: RAM 64 GB
- Hard Disk: 100 GB SSD or SATA
Software
- Windows Server 2008 64 bit or later
- .NET Framework 4 or later
- Microsoft Access Database Engine
- IIS (Internet Information Service) including components ASP.NET and ASP
- Web browser: IE (11 or later), Chrome (v49 or later), Firefox (v45 or later)
- Administrator user for installation
5.3. Database Server
Hardware requirements will vary with the version of database and size of information. It is also acceptable to run the database on one of the application servers. This is a suggested configuration for an SQL Server 2014:
Hardware
- Processor: x64 2.0 GHz or faster
- Memory: 6 GB RAM
- Hard Disk: 20 GB
Software
- Windows Server 2012 or later
- .NET Framework 4 or later
- SQL Server 2012 or later standard version (minimum) or Enterprise (recommended)
- Administrator user for installation

5.4. End-User Computers

End users may access Cubeplan from most kinds of computers, including tablet computers, using a web browser with Internet access to the Cubeplan servers.
Hardware
- Processor: Core 2 Duo or faster
- Memory: 2GB RAM
- Hard Disk: 1GB
- Network connectivity 10/100 Mbps or faster
Software
- Web browser: IE (11 or later), Chrome (v49 or later), Firefox (v45 or later)
- MS Office 2007 or later*
- .NET Framework 4*
- Visual Studio 2010 Tools for Office Runtime*
*) only applicable if CubeCalc is included (see Data Integration)
Mobile Communications Client Server System for Stock Exchange e-Services Access
E. Pop, and M. Barbos
Abstract—Using mobile Internet access technologies and e-services, various economic agents can efficiently offer their products or services to a large number of clients. With the support of mobile communications networks, clients can access e-services anywhere and at any time. This is a basis for establishing a convergence of the technological and financial interests of mobile operators, software developers, mobile terminal producers and e-content providers. In this paper, a client server system for Stock Exchange e-services access, using 3G and EDGE mobile terminals, is presented.
Keywords—Mobile communications, e-services access, stock exchange.
I. INTRODUCTION
DATA transmission through mobile networks has become very attractive for many economic agents in the business environment as a means of offering electronic services to their clients. The client server system presented in this paper offers mobile users 3G, EDGE, GPRS and WLAN access to electronic services through mobile data terminals such as Smartphones and PDAs. As a representative example for the business environment, a stock market electronic service is installed on the server. The information was taken from the Romanian BSE - Bucharest Stock Exchange. The client server functionality can be extended to various informational services: e-commerce, mobile banking, location based services, e-citizen, etc., either directly on site or by URL links to other addresses.
The main components of the system are a WEB service, a server database, and mobile applications. The users can access and consume the e-services by means of the mobile Internet. The 3G mobile communications networks give the users the advantage of mobility and significant bandwidth. The server software can be hosted at the digital content provider's premises.
Data transfer is achieved using TCP/IP protocols through 3G, EDGE, GPRS and WLAN channels. Suitable graphical user interfaces are provided to the clients.
The server site stores the electronic content of the service and serves multiple-access Smartphone and PDA terminals for data transmission.
The system architecture can be distributed, including various sites for different e-services. The system user is also provided with downloadable mobile applications, ready to be installed on mobile terminals as executable files.
There are possibilities to integrate API resources into the system, such as e-Parlay type Network Resource Gateway modules.
II. SYSTEM ARCHITECTURE
A. Hardware Architecture
The client server hardware architecture is adapted for data transmissions on various mobile channels. The platform server consists of an IBM PC compatible computer, Core2Quad, 2.8 GHz, with Windows XP Professional operating system and Internet connection.
The mobile data terminals at the client side are:
- SE P990i Smartphone, with Symbian v. 9.1. operating system;
- QTEK 2020i, or HTC TyTN, Personal Digital Assistant, with Windows Mobile 2003, 2005, operating system;
- Laptop, Core2Duo 2 GHz, Ethernet interface, with Windows XP Professional or Vista OS;
- 3G, EDGE, GPRS, WLAN, or CDMA data transmission modems for mobile connection of the laptop to the Internet.
B. The Server Software Structure and Functions
Several server software modules are developed for service access using a laptop, Smartphone or PDA:
- the electronic service site;
- the stock market exchange web service;
- server database for e-service information storing;
- server application, installed in the operating system;
- client applications, suitable for mobile data terminals;
- server software applications, for mobile connections requests serving, and TCP/IP data transmission.
The client software applications are available on the server for downloading and installing on the user's mobile terminals: Smartphones and PDAs.
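The request-serving module listed above is not detailed further in the paper. Purely as an illustration, a minimal C# sketch of such a TCP/IP listener could take the following form (the port number, the line-based message format and the QueryDatabase helper are assumptions, not elements of the actual system):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class MobileAccessServer
{
    static void Main()
    {
        // Listen on an arbitrary port for incoming mobile client connections.
        TcpListener listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start();
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (NetworkStream stream = client.GetStream())
            using (StreamReader reader = new StreamReader(stream))
            using (StreamWriter writer = new StreamWriter(stream) { AutoFlush = true })
            {
                // Read one query line sent by the mobile terminal.
                string query = reader.ReadLine();
                // Hypothetical helper standing in for the real MySQL lookup.
                string answer = QueryDatabase(query);
                writer.WriteLine(answer);
            }
        }
    }

    static string QueryDatabase(string query)
    {
        // Placeholder: the real module would query the MySQL e-content database.
        return "OK:" + query;
    }
}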
C. The Mobile Client Software Structure and Functions
The software structure of the mobile client for e-service access using laptops, Smartphones and PDAs consists of:
- interfaces for 3G, EDGE, GPRS, WLAN mobile networks attaching, [4];
- data transmission software modules, using TCP/IP;
- database query transmission to the server;
- the IP address of the server computer is specified in the code, or introduced in the suitable text field;
- the requested information is received from the server and displayed on the terminal.
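The paper likewise gives no code for the client side. As a rough counterpart to the server sketch above, and under the same assumptions (port and line-based message format), a C# client could look as follows; the IP address below is a documentation placeholder, while in the real application it comes from the code or from the text field:

using System;
using System.IO;
using System.Net.Sockets;

class MobileAccessClient
{
    static string SendQuery(string serverIp, string query)
    {
        using (TcpClient client = new TcpClient(serverIp, 9000))
        using (NetworkStream stream = client.GetStream())
        using (StreamWriter writer = new StreamWriter(stream) { AutoFlush = true })
        using (StreamReader reader = new StreamReader(stream))
        {
            writer.WriteLine(query);   // transmit the database query
            return reader.ReadLine();  // receive the requested information
        }
    }

    static void Main()
    {
        Console.WriteLine(SendQuery("192.0.2.1", "SELECT symbol, price FROM quotes"));
    }
}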
III. TECHNOLOGY DESCRIPTION
A. Server Application Development
In order to implement and publish the e-service site on Internet, the following tools are used:
- HTTP application server IIS 5.1;
- DNS installed on a server of the local network;
- Microsoft Visual Studio 2008, ASP.NET, VISUAL C#;
- .Net Framework 3.5;
- Adobe Dreamweaver CS3;
- MySQL server for storing the e-content of the service;
- the PhpMyAdmin tool, for developing and administering the database of the service;
- the HTML language, using the Get, Head, Post, Push and Put functions, for data transfer;
- the PHP language, for developing the project site.
The server database was created in the following steps:
- starting the IIS service in the PC's Windows XP OS;
- opening the index.php file from the PhpMyAdmin folder in a web browser;
- logging in as the "root" user, with a suitable password, for security reasons, to access the application's main interface;
- defining a new database using the Create New menu.
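The paper does not show the resulting schema. Purely as an illustration of how the C# server code could reach this MySQL database, a minimal sketch using MySQL Connector/NET follows (the connection string, database name and table layout are invented for the example):

using System;
using MySql.Data.MySqlClient;

class DatabaseSetup
{
    static void Main()
    {
        string connStr = "Server=localhost;Uid=root;Pwd=secret;";
        using (MySqlConnection conn = new MySqlConnection(connStr))
        {
            conn.Open();
            // Hypothetical database and table for daily stock quotations.
            new MySqlCommand("CREATE DATABASE IF NOT EXISTS mobacces", conn)
                .ExecuteNonQuery();
            new MySqlCommand(
                "CREATE TABLE IF NOT EXISTS mobacces.quotes (" +
                "symbol VARCHAR(8), day DATE, price DECIMAL(10,4))", conn)
                .ExecuteNonQuery();
        }
    }
}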
B. The Client Software Applications Development Tools for Sony Ericsson P990i
The client server system can use various mobile telephony terminals. Adequate tools must be used to develop client applications, according to the SDK GUI platform and the mobile phone operating system [1].
In case of SE P990i Smartphone, the following tools, installed on a Windows XP OS PC are used, [2]:
- the UIQ 3.0 SDK GUI platform, for Symbian 9.1, [3];
- the CodeWarrior Professional 3.1 IDE, based on the C++ language, for object oriented programming, compatible with the UIQ 3.0 SDK;
- ActiveSync utility software, for transferring the SIS file, from PC in the Smartphone’s internal or card memory, through USB cable or by Bluetooth connection.
The UIQ 3.0 SDK software design components are:
- a phone emulator, compatible with Windows OS PC;
- header files and libraries for running the application on emulator and on the real ARM processor;
- a GCC-E and WINSCW compiler and tools for UREL type build target developing and transferring the application on the phone [5];
- specific documentation and Symbian OS libraries [6].
C. The Client Software Applications for QTEK 2020i PDA
In order to develop client applications and WEB service interfaces for QTEK 2020i, PDA, Phone Edition, running Windows for Mobile 2003, the following tools are used: .NET Compact Framework SDK libraries, MS Visual Studio 2005 IDE, installed on Windows XP Professional, Active Sync 4.0 or later, for PC connection.
IV. THE STOCK EXCHANGE E-SERVICE
A. The Client Server System’s WEB Service
The WEB service is developed to offer stock exchange market information to the clients. Besides, the web service is also a public software interface, which can be accessed through the Internet by suitable mobile client applications. The WEB service offers the client fourteen WEB methods, which allow the mobile clients to:
- perform web transactions, following the evolution of the companies quoted on the Bucharest Stock Exchange market;
- create and administer user accounts, available on the server, which permit stock exchange market demo transactions;
- have the orders they place processed with a maximum delay of 15 min.
A standardized description of the operations and supported web service data types is published and accessible on the main server in the form of a WSDL (Web Services Description Language) file. Through the web service, the platform gives client software developers the possibility to integrate stock exchange market information into their own applications, using standard tools, instruments and protocols. The stock exchange data is updated by accessing the BSE information resources at the market's closing moment, when the current day's transaction reports are published.
The database contains several companies currently listed in the stock exchange's first category. The database can be accessed in two ways: through the WEB service and through PHP scripts containing the SQL queries. The database is continuously updated by an adequate software module. The query protocols used by the web service methods are SOAP 1.1 and 1.2, and HTTP Post and Get.
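On the server side, each such method is typically implemented as an ASP.NET web method. The paper does not list this code; the following is only a minimal sketch of the pattern, using the ContNou method presented below and the namespace visible in the SOAP examples (the class layout and the method body are assumptions):

using System.Web.Services;

[WebService(Namespace = "http://microsoft.com/webservices/")]
public class Service : WebService
{
    // ContNou ("New Account") is one of the fourteen web methods.
    // Its parameter mirrors the StringsDefinition element of the SOAP
    // messages shown below; the body here is a placeholder only.
    [WebMethod]
    public string ContNou(string StringsDefinition)
    {
        // The real method records the account in the MySQL database and
        // e-mails the UserId and Password back to the client.
        return "account created";
    }
}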
The New Account Statement method of the WEB service is presented in the following as an example. This method allows the client to open user accounts in order to utilize the interactive components of the web service (Figs. 1, 2, 3 and 4). An e-mail address must also be specified by the client. The New Account operation will be recorded and will take effect in the local database. An e-mail message comprising the account data, UserId and Password, will automatically be transmitted to the client.
The method's input variables, which must be provided by the user when invoking it, are the personal data: name, address, country, town, zip code, telephone number, e-mail, etc. The output variable is an Account class object, whose member variables comprise the user account data. In the following, the general SOAP 1.1, SOAP 1.2, HTTP GET and HTTP POST query formats for this method are presented. Each example contains the method's query and the answer. The marked parameters must be replaced with actual values.
POST /webservicemobacces/service.asmx HTTP/1.1
Host: serviciimobacces.ipa.ro
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://microsoft.com/webservices/ContNou"

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ContNou xmlns="http://microsoft.com/webservices/">
      <StringsDefinition>{string}</StringsDefinition>
    </ContNou>
  </soap:Body>
</soap:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ContNouResult>
      <StringsDefinition>{string}</StringsDefinition>
    </ContNouResult>
  </soap:Body>
</soap:Envelope>

Fig. 2 The SOAP 1.2 general query and response format
GET /webservicemobacces/service.asmx/ContNou?Nume=string&Prenume=string&Tara=string&Oras=string&Judet=string&CodPostal=string&Adresa=string&Telefon=string&Email=string HTTP/1.1
Host: serviciimobacces.ipa.ro

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ContNouResult>
      <StringsDefinition>{string}</StringsDefinition>
    </ContNouResult>
  </soap:Body>
</soap:Envelope>

Fig. 3 The HTTP GET general query and response format
POST /webservicemobacces/service.asmx/ContNou HTTP/1.1
Host: serviciimobacces.ipa.ro
Content-Type: application/x-www-form-urlencoded
Content-Length: length

Nume=string&Prenume=string&Tara=string&Oras=string&Judet=string&CodPostal=string&Adresa=string&Telefon=string&Email=string

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ContNouResult>
      <StringsDefinition>{string}</StringsDefinition>
    </ContNouResult>
  </soap:Body>
</soap:Envelope>

Fig. 4 The HTTP POST general query and response format
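Rather than hand-crafting the messages above, a .NET client would normally invoke the method through a proxy class generated from the published WSDL file ("Add Web Reference" in Visual Studio). A minimal sketch, assuming a generated proxy class named Service whose method signature mirrors the messages shown (the proxy name and the argument value are illustrative):

using System;

class WebServiceClient
{
    static void Main()
    {
        // Service is the proxy generated from the WSDL; calling ContNou
        // serializes the request into one of the SOAP messages shown above.
        Service service = new Service();
        service.Url = "http://serviciimobacces.ipa.ro/webservicemobacces/service.asmx";
        string result = service.ContNou("example strings definition");
        Console.WriteLine(result);
    }
}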
V. RESULTS
The functionality of the system's client software applications was tested as presented in the following:
A. Stock Market Exchange Web Service Access Using Browser
It is useful to test the web service's functionality before consuming it over mobile channels. This allows us to establish that the web service works properly and that the extension to wireless channels is possible. The invocation and response of the WEB service from the browser, for the New Account and Account Information methods, are presented in Figs. 5, 6, 7 and 8.
B. Stock Market Exchange Web Service Access using Windows Mobile and Symbian Client Application
The specific software installation kits for PDAs and Smartphones, developed for the Windows Mobile and Symbian operating systems, are stored on the platform site and are directly downloadable to the mobile terminals. To obtain them, interested users can browse to the platform site. By registering in the database, they will receive a "User name" and a password that allow them to access the platform site's private section, where the client application kits are stored. The mobile applications can be installed on the terminal and used to invoke the web service methods. The result of the New Account method invoked from a PDA mobile terminal is presented in Fig. 9.
C. Stock Market Exchange E-Service Access using PhP Query Functions Accessed on Laptop, Smartphone or PDA
The server database can also be accessed through PHP scripts containing the SQL queries. The PHP scripts are invoked directly through a Web browser, because they are hosted on the same server as the database. The call has the following general form: http://localhost/query.php?parameter=value.
The stock exchange information of a company is available through a URL link such as http://localhost/query3.php?symbol=ATB; this symbol stands for an antibiotics company.
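Such a query can equally be issued programmatically; for instance, a minimal C# sketch (the URL is the one above; the surrounding program is illustrative only):

using System;
using System.Net;

class PhpQueryExample
{
    static void Main()
    {
        using (WebClient client = new WebClient())
        {
            // Fetch the quotation data for the ATB symbol from the PHP script.
            string response = client.DownloadString(
                "http://localhost/query3.php?symbol=ATB");
            Console.WriteLine(response);
        }
    }
}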
The list of companies having a quotation variation under a specified maximum threshold is obtained by the following sequence of commands:
- in the main screen, select Menu – Maximum Threshold;
- place the cursor in the Maximum Threshold window, type the desired maximum value, then press the Done and Go buttons;
- the companies that had a quotation variation under the introduced value are displayed.
The list of companies having a quotation variation above a specified minimum threshold can be obtained by the following sequence of commands:
- in the main screen, select Menu – Minimum Threshold;
- place the cursor in the Minimum Threshold window, type the desired minimum value, then press the Done and Go buttons;
- in the information box, the companies that had an index variation above the introduced value are displayed.
The same applications have been tested on a PDA terminal, the QTEK 2020i (see Fig. 11).
Based on these client applications, users can efficiently decide to sell or buy shares, with the advantage of terminal mobility, which increases their financial benefits.
The following functions of the client server system have been implemented and tested:
- server software application for stock exchange informational service;
- client software applications for Smartphones and PDAs, suitable for electronic services access.
The client server system can be extended to provide various kinds of electronic service: e-commerce, e-banking, e-government, etc.
Fig. 5 New Account WEB service method invoked from browser
Fig. 6 WEB service response of the New Account method invoked from browser
[Screenshot: the "Serviciu WEB Mobacces" test page for the InformatiCont (Account Information) operation, listing its UserID and Password parameters and an Invoke button for testing the method over HTTP POST]
Fig. 7 Account Information WEB service method invoked from browser
Fig. 8 WEB service response of the Account Information method invoked from browser
Fig. 9 New Account WEB method invoked from PDA mobile terminal
Fig. 10 Information about stock market quotation on SE P990i display
VI. CONCLUSION
A web service based client server system for a stock exchange electronic service was implemented. Mobile clients can use Smartphones, PDAs or laptops to access and use the electronic service. 3G, GSM/GPRS or WLAN channels are used in the system for data transmission.
The server hosts the WEB service, consisting of adequate methods that give users the possibility to open account statements and to obtain information regarding the quoted companies, transactions, share values, etc. The server database stores the information about the companies, regarding the stock transaction indexes. Various PHP functions are implemented to provide the answers to the clients' queries:
- selected companies stock market exchange information;
- the list of companies whose quotation variation is under a maximum threshold, or above a minimum threshold, introduced by the user.
The WEB service methods can be invoked by the clients through browsers, which is a good way to establish that the web service works properly and that the extension to wireless channels will function well.
The invocation of the New Account Statement method of the WEB service from a browser, and the results it provides, are presented in the paper as an example. The client is allowed to open user accounts in order to utilize the facilities of the service. The New Account operation is recorded and stored in the local database.
The stock market information is available to the client primarily by direct and dedicated applications, which significantly reduce the access time and data traffic cost.
The service's site offers the client sample access applications for the SE P990i and QTEK 2020i mobile terminals. These samples can be downloaded to a PC and then installed on the mobile phone using ordinary Sync type tools. The executable files can also be downloaded directly to the mobile phone, using wireless channels and a web browser.
The mobile applications of the client server system have been implemented and tested on Windows Mobile and Symbian V. 9.1. operating system terminals.
The client server system functionality can be extended and modified to develop and provide various kinds of electronic services.
REFERENCES
Pop I. Eugen (PhD'99) has a PhD in Telecommunications. He was born in Brasov, Romania, in 1961. He became an engineer in electronics in 1986 and received a PhD in Telecommunications in 1999. He is R&D Project Manager and Principal Researcher at the Institute for Automation in Bucharest, Romania, and is associated with the "Politehnica" University in Bucharest. He is interested in mobile communication networks, GSM/GPRS, CDMA and WLAN. At present he is involved in developing value added services for mobile telephony networks. R&D in e-services access and software development for 3G mobile terminals, Smartphones and PDAs, for the business environment, is one of his main directions of activity.
Phone: 4021-3180032, e-mail: epop@ipa.ro.
M. Barbos is a researcher at the Automation Institute in Bucharest, involved in mobile telephony data transmission developments.
Phone: 4021-3180032, e-mail: mihaibarbos@ipa.ro.
Towards an Understanding of Sourcing Decisions
Darja Smite¹,², Claes Wohlin¹,³, Aybuke Aurum³, Ronald Jabangwe¹, Emil Numminen¹
¹ Blekinge Institute of Technology, ² University of Latvia, ³ University of New South Wales
Corresponding author:
Darja Smite
SE-371 79 Karlskrona, Sweden
Cell phone: +46 (0) 702 100 858
Email: Darja.Smite@bth.se
Acknowledgements:
This research is funded by Ericsson Software Research, the Swedish Knowledge Foundation under the KK-Hög grant 2009/0249, and European Social Fund through a research project “Application of computer science and its links to quantum physics”.
Towards an Understanding of Offshore Insourcing Decisions
Abstract
A variety of new forms of business are enabled through globalization and practiced by software organizations today. While companies go global to reduce their development costs, access a larger pool of resources and explore new markets, it is often assumed that the level of service will remain the same after implementing the sourcing decisions. In contrast, critical studies have identified that global software development is associated with unique challenges, and many global projects fail to mitigate the implications of a particular environment. In this paper we explore offshore insourcing decisions in an empirical field study conducted at Ericsson. A set of questions for consideration emerged from empirical observations of a software transfer project and was used as a basis for building an insourcing decision model. We discuss the ingredients of the insourcing decision process and emphasize that, due to the wide variability in the choices, the steps towards a global initiative are not, and should not be, the same for every company and project. Finally, companies should carefully select their what, where, when, how and why strategies, paying attention to the alignment of each selected decision and its contribution to the achievement of the expected benefits.
Keywords Global software development, Offshore insourcing, Sourcing, Go-global decisions, Empirical field study
1. INTRODUCTION
Globalization has led to significant changes also in software organizations. As a result, global software development has become business as usual, driven by the availability and costs of resources accessible around the world, and other benefits. While there are many assumed beneficial aspects of offshore development, critical studies have identified that these benefits are neither clear-cut nor can their realization be taken for granted (Conchuir et al. 2006), (Ebert 2007). This is due to the considerable complexity of global sourcing and cross-site collaboration, which adds a set of specific risks on top of the regular ones (Karolak 1998), (Ebert 2007).
Understanding the reasons behind sourcing failures is however not a straightforward task, due to the diversity of sourcing scenarios. Experiences of internal (insourcing) versus external (outsourcing) collaborations (Poikolainen and Paananen 2007), and nearshore versus farshore collaborations (Carmel and Abbott 2007), to name a few, suggest that it is fair to assume that strategies that apply in one context might not necessarily apply in another. The implication of this is that the steps towards an offshoring initiative are not, and should not be the same for every company and project (Conchuir et al. 2006), (CFUS 2008).
Although a wealth of academic literature exists on examining sourcing options, it generally addresses the decision of whether or not to outsource (Šmite et al. 2010), (Tanriverdi et al. 2007) while specific components of possible decisions appear
relatively unexplored (e.g., which particular development should be kept within the company, or transferred to another site). There is also little academic literature that compares different sourcing decisions. As a result, a vast majority of so-called “go-global” decisions are not supported by deliberate analysis of return on investments and systematic risk management. In fact, some organizations have been criticized for outsourcing simply because “everybody is doing it” (Weidenbaum 2005). Consequently, companies spend years and valuable resources on learning by trying, and all too often failing.
Likewise, our empirical investigation emerged from the desire to explain the realization or non-realization of expected offshore benefits at Ericsson, a large Swedish software systems development company operating in the telecommunications domain. Ericsson is an international corporation involved in offshore insourcing, i.e. internal collaboration with several geographically, temporally and culturally distant sites. In this paper, we illustrate the line of decision-making associated with software transfers, i.e. the transition of work from one site to another. Motivated by the lack of systematic research work in the area, our aim was to address the following research question:
RQ: What shall be considered when making offshore insourcing decisions?
The rest of the paper is organized as follows. Empirical background and motivation for our field study are outlined in Section 2. In Section 3, we analyze the key elements of offshore insourcing decisions based on empirical observations and related field literature. These findings are further incorporated into a structured model for exploring and evaluating different sourcing decisions, described and discussed in Section 4. Finally, Section 5 concludes the paper with a summary of the results.
2. EMPIRICAL BACKGROUND AND MOTIVATION
2.1. Field Study Description
Empirical observations discussed in this paper can be classified as a field study (Zelkowitz and Wallace, 1998) based on multiple sources of evidence related to an offshore insourcing project investigated in one company.
Ericsson is a large-scale developer of software-intensive systems for the telecommunications domain, with sites all around the world. The main focus of our field study at Ericsson was a software transfer project (Alpha), which prescribed the final relocation of responsibility for a software component from Sweden to India. The project involved three sites: a headquarters site in Sweden, which handled product management (site A), a different site in Sweden initially holding responsibility for development (site B), and a site in India receiving the development responsibility as a result of the offshore insourcing decision under investigation (site C).
2.2. Research Design
Project data was collected from multiple sources, namely, qualitative interviews with software engineers, product managers, managers responsible for implementing sourcing decisions, site managers, investigation of sourcing project documentation, and discussions in regular research group meetings at the company. An overview of the research activities is given in Table 1.
Table 1: Research Activities
<table>
<thead>
<tr>
<th>Date</th>
<th>Main Research Activities</th>
</tr>
</thead>
<tbody>
<tr>
<td>Jun 2009</td>
<td>Interviews with developers: two interviews with Indian site representatives</td>
</tr>
<tr>
<td>Oct 2009</td>
<td>Interviews with two product managers</td>
</tr>
<tr>
<td>Oct 2009</td>
<td>Research seminar on transfer challenges</td>
</tr>
<tr>
<td>Nov 2009</td>
<td>Two group interviews with the transfer project manager and product development manager</td>
</tr>
<tr>
<td>Mar 2009 - Nov 2010</td>
<td>Monthly research group meetings (16 in total)</td>
</tr>
<tr>
<td>Mar 2010</td>
<td>Interview with the Swedish site manager</td>
</tr>
<tr>
<td>Aug 2010</td>
<td>Research seminar on transfer decisions</td>
</tr>
</tbody>
</table>
Although the number of interviews is relatively small, interviewing personnel with different roles and responsibilities enabled different perspectives. All interviews were conducted in person, in Sweden, and were exploratory in nature. Interviews focused on collecting observations about the transfer decisions, the challenges related to their implementation, the results expected and achieved, and the awareness of these decisions among different levels of personnel. All interviews were documented during the interview process; the notes were then sent to the interviewees for approval. Research group meetings and informal discussions were used for cross-validation of the gathered observations with the managers involved as research stakeholders, and
to guide further investigation. Research seminars involved a wider audience and aimed at disseminating and discussing the research findings.
Although the selected study does not form a complete picture of all types of offshore insourcing decisions, empirical observations from an offshore insourcing software transfer were used to analyze different aspects of decision-making. In particular, interviews and project historical information were reviewed for evidence supporting the following three categories: characteristics of the decision, the situation before the decision, and the expected situation after the implementation of the decision. We have sought to identify and describe patterns and themes of consideration from the perspective of the participants, and then to understand and explain these patterns and themes from the decision-making perspective (Creswell 2003). The three categories used for structuring the decisions were inspired by the change implementation model suggested by Pieters (2002). In his model, Pieters suggests weighing the gains from implementing the changes (in our case, offshore insourcing decisions) against the losses, and emphasizes the importance of evaluating the transition phase (in our case, the implementation of offshore insourcing decisions). This is because numerous experiments have demonstrated that having a vision is insufficient for success, as people often fail to realize the necessary steps for transition (Pieters 2002).
Exploratory in nature, our research led to further findings concerning offshore insourcing decisions. Several important themes for consideration emerged from analyzing the history of the project under study. In the next section we provide a detailed description of the offshore insourcing decisions. The project narrative is divided into descriptions of the decisions, and of the situations before and after their implementation. We also outline our line of thinking by mapping the extracts to the themes for consideration emerging from empirical observations. These findings were further used to build a model for supporting offshore insourcing decisions from a bottom-up perspective (see Section 4).
2.3. Project Alpha
2.3.1. History of Decisions Taken
**Situation Before the Decision**
The transfer decision for Project Alpha was taken in Ericsson by the development site manager in Sweden. The reasons for this were twofold: 1) to free up existing resources for new projects, and at the same time 2) to decrease the overall costs as directed by the headquarters. With this aim, a product component that had been initially developed by an external software company and later bought by Ericsson was chosen. The component was transferred from the acquired company and had since been developed in a distributed way involving two development sites (in Sweden and in India) with product
management performed at the headquarters site. Distribution was regarded as a challenging way of collaborating, which additionally motivated the transfer decision. (The numbered decision themes extracted from this narrative are collected in Table 2 at the end of this section.)
The component selected for sourcing was characterized as complex and immature due to its history, and the experienced developers from the Swedish site had been investing significant resources into improving the software up until the sourcing decision was taken. Before the transfer, 54 members of the Swedish site and 46 members of the Indian site (a total of 100 employees) handled the product development.
Implementation of the Decision
The sourcing decision prescribed the full allocation of responsibility for the selected component to Ericsson's subsidiary in India, which was already involved in the product development. The decision was executed as a separate transfer project with a dedicated project manager and an allocated budget for active training and exchange visits. It was announced in Q1/2009 and completed by the end of the year. The transfer happened in the middle of a development release. This strained the overall capability of the shrinking pool of experienced resources, and the slowly growing pool of inexperienced resources, to handle the ongoing work.
Expected Situation After
On completion of the transfer, the Indian site was expected to handle development independently, with a 10% decrease in the number of resources. It was expected that the previous involvement of the Indian developers would ensure a fast learning curve, and thus the site manager expected a return on the investments into the transfer within one to two years.
Outcome
The transfer of the complex and immature software product component led to significant challenges and required unforeseen resources for training and for documenting the product before handing over the responsibility. After the transfer was completed, five Swedish developers continued their work onsite and several Swedish developers were relocated to India to ensure access to product expertise, if needed. The number of Indian staff
reached 80 employees. As planned, *most of the Swedish employees were further involved in the new projects*\(^{21}\). Product management was still provided from the headquarters site in Sweden; thus some challenges of distribution remained. This required *additional effort from product managers for coordination of requirements*\(^{22}\) over an even larger distance.
Finally, after the transfer was completed, the management made a decision about the future of the transferred software component. It was decided to switch development into *maintenance mode in the beginning of 2011*\(^{23}\) and to continue product support for *at least five years*\(^{24}\). As a consequence, *half of the Indian developers are being slowly phased out within the first two years*\(^{25}\) after the transfer.
<table>
<thead>
<tr>
<th colspan="2">Table 2: Themes for Consideration</th>
</tr>
<tr>
<th>#</th>
<th>Decision Theme</th>
</tr>
</thead>
<tbody>
<tr><td>1</td><td>Motivation (why)</td></tr>
<tr><td>2</td><td>Motivation (why)</td></tr>
<tr><td>3</td><td>Work to be sourced (what)</td></tr>
<tr><td>4</td><td>Current costs (context)</td></tr>
<tr><td>5</td><td>Work to be sourced (what)</td></tr>
<tr><td>6</td><td>Current benefits (context)</td></tr>
<tr><td>7</td><td>Current costs (context)</td></tr>
<tr><td>8</td><td>Current costs (context)</td></tr>
<tr><td>9</td><td>Type of collaboration (how)</td></tr>
<tr><td>10</td><td>Destination (where)</td></tr>
<tr><td>11</td><td>Expected costs (outcome)</td></tr>
<tr><td>12</td><td>Point in product life cycle (when)</td></tr>
<tr><td>13</td><td>Expected benefits (outcome)</td></tr>
<tr><td>14</td><td>Expected benefits (outcome)</td></tr>
<tr><td>15</td><td>Expected benefits (outcome)</td></tr>
<tr><td>16</td><td>Expected cost/benefits (outcome)</td></tr>
<tr><td>17</td><td>Costs during transfer</td></tr>
<tr><td>18</td><td>Costs during transfer</td></tr>
<tr><td>19</td><td>Costs after transfer</td></tr>
<tr><td>20</td><td>Costs after transfer</td></tr>
</tbody>
</table>
---
### 2.3.2. Lessons Learned
Analysis of the sourcing decision-making for project Alpha suggests that several unforeseen risks manifested during and after the implementation of the transfer decision. The gained experience generated a list of questions for consideration in future decisions, which we grouped under decision themes and coded using five interrogative questions from the Six W's widely used in journalism for information gathering. The following themes emerged from the observations (discussed in more detail in the following section):
- **Why** is an organization making an offshore insourcing decision, what are the expected costs/benefits to be achieved and which costs/benefits are dissatisfactory in the existing setting?
- **What** shall be sourced and whether the selected type of work is suitable for the selected offshore insourcing strategy?
- **When** shall the offshore insourcing strategy be implemented and whether this will have any effect on the achievement of expected benefits?
- **Where** will the work be located and whether the offshore destination is optimal in terms of proximity and skills for the demands of the work and collaboration?
- **How** will the work, roles and responsibilities be divided and whether the chosen work division is suitable for the overall offshore insourcing strategy?
Each of these themes comprises questions for consideration in the decision-making process, together with decision options. The chain of decisions made afterwards may influence the probability of achieving the expected benefits and thus shall be approached with considerable attention. In the next section we triangulate our observations with related findings and try to better understand the decision options and how they are interrelated.
3. THEMES FOR CONSIDERATION AND DECISION OPTIONS
3.1. Why – Motivation and Expected Outcome
First of all, it is essential to understand the drawbacks of the current setting. Managers often perceive offshore sourcing options as a panacea for processes that may not be yielding the desired results (Oshri et al. 2009). However, there are associated risks that may hinder the success of sourcing decisions, such as the possibility that the processes may not function as expected, also known as "operational risk", or the possibility that the sourcing relationships (e.g. supplier-client) may not yield ideal results, also known as "structural risk" (Aron and Singh 2005). These and other potentially counterproductive risks make it imperative that offshoring decisions are not carried out blindly, without any clear expectations and goals. Thus, primarily, an analysis should be conducted to understand the inhibiting factors associated with the current situation (Oshri et al. 2009).
An understanding of the current situation and the associated drawbacks enables the organization to identify the currently problematic areas in different processes. In turn, this facilitates deriving and defining clear expectations and goals for the offshoring strategies. Comprehension of the drawbacks of the current setting helps align these drawbacks with the drivers or motivations for offshore decisions. Ultimately, examination of the rationale and drivers behind these decisions, and identification of the current drawbacks, helps in deriving realistic goals and expectations in relation to costs and potential benefits. We therefore emphasize the need for a clear link and alignment between the drawbacks of the current situation, the rationale for offshore insourcing, and the expected outcome.
3.2. What – Type of Software Work
The transfer in project Alpha was more expensive than expected, mainly due to the complexity and immaturity of the selected component. More complex or immature products have been recognized to require more training and more documentation, and to lead to a longer learning curve for the new developers. We thus suggest that the selection of the work for offshore insourcing plays an important role in decision-making. Our observations of transferring existing product development from one location to another also suggest that the coupling of the work matters: independent components are easier to transfer than inter-dependent ones (Šmite and Wohlin, in press).
Related research suggests that nowadays software organizations are sourcing all types of software work, e.g. handling a product, component, functionality, or separate development activity (Mockus & Weiss 2001). Although there are as yet no common best practices determining the success or failure of offshoring different types of work, empirical observations confirm our findings that certain characteristics of the work can alleviate or challenge the outcome of these endeavors. Similarly, experiences drawn from numerous GSE initiatives in the telecommunication and automotive industries (Ebert 2007) suggest that global development of mere IT applications or internet services is fairly easy, while embedded software faces major challenges; big savings have also been reported from sourcing well-defined processes that require
little control (e.g. maintenance projects), technical documentation or validation activities.
3.3. When – Software Lifecycle
Depending on the situation, offshore insourcing decisions can be made for new projects at the start of their lifecycle, or for existing projects in the middle of a product lifecycle or close to its end. In the Alpha project the transfer was executed in the middle of a release. This put extra stress on the ongoing operation, since teaching and learning require significant effort from the old and new developers. In addition, a full transfer resulting in independent productive performance is claimed to take five to six years (Kommeren and Parviainen 2007), (Šmite and Wohlin, in press), which confirms our finding that a transfer decision shall be made with a horizon of at least seven to eight years to allow the economic benefits to be reached. We therefore conclude that the point in the software lifecycle at which the offshore insourcing decision is implemented influences the degree to which the expected benefits are achieved.
3.4. Where – Destination
The sourcing destination in project Alpha was selected with particular consideration. Our observations suggest that characteristics of the offshore destination, such as maturity and attrition levels, play one of the crucial roles in determining the success or failure of future collaboration. In our field study, the Indian site was Ericsson's own subsidiary, which provided the significant level of control associated with insourcing relations (Tanriverdi et al. 2007), (Carmel and Tjia 2005). Therefore the turnover of employees was reduced with the help of internal promotion campaigns and the fostering of corporate identity, despite the common view that attrition, especially in India, is very high (Conchuir et al. 2006).
The distance or proximity of the collaborating sites may influence the success of the relationship, and nearshoring versus farshoring decisions are often traded off against the costs and risks of doing business in a specific destination (Carmel and Abbott 2007). Challenges experienced in distributed development over the large distance between Sweden and India motivated the further offshore insourcing decision that relocated all development activities to one site. In this case, the previous involvement of the Indian site meant that key developers already existed, which suggested an easier implementation of the transfer decision. A lack of domain and product expertise is frequently referred to as one of the key challenges when starting global software development (Herbsleb et al. 2005), (Battin et al. 2001). As a result, the selected destination had a minimal negative impact associated with the increase in the costs of coordinating the product management activities remaining in Sweden.
In conclusion, we emphasize that offshore location decisions shall be based on task attributes and how these are influenced by the necessity to meet (travel time and costs), asynchronously interact (time zone overlap), speak a common language (linguistic similarity or language skills), etc.
3.5. How – Type of Collaboration
Work division and the allocation of roles and responsibilities also play an important role in offshore insourcing decisions, especially since these are tightly influenced by the type of work being sourced, the characteristics of the sites involved, and their relationship. Our observations indirectly support previous studies that have demonstrated that distributed work items appear to take considerably longer to complete than similar items where all the work is co-located (Herbsleb and Mockus 2003). Likewise, the difficulties of handling product management at a distance at Ericsson confirm that software development phases such as requirements engineering become challenging when cross-functional stakeholder groups specify requirements across cultural, language and time zone boundaries (Damian and Zowghi 2002). The transfer decision thus favored the allocation of a complete component to one site. Although the component was still related to a product family, this enabled the benefits of modularized or decoupled task strategies, such as the isolation of the effect of changes (Herbsleb and Mockus 2003). Related research suggests that modularized development shows good results when tasks are well separated and supported by ownership, while projects with tasks broken down too much fail (Ebert 2007). It is worth emphasizing that ownership of, and responsibility for, the work performed in the sourcing destination influence motivation (Beecham et al. 2008) and were also noted to consequently decrease the level of attrition of the employees at Ericsson.
4. OFFSHORE INSOURCING DECISION MODEL
The discussion above can be stated as a decision-making problem based on Luce and Raiffa (1957). The main objective of the decision analysis is to maximize the expected utility of the decision made, based on the chosen action or strategy and the resulting state of the world. In the above setting this means that we want to maximize the difference between the present and future why. In the studied case, strategies for doing so consisted of a chain of related decision points taken within the identified themes for consideration. This way, after defining the expected outcome (freeing up resources in Sweden and obtaining cost benefits through offshore insourcing to India), the chain of further decisions consisted of selecting what was chosen to be sourced, when and how. In a formal setting the decision analysis is made by defining a finite set of actions and states of the world, i.e. future states that we do not control but make the decision about. The resulting value of the decision is then defined by the combination of the action or strategy chosen and which state becomes the true state. The assumption underlying the analysis is that we have complete information and can characterize the setting that the decision concerns in correct and finite states. For these states we can define suitable actions to maximize the outcome value of the decision. Accordingly, decision points emerge from selecting different options available under the identified themes.
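In standard decision-theoretic notation this reads as follows (a sketch of the classic expected-utility formulation; the notation, though not the idea, is ours):

$$a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s \in S} p(s)\, u(a, s)$$

where \(A\) is the finite set of actions (e.g. combinations of what, where, when and how options), \(S\) is the finite set of states of the world, \(p(s)\) is the probability that \(s\) is the true state, and \(u(a,s)\) is the utility of taking action \(a\) when \(s\) obtains.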
4.1. The Model
Empirical observations led to breaking down offshore insourcing decisions into key questions for consideration that we call decision themes. Related literature was consulted for cross-examination of the findings and for complementing each of the decision themes with the options available within the decision space. Thus the model outlined in this section emerged from exploratory thinking (a bottom-up perspective).
The model comprises three states and five key consideration themes (see Figure 2). These themes provide general questions for consideration of decision options and help to distinguish different sourcing strategies. The proposed model assists in developing a detailed plan for the implementation of a strategy and can also be helpful in evaluating possible alternatives.
Fig.2 – Decision Model
We suggest focusing the analysis of offshore insourcing decisions on the probability of achieving the expected costs/benefits through a certain combination of variables in the decision options. This is especially important in the light of our findings: comparative analysis of different sourcing scenarios showed that certain approaches in the wrong combination may lead to undesirable outcomes. For example, distribution of highly coupled work items that require active collaboration might clash with the inability for synchronous interaction, if the sourcing destination is situated in a far time zone. Thus, paying attention to the decision points suggested in this paper, supplemented by continuous knowledge and experience accumulation, shall help to evaluate the viability of selected decisions and perhaps avoid pitfalls.

At the same time, the observations lead us to conclude that there might be no single best approach to sourcing, as different strategies, e.g. for work division, can be more or less effective depending on a variety of circumstances (Herbsleb and Mockus 2003). The implication of this is that it is essential to evaluate the risks and costs associated with each strategy and the chain of selected options when planning sourcing decisions, and to assess the suitability of the work under consideration for the chosen work division approach.
4.2. Dynamic Decisions
Based on the analysis in this paper, it might be relevant to implement more dynamic decision-making, instead of the static view assumed by traditional decision analysis, i.e. that there is one decision to be made. The target of transferring the project is maximizing the why. However, maximizing the why requires controlling the uncertainty in the what that is transferred over time, for the expected outcome to be realized. Depending on the uncertainty in the what, different sourcing strategies, i.e. the how, should be implemented. Given the level of complexity in the project, a longer time horizon should be used for the offshore insourcing to reach the economic break-even. With a longer time horizon, the offshore insourcing can be made as a sequence of decisions, providing an opportunity for information gathering along the way. This would reduce the uncertainty in the what and provide the possibility to stop the sourcing at an early stage if the implementation does not meet the target ratios. It is also important to analyze the timing of the decision, i.e. the when. In the studied transfer strategy, time to build the competence for the specific project was necessary for the organization to succeed. This means that investing in training and prolonging the onsite support reduced the uncertainty in the what. Finally, the expected ROI of the project can also be affected by the where decisions. Based on previous sourcing implementations, different sites fit different projects. Ultimately, if the what does not meet the expected criteria, the company can take the project back at a known cost for development. The dynamics of the decisions can be illustrated by instantiating the model on the studied project, in the following example (see Fig. 3).
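This sequential view can be sketched, purely as an illustration (the recursion below is our own formalization, not one proposed in the paper), as a finite-horizon optimal stopping problem:

$$V_t \;=\; \max\Big\{\, -c_{\mathrm{abort}}, \;\; \mathbb{E}\big[\, r_t + V_{t+1} \mid I_t \,\big] \Big\}, \qquad V_T = 0$$

where \(r_t\) is the net benefit realized in period \(t\), \(c_{\mathrm{abort}}\) is the known cost of taking the project back, and \(I_t\) is the information gathered up to period \(t\); the uncertainty in the what shrinks as \(I_t\) accumulates.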
---
**Fig.3 – Model Instantiation: Decision Chain in Project Alpha**
<table>
<thead>
<tr>
<th>Theme</th>
<th>Selected options</th>
<th>Context</th>
<th>Expected outcome: costs/benefits</th>
</tr>
</thead>
<tbody>
<tr>
<td>Why</td>
<td>Free-up resources, Reduced costs</td>
<td>Inability to employ in Sweden</td>
<td>Availability of developers, ROI in 1-2 years</td>
</tr>
<tr>
<td>What</td>
<td>Product Alpha</td>
<td>54 developers in Sweden, 46 developers in India, Overhead of distributed work, Experienced developers, Complex product, Immature product, Limited documentation</td>
<td>Costs of documentation</td>
</tr>
<tr>
<td>Where</td>
<td>India</td>
<td>Some experience exists, Insufficient existing resources</td>
<td>Fast learning curve, New employment, Training</td>
</tr>
<tr>
<td>When</td>
<td>Middle of a release</td>
<td>Overload of existing staff, Decrease in scope of delivery</td>
<td></td>
</tr>
<tr>
<td>How</td>
<td>Full transfer</td>
<td>Transfer costs, Cost for after-transfer support</td>
<td></td>
</tr>
</tbody>
</table>
It can be seen that the decision chain can in every case take a different road regarding the sequence of decisions. Also, some of the options can be more or less locked to the context of the case. Finally, we conclude that it is important to evaluate the context-related consequences of each selected option, and to align the strategy with the expected outcome in the end.
4.3. Limitations and Threats to Validity
First of all, the offshore insourcing decision model shall not be viewed as a final or single model applicable to all offshore insourcing decisions, since the objective of the model is to facilitate the decision-making process, not to represent reality.
The applicability of the model is limited to the scope of offshore insourcing decisions. However, we assume that it can also be useful for modeling offshore outsourcing decisions, as well as onshore insourcing and outsourcing decisions. We suggest that similar investigations of offshore insourcing be performed to increase the reliability of the model, and investigations of other sourcing strategies to shed light on the generalizability of the model.
Validity threats in this research were addressed through triangulation and peer debriefing (Creswell 2003). We triangulated different data sources (related literature, interview data, informal discussions and project documentation) to build a coherent understanding of the offshore insourcing decisions. Peer debriefing was used to provide an external view on the findings from the co-authors who had not been directly involved in collecting and analyzing the empirical data. In addition, the credibility of the empirical narrative outlined in this paper is supported by confirming it with the case company and by having two co-authors who have spent a prolonged time in the field.
5. CONCLUSIONS
In this paper we aimed at exploring the key factors influencing offshore insourcing decisions. This was approached through an exploratory study based on related research findings and an empirical field study of one global transfer project in an international software company. We have found that an offshore insourcing decision comprises selecting the what, where, when, how and why to source. Although companies claim to start offshoring for reasons less offensive than simply reducing costs, the main driving force for offshoring has always been related to costs (Carmel and Tjia 2005). This means that the main objective of the decision analysis is to maximize the expected utility of the decision made, based on the chosen strategies. This requires a deliberate analysis of the factors that influence the outcome of the decision. In our study we have observed that particular decision points can be interrelated, and thus it is essential to align the what, where, when and how with the why. This is, however, not an easy task, since companies may lock themselves early in the decision process to, e.g., available types of software work, existing offshore locations, points in the lifecycles of the products, and experienced work division modes.
Finally, on the basis of our observations we offer an offshore insourcing decision model, which shall assist in evaluating existing costs and benefits, expected costs and benefits, and selected implementation scenarios. Although optimal strategies and solutions are yet to be explored in more detail, we believe that the raised questions provide a structure for exploring, evaluating and comparing different decisions, and for planning their implementation. Experience structured accordingly shall also contribute to continuous learning in this field.
REFERENCES:
Šmite D and Wohlin C (in press) Strategies Facilitating Software Product Transfers. Accepted for publication in IEEE Software
The following problems have been used on previous exams for CS212. The exam will have 4 problems.
1. Write a function called `shorten` that is defined in the class `P8` and shortens each element of an array of strings. Every string with more than two characters is cut down to its first two characters.
For example, a program that uses the function `shorten` follows.
```java
public class P8 {
public static void main(String args[]) {
String x[] = {"CSCI", "1", "11", "Queens", "College", "CUNY"};
shorten(x);
for (int i = 0; i < 6; i++) System.out.print(x[i] + " ");
// Output: CS 1 11 Qu Co CU
System.out.println();
}
}
```
**Answer:**
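For self-checking, a sketch of one possible solution (not an official answer key):
```java
// One possible solution: cut every string longer than two characters
// down to its first two characters, in place.
public static void shorten(String[] a) {
    for (int i = 0; i < a.length; i++) {
        if (a[i].length() > 2) a[i] = a[i].substring(0, 2);
    }
}
```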
2. Write the best title lines for the functions that are called by the following main program. Do not supply the blocks for the functions.
```java
public class P1 {
public static void main(String args[]) {
int x = 2, y = 3, z[] = new int[4];
boolean a = true, b = false, c[] = new boolean[4];
String s = "Hello", t = "goodbye", u[][] = new String[4][5];
for (int i = 0; i < 4; i++) c[i] = data(x, y, 2.5); // (a)
setToFive(z, c);
System.out.println(z[1]); // (b) prints 5
System.out.println(speedLimit(x, z[1])); // (c) prints 55
System.out.println(numberStrings(4, u, 5)); // (d) prints 20
f(numberStrings(0, u, 0), data(y, x, f(20, a || b))); // (e)
}
}
```
(a) Title line for `data` as called at the line marked (a).
**Answer:**
(b) Title line for `setToFive` as called at the line marked (b).
**Answer:**
(c) Title line for `speedLimit` as called at the line marked (c).
**Answer:**
(d) Title line for `numberStrings` as called at the line marked (d).
**Answer:**
(e) Title line for `f` as called at the line marked (e).
**Answer:**
3. Consider the following Java program.
```java
public class P2 {
public static int F(int x[], int c) {
if (c < 3)
return 0;
return x[c - 1] + F(x, c - 1);
}
public static int G(int a, int b) {
b = b - a;
a = b + a;
return a;
}
public static void main(String args[]) {
int a = 4, b = 1;
int x[] = {3, 1, 4, 1, 5};
String s = "Problem Number 2";
System.out.println(x[2 + 2] + x[2]); // line (a)
System.out.println(s.substring(2, 3)); // line (b)
System.out.println(s.substring(s.indexOf("b"))); // line (c)
System.out.println("" + G(b, a) + a + b); // line (d)
System.out.println(F(x, 5)); // line (e)
}
}
```
(a) What is the output at line (a)?
Answer:
(b) What is the output at line (b)?
Answer:
(c) What is the output at line (c)?
Answer:
(d) What is the output at line (d)?
Answer:
(e) What is the output at line (e)?
Answer:
4. Write blocks of code to perform the functions used in the following main program. Your blocks must match the given title lines. Each block should be a short function of only a few lines.
```java
public class P3 {
public static void main(String args[]) {
int a[][] = {{1, 2, 3, 4}, {0, 1, 2, 3}};
int b[] = {3, 1, 4, 1};
int x = 1, y = 2;
String s = "hello";
// (a) Return true if at least one of x and y is positive. Here Yes is printed
if (positive(x, y)) System.out.println("Yes");
// (b) Return the sum of the first row. Here 10 is printed.
System.out.println(rowSum(a));
// (c) Return the smallest element. Here 1 is printed.
System.out.println(smallest(b));
// (d) Remove the first letter. Here ello is printed.
System.out.println(removeFirst(s));
// (e) Insert an X at the specified position. Here heXllo is printed.
s = addX(s, 2);
System.out.println(s);
}
}
```
(a) private static boolean positive(int x, int y)
Answer:
(b) private static int rowSum(int[][] a)
Answer:
(c) private static int smallest(int[] b)
Answer:
(d) private static String removeFirst(String s)
Answer:
(e) private static String addX(String s, int i)
Answer:
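For self-checking, sketches of possible solutions matching the given title lines (not an official answer key):
```java
private static boolean positive(int x, int y) {
    return x > 0 || y > 0;  // true if at least one is positive
}

private static int rowSum(int[][] a) {
    int sum = 0;
    for (int v : a[0]) sum += v;  // sum of the first row
    return sum;
}

private static int smallest(int[] b) {
    int min = b[0];
    for (int v : b) if (v < min) min = v;
    return min;
}

private static String removeFirst(String s) {
    return s.substring(1);
}

private static String addX(String s, int i) {
    return s.substring(0, i) + "X" + s.substring(i);
}
```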
5. Write a function called `randFill` that fills the entries of an array with random integers in the range from 10 to 99 (inclusive). (You should use a standard Java method to generate the values. Your solution should use no more than 6 lines of code.)
For example, a program that uses the function `randFill` follows.
```java
public class P4 {
public static void main(String[] args) {
int x[];
x = randFill(5);
for (int i = 0; i < 5; i++)
System.out.print(x[i] + " "); // prints 5 random numbers
System.out.println(); // such as 93 73 12 69 40
}
}
```
Answer:
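For self-checking, a sketch of one possible solution within the 6-line limit (not an official answer key); it returns a new array of length `n`, matching the usage in the example:
```java
public static int[] randFill(int n) {
    int[] a = new int[n];
    for (int i = 0; i < n; i++)
        a[i] = 10 + (int) (Math.random() * 90);  // uniform in 10..99 inclusive
    return a;
}
```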
6. Write a function called `evenUp` that returns the result of increasing the first even digit in a positive integer parameter by 1. (Your solution should use no more than 10 lines of code. Your function can return any convenient value of your choice if the parameter is not positive.)
For example, a program that uses the function `evenUp` follows.
```java
public class P5 {
public static void main(String[] args) {
System.out.println(evenUp(1232)); // prints 1332 only the first even 2 changes
System.out.println(evenUp(1332)); // prints 1333
System.out.println(evenUp(1333)); // prints 1333 no even digit to change
System.out.println(evenUp(22)); // prints 32
System.out.println(evenUp(2)); // prints 3
}
}
```
Answer:
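For self-checking, a sketch of one possible solution (not an official answer key); it scans the digits from the most significant side so that the first even digit from the left is the one increased:
```java
public static int evenUp(int n) {
    if (n <= 0) return -1;                 // any convenient value for bad input
    int place = 1;
    while (n / place >= 10) place *= 10;   // place value of the leading digit
    for (; place > 0; place /= 10) {
        if ((n / place) % 10 % 2 == 0)     // first even digit found
            return n + place;              // increase that digit by 1
    }
    return n;                              // no even digit to change
}
```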
7. Write a complete Java program that is to be used for a psychology study into random number choices by a human volunteer. Your program is to operate as follows. (Programs that correctly carry out some of the tasks will receive partial credit. Your program should not be more than 30 lines long.)
Ask the user (the volunteer) to repeatedly type 2 digit numbers onto the screen.
Read the user input and discard any number that is less than 10 or greater than 100, but keep track of numbers within this range.
When the user enters the number 100 the experiment ends and the program prints a summary with the following form (with one line of output for each of the numbers from 10 to 99 that was used):
User chose 10 for 2.1% of choices.
User chose 12 for 0.7% of choices.
User chose 16 for ...
Answer:
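For self-checking, a sketch of one possible program (not an official answer key); the class name `Choices` is our own choice:
```java
import java.util.Scanner;

public class Choices {
    public static void main(String[] args) {
        int[] count = new int[100];   // count[i] tallies how often i was chosen
        int total = 0;
        Scanner in = new Scanner(System.in);
        System.out.println("Please type 2 digit numbers (100 to stop):");
        while (true) {
            int n = in.nextInt();
            if (n == 100) break;             // 100 ends the experiment
            if (n < 10 || n > 99) continue;  // discard out-of-range numbers
            count[n]++;
            total++;
        }
        for (int i = 10; i < 100; i++)
            if (count[i] > 0)
                System.out.printf("User chose %d for %.1f%% of choices.%n",
                        i, 100.0 * count[i] / total);
    }
}
```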
8. Read the following code and answer the questions starting at part (a).
```java
public class Q1 {
public static void main(String[] args) {
print(args);
}
static void print(String[] greetings) {
for (int i = 0; i < args.length; i++)
System.out.println(greetings[i] + "");
return true;
}
}
```
(a) Does this program compile? If yes, proceed to part (c).
Answer:
(b) If your answer to part (a) is no, correct and rewrite the line(s) that has(have) error(s).
Answer:
(c) What is the name of the file that must contain class Q1 so that you can compile it using the command line in the terminal?
**Answer:**
(d) What command would you type on the terminal to run the program in class Q1 so as to produce output: “Welcome to cs212!”?
**Answer:**
9. Write a function called `positiveRows` that returns the number of rows of a 2-dimensional array that have a positive sum. The array contains doubles.
For example, a program that uses the function `positiveRows` follows. The output is 1 because only the first row has a positive sum.
```java
public class P7 {
public static void main(String args[]) {
double x[][] = { { 1.0, 6.0, 3.0, -5.0, -4.0, 2.0 },
{ -4.0, -4.0, -4.0, 4.0, 2.0, 4.0 } };
System.out.println(positiveRows(x)); // prints 1
}
}
```
**Answer:**
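For self-checking, a sketch of one possible solution (not an official answer key):
```java
public static int positiveRows(double[][] x) {
    int count = 0;
    for (double[] row : x) {
        double sum = 0.0;
        for (double v : row) sum += v;
        if (sum > 0) count++;   // count rows with a positive sum
    }
    return count;
}
```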
10. Read the following code. Then, in each of the labeled lines, find all syntax errors and state the nature of the errors in the space provided for the answer. If there is no error, write "No error" in the answer space.
```java
class A {
private int count;
}
class C {
private String text = "";
public void C(String s) {
String text = s;
}
}
class D {
    int x = 3;
    boolean b;
    public D(boolean b) {
        this.b = b;
    }
    public int doIt() {
        return this; //----------------------------------(a)
    }
}
public class Y {
    public static void main(String[] args) {
        A a = new A();
        a.count = 10; //-----------------------------------(b)
        C c = new C("ABC"); //--------------------------(c)
        System.out.println(c.text.toLowerCase()); //--------------------------(d)
        D d = new D(true);
        System.out.println(d.doIt());
    }
}
```
(a) Answer:
(b) Answer:
(c) Answer:
(d) Answer:
11. In this problem you are to write complete code for a class called `TrafficSignal`. The class represents a traffic signal that cycles through four states: _green_, _yellow_, _red_ and _leftTurn_, in this order. This means that _leftTurn_ comes just after _red_ and before _green_.
The class should have only one attribute, _light_, which is of type int. The value of this attribute determines the state of the traffic signal. For example, if the attribute _light_ has value 0, the traffic signal shows _green_.
The class should have one (default) constructor and only three additional methods. The default color of the light should be _red_. The three methods are _currentColor()_, which returns the integer corresponding to the current state, and _changeLight()_, which changes the light from the current state to the next state. Finally, write a _toString()_ method which returns the state of the traffic signal as _green_, _yellow_, etc.
**Answer:** Write your class below.
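For self-checking, a sketch of one possible solution (not an official answer key), encoding the states as 0 = green, 1 = yellow, 2 = red, 3 = leftTurn:
```java
public class TrafficSignal {
    private int light;                    // 0=green, 1=yellow, 2=red, 3=leftTurn

    public TrafficSignal() { light = 2; } // default color is red

    public int currentColor() { return light; }

    public void changeLight() { light = (light + 1) % 4; } // cycle green -> yellow -> red -> leftTurn -> green

    public String toString() {
        String[] names = {"green", "yellow", "red", "leftTurn"};
        return names[light];
    }
}
```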
12. Predict the output generated at the marked `println` lines in the following program. The program makes use of the class `Employee` that is also given. Please enter your answers in the space provided below the code.
```java
public class Employee {
private String name;
private double salary;
public Employee(String name, double salary) {
this.name = name;
this.salary = salary;
}
public String getName() {
return name;
}
public double getSalary() {
return salary;
}
public void raiseSalary(double percent) {
double raise = salary * percent/100;
salary += raise;
}
}
public class Pr3 {
public static void main(String[] args) {
double percent = 10;
System.out.println("percent = " + percent); //---------------(a)
doubleValue(percent);
System.out.println("percent = " + percent); //---------------(b)
Employee john = new Employee("John", 75000);
System.out.println("Salary: $" + john.getSalary()); //----------(c)
doubleSalary(john);
System.out.println("Salary: $" + john.getSalary()); //----------(d)
Employee a = new Employee("Emily", 90000);
Employee b = new Employee("Henry", 100000);
System.out.println("a's Name: " + a.getName()); //----------(e)
System.out.println("b's name: " + b.getName()); //----------(f)
swap(a, b);
System.out.println("a's Name: " + a.getName()); //----------(g)
System.out.println("b's name: " + b.getName()); //----------(h)
}
public static void doubleValue(double x) {
x = 2 * x;
System.out.println("End of method: x = " + x); //---------(i)
}
public static void doubleSalary(Employee y) {
y.raiseSalary(200);
System.out.println("End of method: Salary : " + y.getSalary()); //----(j)
}
public static void swap(Employee x, Employee y) {
Employee temp = x;
x = y;
y = temp;
System.out.println("End of method: x is " + x.getName()); //----------(k)
System.out.println("End of method: y is " + y.getName()); //----------(l)
    }
}
```
(a) Answer(a):
(b) Answer(b):
13. (a) Retail products are identified by their **Universal Product Codes (UPCs)**. The most common form of a UPC code has 12 decimal digits: the first digit $x_1$ identifies the product category, the next five digits $x_2, \ldots, x_6$ identify the manufacturer, the following five digits $x_7, \ldots, x_{11}$ identify the particular product, and the last digit $x_{12}$ is a check digit that is determined by the following relation (the standard UPC-A check relation):
$$3(x_1 + x_3 + x_5 + x_7 + x_9 + x_{11}) + (x_2 + x_4 + x_6 + x_8 + x_{10}) + x_{12} \equiv 0 \pmod{10}$$
Write a method called checkDigit() that takes a String parameter whose 11 characters are the first 11 digits of a UPC code. The method should return an int that gives the value of the corresponding check digit.
public static int checkDigit(String str) {
Answer:
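For self-checking, a sketch of one possible solution (not an official answer key), assuming the standard UPC-A check relation given above:
```java
public static int checkDigit(String str) {
    int sum = 0;
    for (int i = 0; i < 11; i++) {
        int d = str.charAt(i) - '0';
        sum += (i % 2 == 0) ? 3 * d : d;   // x1, x3, ... carry weight 3
    }
    return (10 - sum % 10) % 10;           // smallest x12 making the total a multiple of 10
}
```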
(b) Write a fast recursive method for evaluating \(a^n\) when \(n \geq 0\) is a non-negative integer and \(a\) is a positive integer greater than 1. [Hint: use the equalities \(a^{2n} = (a^n)^2\) and \(a^{2n+1} = a \times a^{2n}\).]
public static int fastPower(int a, int n) {
Answer:
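For self-checking, a sketch of one possible solution (not an official answer key), using the hinted equalities to halve the exponent at every step:
```java
public static int fastPower(int a, int n) {
    if (n == 0) return 1;
    int half = fastPower(a, n / 2);    // a^(n/2)
    int sq = half * half;              // a^(2*(n/2))
    return (n % 2 == 0) ? sq : a * sq; // extra factor a for odd n
}
```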
14. This question is related to Strings and Files. It is a two part question that requires code for two methods.
(a) Write a private method called removeDuplicates that accepts a String parameter and returns a new String in which consecutive occurrences of a character are reduced to a single occurrence. If the method removeDuplicates("football") is called, it returns "fotbal" after replacing the consecutive o's by a single o and the consecutive l's by a single l. Similarly, if we pass "cookbook!!!" as a parameter to the method, it returns "cokbok!".
Your Code:
Answer:
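For self-checking, a sketch of one possible solution (not an official answer key):
```java
private static String removeDuplicates(String s) {
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
        // keep a character only if it differs from its predecessor
        if (i == 0 || s.charAt(i) != s.charAt(i - 1))
            out.append(s.charAt(i));
    }
    return out.toString();
}
```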
(b) Write a private method called `removeDuplicatesFromFile` that accepts a String parameter that names a text file. It reads the file line by line and removes from each word of the file any consecutive occurrences of a character. It prints the entire modified text with words separated by spaces, and punctuation marks.
For example, let us suppose that the file has the following text:
The sky is a deep red!!!
Calvin’s skin is a pale green!!
Yellow flowers are now blue!!!!
Theee eeend.
When we call `removeDuplicatesFromFile("myFile.txt")` it should print:
The sky is a dep red!
Calvin’s skin is a pale gren!
Yelow flowers are now blue!
The end.
You can see that the duplicate letters and punctuation marks are removed. You can assume that the file exists and the words are separated by single spaces.
Your Code:
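For self-checking, a sketch of one possible solution (not an official answer key), reusing `removeDuplicates` from part (a):
```java
private static void removeDuplicatesFromFile(String fileName)
        throws java.io.FileNotFoundException {
    java.util.Scanner in = new java.util.Scanner(new java.io.File(fileName));
    while (in.hasNextLine()) {
        String[] words = in.nextLine().split(" ");
        StringBuilder line = new StringBuilder();
        for (String w : words) {
            if (line.length() > 0) line.append(' ');
            line.append(removeDuplicates(w));   // from part (a)
        }
        System.out.println(line);
    }
}
```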
15. A school has 100 lockers and 100 students. All lockers are closed on the first day of school. As the students enter, the first student, S1, opens every locker. Then the second student, S2, begins with the second locker, denoted by L2, and closes every other locker. Student S3 begins with the third locker L3, and changes every third locker (closes it if it is open, and opens it if it is closed). Student S4 begins with the locker L4 and changes every fourth locker. Student S5 begins with L5 and changes every fifth locker, and so on, until S100 changes L100. After all students have passed through the building and changed the lockers, some lockers will be open and others will be closed. For example, L1 was only changed by S1 and will be open. However, L3 was changed by S1 and S3 so will be closed. Write a program to find and display all lockers that will be open. Your output should list the open lockers as L followed by a number and separate different lockers by exactly one space. There should be no more than 15 actual instruction lines excluding all braces and empty lines. Name your class `Lockers`. It should have just a static main method.
(Hint: Use an array of 100 boolean elements, each of which indicates whether a locker is open(`true`) or closed(`false`). Initially all lockers are closed.)
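For self-checking, a sketch of one possible solution (not an official answer key); the open lockers turn out to be exactly the perfect squares:
```java
public class Lockers {
    public static void main(String[] args) {
        boolean[] open = new boolean[101];   // open[i]: locker Li is open
        for (int s = 1; s <= 100; s++)       // student s changes every s-th locker
            for (int l = s; l <= 100; l += s)
                open[l] = !open[l];
        StringBuilder out = new StringBuilder();
        for (int l = 1; l <= 100; l++)
            if (open[l]) out.append("L").append(l).append(" ");
        System.out.println(out.toString().trim());
    }
}
```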
16. In both parts read the code and predict the output.
```java
public static double mystery(double d[], int l, int r) {
    if (l == r) return d[l];
    int m = l + (r-l)/2;
    double x = mystery(d, l, m);
    double y = mystery(d, m+1, r);
    if (x > y) return x;
    else return y;
}
```
(a) What does the above method return?
```java
public static int mystery1(int b, int n) {
    if (n == 0) return 1;
    int y = mystery1(b, n/2);
    int z = y * y;
    if (n%2 == 0)
        return z;
    else
        return z*b;
}
```
(b) What will be printed by the following instruction?
```java
System.out.println(mystery1(2,10));
```
17. (a): Write a class Distance with the following specification: There are two instance variables: feet and inches stored as integers; a private constant called INCHES_IN_A_FOOT with value 12; a no argument constructor to initialize a Distance as zero feet, zero inches.
Answer:
(b): Write a two argument constructor for Distance that accepts two positive integers for feet and inches. Write an access (get) method for feet and a mutator (set) method for inches.
Answer:
(c): Write a public method `add` that accepts another `Distance` object as a parameter and adds the distance of the other object to the current distance. For example:
```java
x1 = new Distance(5,8); x2 = new Distance(2,7);
x1.add(x2); // x1 becomes 8 feet, 3 inches
```
Answer:
(d): Write a public method `metricDistance` that returns (as a `double`) the number of meters in the distance represented by the object. This method should contain a private variable representing the number of feet in a meter (1 foot = 0.3048 meters).
Answer:
(e): Modify the constructor you wrote for part (b) so that if the number of inches is greater than 11, it prints **Illegal number of inches** to the user’s screen, and sets `inches` to 11.
Answer:
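For self-checking, a compact sketch covering parts (a)–(e) (one possible solution, not an official answer key):
```java
public class Distance {
    private int feet;                                   // (a)
    private int inches;
    private static final int INCHES_IN_A_FOOT = 12;

    public Distance() { feet = 0; inches = 0; }         // (a) zero feet, zero inches

    public Distance(int feet, int inches) {             // (b), modified as in (e)
        this.feet = feet;
        if (inches > 11) {
            System.out.println("Illegal number of inches");
            inches = 11;
        }
        this.inches = inches;
    }

    public int getFeet() { return feet; }                           // (b) accessor
    public void setInches(int inches) { this.inches = inches; }     // (b) mutator

    public void add(Distance other) {                   // (c)
        int totalInches = inches + other.inches;
        feet += other.feet + totalInches / INCHES_IN_A_FOOT;
        inches = totalInches % INCHES_IN_A_FOOT;
    }

    public double metricDistance() {                    // (d)
        final double METERS_IN_A_FOOT = 0.3048;         // 1 foot = 0.3048 meters
        return (feet + (double) inches / INCHES_IN_A_FOOT) * METERS_IN_A_FOOT;
    }
}
```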
18. Each of the following 3 pieces of code might contain an error. For each part, if there is an error correct it by rewriting just one line of the code in the space provided. If the code has no error, leave the answer space blank.
(a): A two-dimensional array of `int` has been initialized and `total` has been set to 0. This code should calculate `total` as the sum.
```java
for (int row = 0; row < numbers.length; row++) {
for (int col = 0; col < numbers.length; col++)
total += numbers[row][col]; }
```
Answer:
(b):
The following is the `add` method in a class called `Fraction`. Read the code and answer the question.
```java
public void add(Fraction frac) {
int a, b, c, d;
a = this.getNumerator(); b = this.getDenominator();
c = frac.getNumerator(); d = frac.getDenominator();
setNumerator(a*d + c*b); setDenominator(b*d); }
```
Correct the following call to the add method.
```java
f3 = f1.add(f2);
```
Answer:
(c): Given the class definition,
```java
public class Pet {
    private String name;
    private String breed;
    private int age;
    public void setName(String n) { name = n; }
    public void setBreed(String b) { breed = b; }
}
```
Correct the following, if necessary
```java
public class PetDemo {
    public static void main(String[] args) {
        Pet p1 = null;
        p1.setName("Max");
        p1.setBreed("Dog");
    }
}
```
Answer:
19. If str is a String in Java the str.length() method gives its length and the str.charAt(i) method gives the character of the string at index i.
For example,
```java
String s1 = "Hello";
int len = s1.length();     // Assigns 5 to len because the length of s1 is 5.
char c = s1.charAt(len-1); // assigns 'o' to c: the character at index 4 = len - 1 is 'o'
```
Use these two string methods to write a complete Java program to find the number of uppercase letters in a sentence typed on the input screen. For example here is a sample run of the program.
Enter a sentence: Here Is A Long String.
It has 5 upper-case characters.
Use at most 12 lines of code.
Answer:
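For self-checking, a sketch of one possible program within the 12-line limit (not an official answer key); the class name `UpperCount` is our own choice:
```java
import java.util.Scanner;

public class UpperCount {
    public static void main(String[] args) {
        System.out.print("Enter a sentence: ");
        String s = new Scanner(System.in).nextLine();
        int count = 0;
        for (int i = 0; i < s.length(); i++)
            if (s.charAt(i) >= 'A' && s.charAt(i) <= 'Z') count++;
        System.out.println("It has " + count + " upper-case characters.");
    }
}
```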
20. What will be printed by this program?
```java
public class Question4 {
    static String s1, s2, s3, s4;
    public static void main(String[] args) {
        s1 = new String("Kona");
        s2 = new String("Ethiopian");
        s3 = new String("Blue Mountain");
        s4 = new String(s1);
        System.out.println("Match by ==: " + (s1 == s4));
        System.out.println("Match by equals(): " + s1.equals(s4));
        coffee(s1, s2);
        System.out.println(mixThem(s2, s3, s4));
    }
    private static void coffee(String a1, String a2) {
        if (a1.compareTo(a2) > 0)
            System.out.println(a2);
        else
            System.out.println(a1);
    }
    private static String mixThem(String b1, String b2, String b3) {
        String c1 = b1;
        b1 = b2;
        b1 = new String("Sulawesi");
        System.out.println(c1);
        System.out.println(b2);
        System.out.println();
        b2 = new String("b3");
        return b2;
    }
}
```
Answer:
21. Write static methods for the class P5 that carry out the indicated tasks in this program.
```java
import java.io.PrintWriter;

public class P5 {
    public static void main(String args[]) {
        int a[][] = {{1, 2, 3, 4}, {0, 1, 2, 3, 6}, {3, 5, 8, 1, 4, 7}};
        int b[] = {3, 6, 1, 4, 1, 8};
        int x = 1; int y = 2;
        // (a) Return true if at least one of x and y is positive. Here Yes is printed
        if (positive(x, y)) System.out.println("Yes");
        // (b) Return the sum of the third row. Here 28 is printed.
        System.out.println(rowSum(a));
        // (c) Return the smallest element. Here 1 is printed.
        System.out.println(smallest(b));
    }
}
```
Design and Implementation of a Coordination Model for Distributed Simulations
Rolf Hennicker, Matthias Ludwig
Institut für Informatik
Ludwig-Maximilians-Universität München
Oettingenstrasse 67
D-80538 München, Germany
Abstract: The coordination of time-dependent simulation models is an important problem in environmental systems engineering. We propose a design model based on a formal specification using the CSP-like language FSP of Magee and Kramer. The heart of our design model is a global timecontroller which coordinates distributed simulation models according to their local time scales. We show how a Java implementation can be systematically derived from the design model. The strong practical relevance of the approach is ensured by the fact that our strategy is used to produce the kernel of the integrative simulation system DANUBIA developed within the GLOWA-Danube project.
1 Introduction
In the last decade environmental systems engineering has become an important application area for information and software technology. Setting out from geographical information systems and GIS-based expert systems, one is nowadays particularly interested in the development of integrative systems with a multilateral view of the world, in order to better understand the mutual dependencies between environmental processes. Of particular importance are water-related processes which have an impact on the global change of the hydrological cycle, with various consequences concerning water availability, water quality and water risks like water pollution, water deficiency and floods.
There are several projects dealing with methods, techniques and tools to support sustainable water resource management, for instance within the European research activity EESD (Energy, Environment and Sustainable Development, cf. [3]) or within the German initiative GLOWA (Global Change in the Hydrological Cycle; cf. [4]). Within the GLOWA framework the project GLOWA-Danube [8] deals with the Upper Danube watershed as a representative area for mountain-foreland regions. The principal objective of GLOWA-Danube is to develop new techniques of coupled distributed simulations that allow the integration of simulation models from various disciplines in order to study water-related global change scenarios. For this purpose the integrative simulation system DANUBIA is developed, designed as an open, distributed network integrating the simulation models of all socio-economic and natural science disciplines taking part in GLOWA-Danube. Currently seventeen simulation models, either implemented directly in Java or surrounded by a Java wrapper, are integrated in the DANUBIA system, covering the disciplines of meteorology, hydrology, remote sensing, ground- and surface water research, glaciology, plant ecology, environmental psychology, environmental and agricultural economy, and tourism. As a result of coupled simulations, transdisciplinary effects of mutually dependent processes can be analysed and evaluated. For example, the agricultural economy model determines sowing and harvesting dates of different crops, while the plant ecology model simulates the growing of the plants, dependent on the precipitation provided by an atmosphere model. After the harvesting date the plant ecology model returns the crop yield to the agricultural economy model, which in turn is used for future calculations of the farmers.
*This work is partially supported by the GLOWA-Danube project (01LW0303A) sponsored by the German Federal Ministry of Education and Research.
An important characteristic of DANUBIA is the possibility to perform integrative simulations where the single simulation models run concurrently and exchange information at runtime. Since any integrative simulation models water-related processes over a specific period of time (usually several years) and since each simulation model has an individual local time step in which computations are periodically executed (ranging from hours, like in meteorology, to months, like in social sciences) the distributed models must be coordinated to work properly together. For this purpose it must be guaranteed that during the simulation run
- all values accessed through model interfaces are in a stable state (which corresponds to the usual read/write exclusion) and, moreover, that
- every simulation model is supplied with valid data, i.e. with data that fits to the local model time of the importing simulation model.
This informal description of the coordination problem provides only an intuitive idea of the requirements for integrative simulations. In [5] the authors have presented a formalization of the coordination problem and a formal design model which are both specified in terms of the language FSP (Finite State Processes) introduced by Magee and Kramer [9]. In particular, it has been shown by model checking techniques that the coordination requirements are satisfied by the design model. The basic idea of the design model is to introduce a global timecontroller which stores the current status of all simulation models participating in an integrative simulation in order to coordinate them appropriately. Technically, the timecontroller and the single simulation models are represented by FSP-processes and the simulation system itself is represented by the parallel composition of the single processes which are synchronized through appropriate shared actions.
In this work we demonstrate how the formal design model can be systematically transformed into a UML implementation model which can be directly realized by a Java program. For this purpose we first classify the given FSP-processes such that all processes
1 An alternative formalization on a meta level using purely mathematical notations is given in [2].
representing simulation models are considered as active objects and the timecontroller is considered as a reactive object. The shared actions used in the FSP-model for synchronization are translated into synchronized methods offered by the timecontroller which must be called by a simulation model whenever the model wants to get data from other models or to provide data for other models. Hence the timecontroller is realized by a Java monitor object and the single simulation models are realized by concurrently executing Java threads. In order to abstract from concrete computations that a simulation model performs and which are not relevant for the coordination we develop an appropriate system architecture such that the developer of a concrete DANUBIA simulation model must only extend an appropriate abstract model class provided by the DANUBIA framework.
The paper is organized as follows: We start, in section 2, with a brief introduction to FSP. In section 3, we describe the coordination problem and, in section 4, we provide a solution in terms of a formal design model represented by FSP processes. In section 5 the design model is transformed into a Java implementation that leads to a flexible system architecture for the kernel of the DANUBIA system.
2 A Brief Introduction to FSP
The language FSP has been introduced by Magee and Kramer as a formalism for modeling concurrent processes. An elaborated description of the syntax and semantics of FSP can be found in [9]. Syntactically FSP resembles CSP [6]. Essential constructs for building FSP processes are
<table>
<thead>
<tr>
<th>Syntax</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>STOP</td>
<td>process termination</td>
</tr>
<tr>
<td>$(a \rightarrow P)$</td>
<td>action prefix</td>
</tr>
<tr>
<td>$(a \rightarrow P \mid \text{when (cond) } b \rightarrow Q)$</td>
<td>choice (involving a guarded action)</td>
</tr>
<tr>
<td>$P + \{a_1, \ldots, a_n\}$</td>
<td>alphabet extension</td>
</tr>
<tr>
<td>$P/\{\text{new}_1/\text{old}_1, \ldots, \text{new}_n/\text{old}_n\}$</td>
<td>action relabeling</td>
</tr>
<tr>
<td>$(P \,\|\, Q)$</td>
<td>parallel composition</td>
</tr>
<tr>
<td>$P \setminus \{a_1, \ldots, a_n\}$</td>
<td>hiding</td>
</tr>
<tr>
<td>$P \mathbin{@} \{a_1, \ldots, a_n\}$</td>
<td>interface definition</td>
</tr>
</tbody>
</table>
Each process $P$ has an alphabet, denoted by $\alpha P$, consisting of those actions in which the process can be engaged. For instance, a process $(a \rightarrow P)$ obtained by action prefix first engages in the action $a$ and then behaves like the process $P$. If we build the parallel composition $(P \,\|\, Q)$ then actions that are shared by $P$ and $Q$ (i.e., belong to $\alpha P$ and $\alpha Q$) must be performed simultaneously. For the non-shared actions the interleaving semantics of parallel processes is used. The hiding operator allows one to hide certain actions, which are then invisible and represented by $\tau$. The construction of an interface is the complement of hiding.
Processes can be defined by process declarations of the form $P = E$ or, in the case of parallel processes, by $\|P = (E \,\|\, F)$. A (non-parallel) process declaration can be recursive and can involve local, indexed processes of the form
\[ P = Q[value], \quad Q[i : T] = E, \]
where \( T \) is a (finite) type and \( i \) is an index variable of type \( T \).
Often we will use indexed actions of the form \( a[i] \). A shorthand notation for a choice over a finite set of indexed actions is \( (a[T] \to P) \), which is equivalent to \( (a[x] \to P \mid \ldots \mid a[y] \to P) \), where range \( T = x..y \). We will also use labeled actions of the form \([label].a\) and choice over a finite set of labeled actions \([T].a\) with \( T \) as above. To obtain several copies of a process \( P \) we use process labeling \([label] : P\), which denotes a process that behaves like \( P \) with all actions labeled by \([label]\).
The semantics of a process is given by a finite labeled transition system (LTS) which can be pictorially represented by a directed graph whose nodes are the process states and whose edges are the state transitions labeled with actions. Since FSP is restricted to a finite number of states one can automatically check safety and progress properties of processes with the LTSA tool [7].
The following example shows an FSP model of a simple producer/consumer system with a bounded buffer. The bounded buffer is modeled by a parameterized FSP process \( BUFFER \) whose formal parameter \( MAX \) has the default value 1. The definition of the \( BUFFER \) process uses local, indexed processes \( BUF[i] \).
\begin{verbatim}
PRODUCER = (put -> PRODUCER).
CONSUMER = (get -> CONSUMER).

BUFFER(MAX=1) = BUF[0],
BUF[i:0..MAX] =
    (when (i<MAX) put -> BUF[i+1]
    |when (i>0)   get -> BUF[i-1]).

||SYS = (PRODUCER || CONSUMER || BUFFER(3)).
\end{verbatim}
Note that in the composite process \( SYS \) the formal parameter of the \( BUFFER \) process is instantiated by 3. Hence, the semantics of the composite process is given by the corresponding LTS.
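As a preview of the FSP-to-Java translation scheme developed in Section 5, the guarded BUFFER process above could be realized as a Java monitor in which the guarded put and get actions become synchronized methods. The following is a minimal sketch of our own (not code from the paper), following the wait/notifyAll pattern described later:
```java
// Minimal monitor sketch of the BUFFER(MAX) process (our illustration).
public class Buffer {
    private final int max;
    private int count = 0;   // corresponds to the index i of BUF[i]

    public Buffer(int max) { this.max = max; }

    public synchronized void put() throws InterruptedException {
        while (count == max) wait();   // guard: when (i < MAX)
        count++;
        notifyAll();
    }

    public synchronized void get() throws InterruptedException {
        while (count == 0) wait();     // guard: when (i > 0)
        count--;
        notifyAll();
    }
}
```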
3 The Coordination Problem of Integrative Simulations
A simulation model simulates a physical or social process for a finite period of time which we call simulation time. Typically a simulation model does not work on a continuous but on a discrete time scale. Thus the simulation period is represented by a strictly ordered, discrete set of points in time (denoted by natural numbers), at which data is provided by a simulation model. Each model has an individual time step (the distance between two subsequent simulation points) which depends on the simulated process. We assume that the time step of a model remains fixed during the whole simulation. Abstraction from the concrete simulated process of a simulation model leads us to the following common life cycle of each model within an integrative simulation:
After a simulation model has been started it first provides some initial data. Then it performs the following steps periodically until the end of the simulation is reached:
1. Get required data from other models.
2. Compute new data which are valid at the next simulation point.
3. Provide the newly computed data.
We can model this behavior by the following (generic) FSP process which is parameterized w.r.t. the individual time step of a simulation model. Note that in the process definition we provide a default time step (e.g. \texttt{step=1}) which is necessary according to the finite-states assumption of FSP. For the same reason it is necessary to model the simulation start and the simulation end by predefined constants.
\begin{verbatim}
const simStart = 0
const simEnd = 6
range Time = simStart..simEnd
MODEL(step=1) = (start -> INIT),
INIT = (enterProv[simStart] -> prov[simStart] ->
exitProv[simStart] -> M[simStart]),
M[t:Time] =
if (t+step <= simEnd)
then (enterGet[t] -> get[t] -> exitGet[t] ->
compute[t] -> enterProv[t+step] ->
prov[t+step] -> exitProv[t+step] -> M[t+step])
else STOP.
\end{verbatim}
In the above process description the (indexed) actions \texttt{prov[x]} represent providing of export data which are valid at time \texttt{x}, the actions \texttt{get[x]} represent getting of import data which are valid at time \texttt{x}, and the actions \texttt{compute[x]} represent the computation of new data based on import data which are valid at time \texttt{x}. The actions \texttt{get[x]} and \texttt{prov[x]} are enclosed by corresponding \texttt{enter} and \texttt{exit} actions which are needed for the coordination of concurrently running simulation models.
To represent a particular instance of a simulation model we have to provide a model name (model identifier) and the particular time step of the model under consideration. For specifying model identifiers we use process labels (cf. Section 2) and the time step of a model is determined by an actual parameter. For instance, the FSP processes \texttt{[1]:MODEL(2)} and \texttt{[2]:MODEL(3)} represent two simulation models, one with number 1 and time step 2 and the other one with number 2 and time step 3, respectively.
In an integrative simulation several simulation models are coupled in the sense that they mutually exchange data among each other at runtime. Data exchange is performed via a port which holds data that is valid at a particular point in time. Since in each time step a huge amount of data is produced, previous values will be overwritten in each computation cycle. Hence the different models must be coordinated such that the following conditions are satisfied:
(C1) Whenever simulation models exchange data, the values must be in a stable state.
(C2) Every simulation model must be supplied with valid data, i.e. with data that fit to the local model time of the importing simulation model.
Condition (C1) corresponds to the well-known read/write exclusion which in our context means that get and prov actions must be mutually exclusive. The critical condition is (C2) which becomes quite complex if we consider arbitrarily many simulation models. We can, however, simplify the problem, if we consider only two simulation models at a time and, moreover, if we consider each of the two models only under one particular role, either as a provider or as a user of information. In the following let U denote a user model and let P denote a provider model. From the user’s point of view we obtain the following requirement (R1), from the provider’s point of view we obtain requirement (R2).
(R1) U gets data expected to be valid at time $t_U$ only if the following holds:
The next data that P provides is valid at time $t_P$ with $t_U < t_P$.
(R2) P provides data valid at time $t_P$ only if the following holds:
The next data that U gets is expected to be valid at time $t_U$ with $t_U \geq t_P$.
(R1) ensures that a user does not get obsolete data, (R2) guarantees that a provider does not overwrite data which is still needed. An execution trace $w$ of an integrative simulation with an arbitrary number of simulation models $[1] : \text{MODEL}(\text{Step}_1),\ldots,[n] : \text{MODEL}(\text{Step}_n)$ is called legal, if $w$ meets the above requirements (R1) and (R2) for all pairwise combinations of models considered as users and as providers.
4 Formal Design Model for Integrative Simulations
In this section we present a solution of the coordination problem by providing a formal design model in terms of FSP processes. The basic idea is to introduce a global timecontroller that coordinates appropriately all simulation models participating in an integrative simulation. More precisely, we want to design an FSP process, called TIMECONTROLLER, such that for $n$ simulation models the composite process
$$\|\text{SYS} = ([1]:\text{MODEL}(\text{Step}_1) \,\|\, \ldots \,\|\, [n]:\text{MODEL}(\text{Step}_n) \,\|\, \text{TIMECONTROLLER}(\text{Step}_1,\ldots,\text{Step}_n)) \,/\, \{\text{start}/[\text{Models}].\text{start}\}$$
with range Models = 1..$n$, restricts the execution traces of the uncontrolled simulation models to the legal ones. The relabeling clause $/\{\text{start}/[\text{Models}].\text{start}\}$ ensures
that the processes synchronize on the start action. The composite process SYS is then considered as the design model for the simulation system. The (static) structure of SYS is represented by the diagram in figure 1 which indicates the required communication links.

Figure 1: Structure diagram of the design model
The communication links show that each simulation model \( m \) communicates with the timecontroller via the shared enter and exit actions in the (labeled) set \([m].\text{EnterExits}\), where
\[
\text{set EnterExits} = \{\{\text{enterGet}, \text{exitGet}, \text{enterProv}, \text{exitProv}\}[\text{Time}]\}.
\]
This means that the simulation models synchronize with the timecontroller on actions of the form \([m].\text{enterGet}[t]\) etc., where \( m \in \text{Models} \) and \( t \in \text{Time} \). It is then the task of the timecontroller to guarantee that synchronization can only occur if the constraints for integrative simulations described in Section 3 are satisfied. For this purpose the enter actions of the timecontroller are guarded by appropriate conditions which monitor the validity of the constraints. To express the necessary conditions in FSP the timecontroller is equipped with a local state (modeled by index variables) which records the execution status of all simulation models to be coordinated. More precisely, the timecontroller stores for each model the time for which it gets the next import data (represented by the index \( \text{nextGet} \)) and the time for which the model will provide the next export data (represented by the index \( \text{nextProv} \)).
The following timecontroller definition is formulated for the case of two simulation models where the time steps of the two models are given by parameters (with default value 1). It is obvious that this description provides a general pattern which can be easily applied to an arbitrary number of simulation models. For a timecontroller definition which is generic w.r.t. the number of simulation models one would need array types which are not available in FSP (but will, of course, be used in the Java implementation). Let us still remark that the guards of the enterGet actions are inferred from requirement (R1) by considering each model as a potential provider and the guards of the enterProv actions are inferred from requirement (R2) by considering each model as a potential user.
\begin{verbatim}
const nrModels = 2
range Models = 1..nrModels

TIMECONTROLLER(modelStep1=1,modelStep2=1) =
  (start -> TC[1][1][1][1]),
TC[nextGet1:Time][nextProv1:Time]
  [nextGet2:Time][nextProv2:Time] =
  (dummy[t:Time] ->
     //enterGet
     (when (t<nextProv1 & t<nextProv2)
        [Models].enterGet[t] ->
          TC[nextGet1][nextProv1][nextGet2][nextProv2]
     //exitGet
     |[1].exitGet[t] ->
          TC[t+modelStep1][nextProv1][nextGet2][nextProv2]
     |[2].exitGet[t] ->
          TC[nextGet1][nextProv1][t+modelStep2][nextProv2]
     //enterProv
     |when (nextGet1>=t & nextGet2>=t)
        [Models].enterProv[t] ->
          TC[nextGet1][nextProv1][nextGet2][nextProv2]
     //exitProv
     |[1].exitProv[t] ->
          if (t+modelStep1<=simEnd)
          then TC[nextGet1][t+modelStep1][nextGet2][nextProv2]
          else TC[1][1][1][1]
     |[2].exitProv[t] ->
          if (t+modelStep2<=simEnd)
          then TC[nextGet1][nextProv1][nextGet2][t+modelStep2]
          else TC[1][1][1][1]
     |dummy[t] ->
          TC[nextGet1][nextProv1][nextGet2][nextProv2])).
\end{verbatim}
Let us still mention that the actions dummy[t:Time] are only introduced for technical reasons, such that the index variable t is known where necessary. The dummy actions are finally made invisible by applying the hiding operator.
As an example, the design model of a distributed simulation with two simulation models with time steps 2 and 3 resp. is given by the following composite process SYS where the formal parameters of the single, parallel processes are instantiated appropriately.
\begin{verbatim}
const stepModel1 = 2
const stepModel2 = 3

||SYS = ([1]:MODEL(stepModel1)||[2]:MODEL(stepModel2)||
         TIMECONTROLLER(stepModel1,stepModel2))
        /{start/[Models].start}.
\end{verbatim}
We cannot visualize the labeled transition system of the process SYS because it has too many states and transitions. However, for an analysis of the behavior of the design model we can consider different views on the system which can be formally defined by means of the FSP interface operator. For instance, if we want to focus only on the get and prov actions executed by the system we can build the process SYS@{[Models].GetProvs} where the set GetProvs is defined as set GetProvs = {{get,prov}[Time]}. The corresponding LTS, after minimization w.r.t. invisible actions, is shown in the following diagram.
In [5] we have formalized the coordination requirements (R1) and (R2) by so-called property processes of FSP and we have shown by model checking with the LTSA tool that the timecontroller-based design model is a correct solution of the coordination problem.
5 Implementation
In this section we will show how to derive in a systematic way a Java implementation from the timecontroller-based design model. In principle, many steps of the translation procedure, which follows the pragmatic ideas of [9], could be automated if the FSP model were enhanced by additional information stating, for instance, which processes are considered as active or passive objects and which actions are considered as input or output actions.
5.1 Static Structure
Let us first consider the static structure of the implementation model which is given by the UML class diagram in figure 2. In the following we will explain how this diagram evolved from the design model.
At first the processes TIMECONTROLLER, MODEL and SYS give rise to the classes Timecontroller, Model and Sys respectively. As the process MODEL runs through the model's life cycle, the class Model is an active class and inherits from the Java class Thread. All actions of the MODEL process are considered as output actions which correspond to method calls on a Timecontroller object on the one hand (enterGet, exitGet, enterProv, exitProv) and on a concrete simulation model object on the other hand (get, compute, prov). These methods are extracted to the interface TimecontrollerInterface and the abstract class AbstractModel respectively. A Model instance communicates with the Timecontroller via synchronous method calls which are specified by the interface TimecontrollerInterface. Note that one Timecontroller instance controls arbitrarily many Model instances, which is denoted by the multiplicities 1 and * at the respective ends of the (directed) association to TimecontrollerInterface.
Let us briefly explain how a concrete simulation model is integrated into the DANUBIA system. DANUBIA provides a core system that is divided into a common runtime environment on the one hand, and a framework containing classes to be used by individual model developers on the other hand (the so-called developer framework). For example the implementation of a model’s life cycle in the class Model is part of the runtime environment, while the class AbstractModel is part of the developer framework (cf. figure 3).
To integrate a concrete model its developer has to build a subclass of AbstractModel and to implement the abstract methods get, compute, prov, the so-called plug points. E.g. the plug point compute allows one to realize the concrete computation algorithm of a model. Besides the plug points depicted in figure 2 there are some further plug points (e.g. for simulation recovery) which are out of the scope of this paper. Let us now consider the attributes of the classes Timecontroller and Model. The attribute nrModels of the class Timecontroller corresponds to the constant nrModels of the FSP process. The array modelStep contains the individual time steps of the models involved in the simulation. It complies with the parameters of the TIMECONTROLLER process. The attributes nextGet and nextProv correspond to the indices of the local processes TC. For the class Model the attributes result as follows. While step derives from the parameter of the process MODEL and simStart and simEnd are derived from global constants of the FSP model, the model identifier id represents the process label in the composite process SYS.
Data exchange between distributed simulation models is performed by corresponding import and export ports. These ports are realized by interfaces (Model1ToModel2, Model2ToModel1) on the one hand and by attributes (m1Info, m2Info) which store a spatial set of data (a so-called data table) on the other hand. Let us consider this from the viewpoint of the class Model1. The interface Model1ToModel2 acts as an export interface and Model2ToModel1 as an import interface for Model1. The attribute m1Info is the corresponding export table which is updated with new data each time the method prov is called and returned when the method getM1Info is invoked (by Model2). Vice versa the attribute m2Info is an import table that is updated within the get method by invoking getM2Info on the import interface. Network communication between distributed simulation models is realized by the Java Remote Method Invocation (RMI) technology and hidden from the model developer by appropriate classes in the runtime environment.
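As an illustration, the export/import ports just described might be declared roughly as follows. This is our own sketch: the interface names and the getM1Info/getM2Info methods follow the class diagram described above, while the DataTable type and the use of java.rmi.Remote are assumptions about the realization.
```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical stand-in for the "data table" (spatial set of data) of the paper.
class DataTable implements Serializable {}

// Export interface of Model1 (= import interface of Model2).
interface Model1ToModel2 extends Remote {
    DataTable getM1Info() throws RemoteException;
}

// Export interface of Model2 (= import interface of Model1).
interface Model2ToModel1 extends Remote {
    DataTable getM2Info() throws RemoteException;
}
```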
5.2 Dynamic Behavior
Let us now consider the dynamic behavior of the single entities. Since Model is an active class we have to implement its run method. For this purpose we translate the actions of the MODEL process into appropriate method calls. The while loop corresponds to the conditional recursive call of the local FSP process M. The actions of the local process INIT are extracted to a private method `init`. To make the code better readable we abstain from proper exception handling here.
```java
public void run() {
    init();
    int t = 0;
    while (t+step <= simEnd) {
        try {
            tc.enterGet(id, t);
        } catch (InterruptedException e) {}
        m.get(t);
        tc.exitGet(id, t);
        m.compute(t);
        try {
            tc.enterProv(id, t+step);
        } catch (InterruptedException e) {}
        m.prov(t+step);
        tc.exitProv(id, t+step);
        t = t+step;
    }
}

private void init() {
    try {
        tc.enterProv(id, 0);
    } catch (InterruptedException e) {}
    m.prov(0);  // provide initial data (cf. the INIT process)
    tc.exitProv(id, 0);
}
```
The TIMECONTROLLER process is implemented as a passive entity that reacts on method calls. The class `Timecontroller` acts as a monitor whose (relevant) state is determined by the values of the array attributes `nextGet` and `nextProv` which store the information about the current progress of the models involved in the simulation. These attributes are initialized within the method `start`, such that each array entry is set to `simStart`. The remaining fields of the timecontroller are initialized within the constructor of the class `Timecontroller`. All public methods of the class `Timecontroller` offered by its interface are synchronized methods which are implemented according to the behavior specified in the TIMECONTROLLER process. For the implementation of the guarded enter and exit actions we apply a transformation rule provided in [9] which translates an FSP expression of the form
```
when (cond) op -> MONITOR[nextState]
```
with some condition `cond` and action `op` into the following Java code:
```java
public synchronized void op()
throws InterruptedException {
while (!cond) wait();
... // monitor state = nextState
notifyAll(); }
```
The action op is implemented by the synchronized method op. If the condition cond is not satisfied, the calling thread will be blocked by wait. If the condition is satisfied the thread may enter the critical region and change the monitor state. After that it releases all waiting threads by notifyAll. Note that the while loop ensures that the condition is checked again after a thread has been released. We demonstrate the application of this rule with the enterGet action. The action enterProv is implemented analogously. The TIMECONTROLLER specification reads
```
... when (t<nextProv1 & t<nextProv2)
      [Models].enterGet[t] ->
        TC[nextGet1][nextProv1][nextGet2][nextProv2]
...
```
This results in the following implementation:
```java
public synchronized void enterGet(int id, int t) throws InterruptedException {
while (!checkProv(t)) wait();
}
```
Note that a call of notifyAll is not necessary here, since the monitor state is not changed in this method. Note also that the action labels in Models, which denote the model identifiers, and the index t, which denotes the model time, are translated into method parameters. For better readability, the implementation of the guard condition is factored out into the private auxiliary method checkProv.
```java
private boolean checkProv(int t) {
boolean b = true;
for (int i = 0; i < nrModels; i++) {
b = (b && (t < nextProv[i]));
}
return b;
}
```
In contrast to the enter actions, no guard is provided for the exit actions exitGet and exitProv, but the timecontroller changes its state. As an example, let us translate the exitGet action. Since FSP does not allow arrays, we had to specify the effect of the action for each model separately:
```
... [1].exitGet[t] ->
        TC[t+modelStep1][nextProv1][nextGet2][nextProv2]
  | [2].exitGet[t] ->
        TC[nextGet1][nextProv1][t+modelStep2][nextProv2]
...
```
By taking advantage of arrays, the two alternatives are subsumed by a single method:
```java
public synchronized void exitGet(int id, int t) {
nextGet[id-1]=nextGet[id-1]+modelStep[id-1];
notifyAll();
}
```
Note that we must subtract 1 from the array index to match the correct model identifier.
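Putting these pieces together, a skeleton of the `Timecontroller` class might look as follows. This is a sketch assembled from the description above; only the constructor signature is confirmed by the main method shown below, the remaining details are our assumptions.
```java
// Sketch of the Timecontroller monitor assembled from the description
// above; details beyond the constructor signature are assumptions.
public class Timecontroller {
    private final int nrModels;
    private final int[] modelStep;
    private final int simStart;
    private final int simEnd;
    private int[] nextGet;   // next time at which each model may import
    private int[] nextProv;  // next time at which each model may export

    public Timecontroller(int nrModels, int[] modelStep,
                          int simStart, int simEnd) {
        this.nrModels = nrModels;
        this.modelStep = modelStep;
        this.simStart = simStart;
        this.simEnd = simEnd;
    }

    // Corresponds to the synchronized start action: initially every
    // array entry is set to simStart.
    public synchronized void start() {
        nextGet = new int[nrModels];
        nextProv = new int[nrModels];
        for (int i = 0; i < nrModels; i++) {
            nextGet[i] = simStart;
            nextProv[i] = simStart;
        }
    }

    // enterGet, exitGet (shown above) and the analogous enterProv,
    // exitProv complete the monitor.
}
```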
Finally let us consider the class `Sys` which represents the composite process SYS.
```
const stepModel1 = 2
const stepModel2 = 3
||SYS = ([1]:MODEL(stepModel1) || [2]:MODEL(stepModel2) ||
         TIMECONTROLLER(stepModel1, stepModel2))
        /{start/[Models].start}.
```
Within its sole method `main(String[] args)` the global constants `simStart` and `simEnd` give rise to local variables with the same names. Furthermore, a local array variable `modelStep` is filled with the time steps of the participating simulation models, where the array indices correspond to the model identifiers. The essential task of the main method is to create and start a `Timecontroller` object and a number of `Model` objects with appropriate actual parameters. The call of the `start` method on each created object corresponds to the synchronization of the `start` actions in the FSP process, which is expressed by the relabelling clause `/{start/[Models].start}`.
```java
public static void main(String[] args) {
    int simStart = 0;
    int simEnd = 6;
    int nrModels = 2;
    // time steps of the models; array index i belongs to model i+1
    int[] modelStep = new int[] { 2, 3 };
    Timecontroller tc =
        new Timecontroller(nrModels, modelStep, simStart, simEnd);
    tc.start();
    new Model(1, modelStep[0], simStart, simEnd, tc, new Model1()).start();
    new Model(2, modelStep[1], simStart, simEnd, tc, new Model2()).start();
}
```
6 Conclusion
We have shown how to construct an implementation of a formal design model for time-dependent integrative simulations. For this purpose we have applied a general translation scheme which transforms simulation models represented by FSP processes into concurrently executing threads, and the global timecontroller process into a monitor object with appropriately synchronized methods. In order to hide the coordination problem from the developers of concrete simulation models, we have proposed a system architecture that allows a specific simulation model to be plugged in by implementing particular plug points provided by the DANUBIA developer framework.
The strategy proposed here to coordinate coupled simulations is different from the approach pursued in the OpenMI Standard [10] where single simulation models do not follow a global time control but act autonomously by requesting data from other models whenever needed. Then, if no data for the respective point in time is yet available, it will be estimated by interpolation or extrapolation.
Our approach can be applied to all kinds of systems where concurrently executing components must be coordinated in accordance with some discrete order. Within the GLOWA-Danube project the approach is of high practical relevance for the development of the DANUBIA system because integrative simulations are the heart of all current and future features of DANUBIA and hence the reliability of the whole system depends on the correctness of the coordination implementation.
Acknowledgement
We are grateful to anonymous referees of this paper for helpful comments and suggestions.
References
Abstract—Recent developments in the area of small and smart devices have led to their massive spread; in some cases they are replacing traditional computers for common activities such as web browsing. These devices are usually equipped with specialized hardware to sense and interact with the environment. In this context, self-organizing techniques have been widely used to provide adaptation capabilities at the low level, for example to optimize energy consumption or to provide fault-tolerance properties to the communication middleware. What we want to show in this work is how the same self-organization principles can be used at the user-experience level of context-aware applications. The approach we propose shows that self-organization can be used to model the introduction of gaming elements that motivate and simplify the use of context-aware applications, thus leading to higher-quality software. This work is finally validated using a case study and empirical evidence from existing popular context-aware applications.
Keywords—self-organization; software engineering; human computer interaction; gamification; context-aware systems.
I. INTRODUCTION
The current trend is for personal computers to be embedded in portable devices. An example is given by modern touch-screen tablets, which are perfect for web browsing, social networking, and reading and writing email. These devices are becoming increasingly interesting for other kinds of uses as well, since their touch-screen user interfaces are easier to use, they can be moved more easily, and they have a battery that usually lasts longer than that of a traditional laptop. Moreover, these devices are often equipped with many kinds of sensors and radios, such as a compass, a GPS receiver, and 3G/WiFi/Bluetooth data connectivity. This makes them able to sense the context in which they are being used and possibly to use such information, together with some applications, to provide value-added services.
Besides browsing, messaging, and socializing, one of the most important uses of these devices is playing games. Popular rankings show that users prefer games that are easy to learn, that keep them motivated, and that have some component of sociality, collaboration, and competition with other people. Moreover, the most successful ones contain reward mechanisms that hold the user's attention and stimulate the search for new achievements.
What motivates our work is to find a way to classify the mechanisms that are widely used in videogames so that they may be abstracted and reused in a wider class of software. Applying these guidelines at design time to regular software would be the basis for the gamification process of such software.
To find proper guidelines we used our background experience on self-organization [1], which is a recurrent technology in distributed context-aware systems [2]. Self-organization is a mechanism that spontaneously gives a high-level property to a system composed of smaller interconnected elements. In other words, a self-organizing system is able to converge to a desired state by relying only on the interaction of its elements, rather than on a special monitoring/actuating element or any other single decision point. Current mature uses of self-organizing techniques are usually relegated to the lower levels of applications [3]. For example, they can be used at the middleware level to give fault-tolerance to the application or to optimize some other non-functional requirement. The reason for adopting self-organizing models to support our gamification abstractions comes from an observation of recent developments in the world of videogames. Relevant examples are matchmaking and achievement systems, which are calibrated using feedback from the playing user and from any other user playing the game. Another example is the variability of the game which, thanks to internal perturbations and the past experiences of other players, is able to create new, often unpredictable challenges at the right time.
In this work we first classify recurrent ways in which self-organization is used in some existing classes of videogames, and then use our classification to derive abstractions/guidelines that software engineers can use to improve the usability of their context-aware software. The underlying assumption is that usability is a non-functional requirement that is as important as the other non-functional requirements addressed by the use of self-organization at the lower levels of an application. Throughout this work we stress the concept of context-awareness because self-organizing techniques rely not only on the local information of each element, but also on context information such as the location, the time, and any other information regarding the particular situation.
This paper is organized as follows. Section II describes more generally what self-organization is, what motivates its use, and in which contexts it is usually exploited; moreover, it clarifies the concept of gamification and discusses some relevant related work. Section III presents a classification of ways in which self-organization is used in modern games. In Section IV we propose our guidelines for instrumenting context-aware software to improve user-experience quality. Section V shows how the proposed abstractions can be applied in a case study, how related real-world uses of them can be found in some popular software, and what the social implications of these techniques are. Finally, Section VI concludes the paper and shows some future research directions.
II. BACKGROUND AND RELATED WORK
A. Background on Self-organization
Self-organization is a phenomenon defined as "An increase of order which is not imposed by an external agent (not excluding environmental interactions)" [1]. This means that if a system starts in a non-organized form (with respect to a generic property) and ends up in an organized form without any external intervention from the environment or any other entity external to the system, then it is a self-organizing system. The concept of self-organization has been studied since ancient times, because self-organizing systems may be observed everywhere. Examples of self-organization phenomena are the chemical reactions of compounds, the movement of stars and galaxies, the organization of cells in living beings, the organization of insect colonies, the organization of markets and human society in general, and so on. The challenge for scientists who study self-organization is to understand the basic rules that govern the movement of the system from its initial state to the organized state.
With respect to the above, recent research [4,5,6] aims to transfer to the area of Software Engineering a systematic way to exploit the most commonly observed characteristics of natural self-organizing systems, such as the simplicity of the operations of their elements [7], the resistance to a level of randomness in the execution of such operations, and the capability to reach complex global goals. Self-organization is usually engineered in software systems as a set of rules or policies which guide the evolution of the system through different internal states [8]. These rules and the specification of these states have in some cases been captured in common design patterns that simplify their adoption in existing as well as new software architectures [4,5,6].
Since purely architectural aspects are out of the scope of this work, which aims at giving a conceptual methodology to obtain some high-level property, from now on we focus on the self-organizing rules and, more generally, on the types of interactions among the system elements.
B. Related Work on Gamification
The term gamification is very recent. One of the first attempts to define it was made in [9]: "gamification is defined as the use of game design elements in non-game contexts". Even though the research area is new, a literature on this topic is already established. In [10] and [11], for example, the authors propose guidelines for the design and development of games. In these guidelines they consider many key aspects of building successful games, including the necessity of challenge, of interactions, the inclusion of creativity in the gaming experience, and finally a view of games as contexts for social play.
This preliminary literature led to the adoption, in user applications, of the same features that make a game successful, in order to obtain similarly engaging effects [12,13], thus introducing gamification patterns into the design phase of the development process.
In our work we give a contribution to the gamification area by observing and abstracting in existing games some features that are typical of the world of self-organizing systems, such as agonistic (collaborative) and antagonistic (competitive) behavior of self-organizing system elements.
Some existing works have already identified the ideas of collaboration [14] and competition [15,16] as individual factors for developing gamified applications. In these works the authors observe, for example, that elements of collaboration/competition in games can stimulate hobbyists to produce unpaid work. An example is the common creation of so-called "mods", which are modified versions of existing games developed by volunteers [17]. Moreover, other works analyze how the power of competition may be a huge source of motivation for players to achieve so-called pro status (professional players) [18,19]. The difference between existing works and ours is that we model gamification features as self-organizing phenomena. This allows gamification to be used in the same way self-organization is already used to achieve other kinds of high-level goals, such as runtime system adaptation to improve performance.
III. A CLASSIFICATION OF EXISTING USES OF SELF-ORGANIZATION IN MODERN GAMES
To motivate our classification of self-organization approaches in modern games, we have to go back to the time of the first video games. First-generation videogames usually have a fixed scenario, a fixed set of levels, and a final outcome that is "Game Over" (which may be the result of a victory or a defeat). The most primitive achievement mechanism was to define a metric expressing the performance of the player through a score and to save it under the player's name at the end of the game, positioning the player in a rank against previous players. This preliminary kind of competition motivated players to play more, and it was a good source of revenue for the owners of coin-operated arcade game machines. This very old model of gaming, which looks completely static and centralized, can still be modeled as a very simple self-organizing system. The system elements are the following: many players and a single arcade game machine. The players interact with each other indirectly, through the arcade game machine and its status changes. A change in the status of the arcade game machine happens when one player adds herself to the global rank of the machine. Such a change of status may motivate the player that has been pushed out of the rank to play again, just to regain her leading position.
In this situation the actual game is no longer the game itself, but ending the game in such a way as to change the state of the machine. This clearly shows that the capability of allowing many players to interact with each other through the machine gives the system an emergent property [20], which is to motivate playing more. Since we have an emergent property resulting from the uncoordinated interaction of different elements, we are in front of a self-organizing phenomenon. Of course, things have changed a lot since the heyday of arcade game machines, and the emergent properties that game designers want to achieve are more articulated than in the past. In the following subsections we show three classes of recurrent uses of self-organization in modern games, along with examples of use and the high-level properties they give to the system (a summary of the classification scheme can be seen in Figure 1).

A. Collaborative Self-organization
We define as collaborative self-organization all the social rules by which a player is motivated to contribute to a game to make it better for herself or for others. Successful collaborative self-organization can be seen in games in which new content can be created. In this case the creativity of a player is rewarded by seeing other players using and extending her creation. Since the newly created content usually remains the property of the company that owns the game and its infrastructure, this creates a virtuous circle that produces value for the company (thanks to the free new content available for other players) and for the players themselves, who experience a game with richer content over time. Examples of existing collaborative self-organization can be seen in games based upon user-generated content, such as Second Life, Little Big Planet, Minecraft, etc. These games come with a level editor that is often difficult to use, but the reward of making some of their creations available and increasing their notoriety is enough for many players to climb the steep learning curve of a difficult editor.
B. Competitive Self-organization
We define as competitive self-organization all the social rules that govern the competition of one player with another. This form of self-organization is more intuitive, since many events in the life of an individual are driven purely by competition. To explain why the concept of competition can be modeled as self-organization, consider that each player of a game usually has a goal that is decided by the game. If the game can use information about its players to adapt its goals (similarly to what happens in the ranking system of the old arcade game machine), then the players are motivated to play more to achieve the new goals. A reiteration of this process creates players that become progressively more skilled and that help keep the game popular, because they become well-trained, challenging opponents for potential new players. To videogame experts this type of self-organization may seem trivial, but it is fundamental for explaining how it can be used in other types of software.
Examples of the modern use of competitive self-organization can be seen in the achievement systems that are becoming popular in all kinds of games, such as Starcraft II and the mobile versions of classical board games such as UNO, Monopoly, and many others. These achievement systems reward users with new game features such as new personalization options, badges, scores, and many other things that are essentially free for the game provider but acquire high value from the point of view of the players.
C. Environmental Self-organization
This third kind of self-organization is defined as the capability of the game environment to spontaneously change its state to create new, unpredictable challenges for the players. The difference from the types of self-organization previously introduced is that this one is not derived from social interactions among the players (neither collaborative nor competitive), but is a spontaneous reaction of the game environment to what the players have done, their context, and the evolution over time. This is usually needed to give variability to the game, so that it is perceived as a different game with new challenges every time it is played. A characteristic of this form of self-organization is that the rules governing it are often non-deterministic, meaning that the final result, although within some predetermined boundaries, has a good degree of originality.
Examples of games in which environmental self-organization is used are Diablo and World of Warcraft. In these games there is a map that may change over time and enemies controlled by the game, which may be influenced by the behavior of other users and evolve autonomously.
IV. SELF-ORGANIZING MODELS FOR USER APPLICATIONS FROM A SOFTWARE ENGINEERING POINT OF VIEW
In this section we extend the self-organization features highlighted in the classification of Section III and apply them to the wider class of context-aware software. We emphasize the context-aware class of software because our approach does not limit its considerations to a generic software application, but also includes the possibility for the application to interact with the environment, which may contain additional instances of the application or hardware/software artifacts that, by their nature, may usefully contribute to the global property we want to give to the system.
In this context our goal is to use self-organizing models and the possibility to exploit context-awareness to improve the quality of the software in terms of the following non-functional requirements:
R1. User Motivation: motivate the user to learn basic and advanced features of an application to ease its learning curve.
R2. Context-aware Learning: adapt the learning curve of the application in such a way that the user learns first what is needed in the specific usage context.
R3. Reward System: make sure that, through a proper competition mechanism, a user feels she can always improve and become better in her experience with an application.
R4. Collaborative creativity: if applicable, the user should be able to express her creativity when dealing with a given application, with the purpose of making some of her ideas, contents, or suggestions available to other users.
All these requirements are non-functional because they aim at improving an application from a different angle with respect to its specific function.
Requirement R1 is fundamental for spreading the application, and most traditional HCI (Human Computer Interaction) efforts are spent on this point: if a piece of software is too difficult and requires extensive and boring training to become usable, it will lose market position to similar software that may be less powerful but has good usability.
Requirement R2 is something novel in software, since the context was not easily detectable in the past; today, thanks to automatic location detection and constant integration with corporate organization tools, personal calendars, social networks, etc., it is possible to take advantage of this information. For example, a user of a spreadsheet application in a secretariat may be more interested in having standard features (2D charts, simple aggregation functions, etc.) emphasized in the software interface, unlike the marketing/data-mining department of a company, where advanced statistics and cubic charts may be more appropriate for that context.
Requirement R3 is a core part of rewarding new achievements in the usage of the software and of making it feel like a game. Reward mechanisms, especially when there is the possibility to compare one's achievements with those of others, can be fundamental for increasing loyalty toward a piece of software and for stimulating the learning of additional features with the sole purpose of gaining some achievements.
Last, but not least, some classes of software, especially in the context-aware area, may increase their value if users have the chance to collaborate (requirement R4). Examples would be making additional document themes available in a word processor application, or providing reviews in a mobile navigation application, and so on.
In the next subsections we will show how a Software Engineer can apply the self-organizing models that are widely used in games and introduced in Section III to the development of context-aware applications. A compact conceptual map of the guidelines needed to move from Requirements to Design is depicted in Figure 2.

A. Design of Collaborative Self-organizing Features
As explained in the previous section, we want to introduce features of collaborative games into generic software. This will be used to address requirements R1 and R4. To support the design phase of these features we propose the following steps.
1) Identify reusable contents of the application.
2) Associate classes of contents to the context in which the application is used.
3) Instrument the application with a way to submit and search the content.
4) Add a rating mechanism to support content selection and to have feedback on the produced content.
In the first step, the application should be decomposed at design time into classes of reusable artifacts, which may include any kind of data (for example templates, clip art, themes, etc.) or application logic (for example macros, automation scripts, etc.). This step is needed to decide which parts of the application or its data can be collaboratively extended.
In the second step, valid only for context-aware applications, the possible application contexts must be enumerated and associated with the classes of artifacts. This step is needed to give the application the capability to adapt itself and show only the artifacts that are relevant for a specific context. For example, a video-player application for a mobile device may decide to use a software decoder that finds the right trade-off between battery consumption, current battery level, and current video quality; but the same application, when run in a context that is not mobile (i.e., the device is plugged in), may decide to completely ignore the energy/quality optimization artifacts.
In the third step the user, depending on her skills, can be motivated to share artifacts that she has created for herself, or even to produce artifacts for the community with the purpose of gaining some sort of gratification in terms of reputation.
Finally, the fourth step is needed to give the application the technological features required to make the collaboration activities of its users recognizable by other users. This may be more than a simple feedback mechanism: it can be something integrated into the social life of the users (for example, allowing a contribution to be shown on the user's social networks).
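As an informal illustration of these four steps, consider the following sketch. All class and method names here are our own invention, not part of any existing framework: reusable artifacts are tagged with the contexts in which they apply, can be submitted and searched, and carry ratings that drive content selection.
```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch of the four steps: artifacts (step 1) tagged with
// usage contexts (step 2), submitted and searched (step 3), and rated (step 4).
class Artifact {
    final String name;
    final Set<String> contexts;                 // contexts the artifact is relevant in
    final List<Integer> ratings = new ArrayList<>();

    Artifact(String name, Set<String> contexts) {
        this.name = name;
        this.contexts = contexts;
    }

    double averageRating() {
        return ratings.stream().mapToInt(Integer::intValue).average().orElse(0.0);
    }
}

class ArtifactRepository {
    private final List<Artifact> artifacts = new ArrayList<>();

    void submit(Artifact a) {                   // step 3: a user shares a creation
        artifacts.add(a);
    }

    // steps 2 and 4: offer only artifacts relevant to the current context,
    // best-rated first, so ratings drive content selection
    List<Artifact> search(String currentContext) {
        return artifacts.stream()
                .filter(a -> a.contexts.contains(currentContext))
                .sorted(Comparator.comparingDouble(Artifact::averageRating).reversed())
                .collect(Collectors.toList());
    }
}
```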
B. Design of Competitive Self-organizing Features
A possible way to contribute to the fulfillment of requirements R1 and R3 is to give the application characteristics of competitive games. This phase of the design is more delicate than the one explained in the previous subsection because, if the level of competition is too strong, it may cause a severe loss of motivation in users who are not able to compete with the others (further details on this point are given in Section V.B). In order to introduce a competitive game into a context-aware application we propose the following steps.
1) Identify one or more metrics that may be used to measure the level of experience of the user in an application.
2) Associate these metrics to the context in which the application is used.
3) Allow the application to share and compare the metrics of its users.
The purpose of the first step is to define a way to measure the skills of the user. This can be done in several ways that can easily be inspired by the world of video games. An example would be to mark the capability to use a particular feature of the application as an achievement (which may be converted into points).
The second step, valid for context-aware applications, would be to select or influence the metrics defined in the previous point, using context information. An example would be to give a different set of reward points when an application is used at work rather than at home; another example would be to consider the kind of device that is used to run the application, and so on.
The third step is fundamental to satisfy the competitive attitude of the application's users. Once a score target or an achievement is met, it is important to make other users aware of it. This way other users are motivated to reach the same targets and thus contribute to a wide adoption of the application.
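A minimal sketch of these three steps might look as follows. Names and point values are invented for illustration: achievements are recorded per user, weighted by the usage context, and exposed through a shared leaderboard.
```java
import java.util.*;

// Hypothetical sketch: achievements (step 1) weighted by the usage
// context (step 2) and published as a leaderboard (step 3).
class ExperienceTracker {
    private final Map<String, Integer> scores = new HashMap<>();
    private final Map<String, Integer> contextMultiplier =
            Map.of("work", 2, "home", 1);       // invented context weights

    // step 1: record that a user exercised some feature of the application
    void recordAchievement(String user, String context, int basePoints) {
        int points = basePoints * contextMultiplier.getOrDefault(context, 1);
        scores.merge(user, points, Integer::sum);
    }

    // step 3: a rank that other users can see and try to climb
    List<Map.Entry<String, Integer>> leaderboard() {
        List<Map.Entry<String, Integer>> rank = new ArrayList<>(scores.entrySet());
        rank.sort(Map.Entry.<String, Integer>comparingByValue().reversed());
        return rank;
    }
}
```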
C. Design of Environmental Self-organizing Features
In the context of application design, environmental self-organizing features are the ones that contribute to the satisfaction of requirements R1 and R2. The idea is that the application should have a way to analyze the activity of its users in addition to the collaboration/competition activities explained in the previous subsections. This knowledge can then be used to create a personalized behavior related to the particular characteristics of the user, her level of experience with the application, and the context in which it is used. This behavior can be deterministic or non-deterministic, the latter making it harder for the user to anticipate what is going to happen in the "gaming" features of the application. The important fact that highlights the self-organizing nature of these features is that the "game" should be able to self-calibrate also from the past experiences of other users. Examples would be advertising possible achievements, creating ways to gain more points than expected in certain situations by stimulating collaboration and competition, and any other environmental change that can influence the way in which the application game is perceived. To achieve this we propose the following steps.
1) Identify possible stimuli that may attract the curiosity of the user to new cooperation or competition activities.
2) Identify situations in which the user may be less motivated to use the application. This may include user activity and the context (in context-aware applications).
3) Design a policy that associates stimuli to situations.
Differently from the previous two self-organizing features, these three points may all be realized manually at design time or, with the proper training, at runtime.
In the first step the system already knows the possible ways to stimulate cooperation and competition, because these aspects were already identified in the other specific self-organizing features. However, it should be given a way to select, based on the previous experiences of other users, the right stimuli to draw the user into the "hidden" application game. An example would be to offer a multiplier factor on experience points as an incentive to increase user involvement.
In the second step the system should use its past and present information about the users and their context to identify problematic situations. Examples of problematic situations are when the user has just started to use the application, when a user enters a new context, or when a user has been playing the hidden collaborative and competitive game with very poor performance.
In the third step a policy creates a runtime relationship between what has been identified in the first and in the second step. The final purpose would be to give some opportunities to the users, such as the possibility to earn additional experience points or special achievements in the first week of use of the application or when exploiting features offered in a particular usage context.
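A sketch of such a policy, with invented situations and stimuli, could associate predicates over the user's state with motivating offers:
```java
import java.util.*;
import java.util.function.Predicate;

// Hypothetical sketch of step 3: a policy associating detected
// "problematic situations" (step 2) with motivating stimuli (step 1).
class UserState {
    final int daysSinceInstall;
    final int recentScore;
    final String context;

    UserState(int daysSinceInstall, int recentScore, String context) {
        this.daysSinceInstall = daysSinceInstall;
        this.recentScore = recentScore;
        this.context = context;
    }
}

class StimulusPolicy {
    private final Map<Predicate<UserState>, String> rules = new LinkedHashMap<>();

    StimulusPolicy() {
        // situation -> stimulus; thresholds and texts are invented
        rules.put(s -> s.daysSinceInstall < 7,
                  "double points during your first week");
        rules.put(s -> s.recentScore == 0,
                  "bonus achievement for trying one new feature");
    }

    Optional<String> stimulusFor(UserState s) {
        return rules.entrySet().stream()
                .filter(e -> e.getKey().test(s))
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
```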
V. CASE STUDY AND SOCIAL ASPECTS
In this section we propose a simple case study showing how to apply our guidelines, present some existing successful applications that exhibit some of the properties of our approach, and finally discuss some of the social implications that may arise when gamifying an application.
A. Case Study: Personal organizer
Assume we want to apply our guidelines to gamify a context-aware personal organizer. The functional requirements of the personal organizer include the possibility to schedule tasks, keep track of previous/future tasks, and modify how tasks are advertised based on the device used and the situation in which the personal organizer is being run (we assume the personal organizer is accessible through a mobile device, a personal computer, and the Internet using a web interface). We have chosen this case study because such an application is generally considered just another (boring) tool for supporting work and life activities, rather than something that can be addictive and fun like a game.
**Collaborative self-organization steps.** 1) Reusable contents: invitations to some events that may be interesting for somebody; 2) associate the invitations above to the location in which somebody is at the moment, or is planning to go (according to the future events already scheduled in the organizer); 3) add to the user the possibility to advertise new events, specifying specific tags to identify the people that may be interested, and to identify valid contexts for these events (i.e., time and location); 4) make it possible to see how many people have put the newly added event into their schedule.
**Competitive self-organization steps.** 1) The level of experience can be a set of achievements that include the following: number of events per day, events scheduled for the first time, events scheduled with a particular person; 2) give a different type of achievements when an event is done in a particular time of a day, or in the weekend, or during holiday, or in a particular location, etc.; 3) let the user be notified when a friend or colleague obtains an achievement and rank the user against them to see who is the more "organized" person.
**Environmental self-organization steps.** 1) Suggest to schedule events that have never been scheduled (for example a dinner outside), or advertise the use of new features that have never been used in the application; 2) make it possible for the application to analyze the evolution of the context of the user (i.e., its location changes and its vicinity with other friends that are using the same application) and propose to add to the schedule social activities that have been scheduled by her friends (such as a walk in a park); 3) offer opportunities to earn extra points when the suggestions proposed from step 1 or step 2 are followed in order to increase user involvement and competitive attitude.
In the case study above we have shown informally how to apply our steps for each kind of self-organizing feature. Of course the actual choices for each step are tightly coupled to the kind of application that needs to be gamified. More generally, the steps we propose should be carried out at the requirements level of a normal software engineering process, thus identifying use cases and scenarios in a more detailed and engineered way.
B. Existing Successful Experiences
The method we have discussed in this paper, although never before proposed as a research methodology, is not completely new in the real world. Recently there has been a proliferation of applications that already exhibit some of the features we have proposed. The most recent examples are two mobile context-aware applications: *Waze* [21] and *Foursquare* [22]. Waze (see Figure 3a) is a free mobile application to support personal navigation; it contains maps collaboratively added by its users and provides updated information about traffic jams and nearby points of interest. Besides its usefulness, this application managed to create its huge database using only the power of collaboration and competition among its users. Many users use Waze as a normal navigation aid, but many others use it as a game, trying to earn achievements for discovering new streets, street changes, points of interest, and so on. Foursquare (see Figure 3b) is another free mobile application for keeping track of the locations visited by users and their friends, and for finding new locations to visit. It uses a reward mechanism similar to Waze's, and its success is mostly due to the fact that most users treat the application as a game. Both Waze and Foursquare are examples in which the gaming features of the application support the self-organizing phenomena that motivate users to produce value for other users and for the application itself.
C. Social Aspects
We have seen that, with the use of gamification, it is possible in normal applications to create value for both the users and the application owners. However, there are some critical social points that need to be evaluated before deciding to gamify an application. People who are not competitive may perceive the gaming features negatively, especially if they lose the competition. For this reason the user should always have the freedom to disable them. As an alternative to letting users disable these features, it is easier for the application designer to find ways in which each user can feel like a winner from some point of view; this way everyone has a positive reward for continuing collaboration and competition, and thus for giving more value to their experience in using the application and learning new features.
VI. CONCLUSIONS AND FUTURE WORK
In this paper we have proposed a set of guidelines for designing context-aware user applications in such a way that non-functional requirements related to user involvement and motivation in the use of the application have a good chance of being satisfied.
Our methodology is inspired by two worlds: self-organizing systems and existing experience in videogame design. We have seen that many features of modern (and less modern) videogames that make them addictive and long-lasting can be abstracted using self-organizing models. These features include the possibility of giving users a good sense of cooperation or competition toward reaching a given goal. Moreover, the replayability of a game is supported by its capability to change over time based on the past performance of its users or other non-deterministic factors. What we have shown in this work is that the same features that make videogames a pleasant experience can also be added to normal applications, with the double advantage of speeding up the learning curve of the application, thanks to the satisfaction of the motivational requirements, and of giving users that sense of fun that is typical of games, even if the application itself is just (in the extreme case) a boring tool for a boring job.
In the future we plan to perform a more thorough analysis of which parts of our methodology can be automated. Moreover, quantitative information on the productivity increase of a gamified application should be defined and reported using a real-world experiment. Last but not least, the social aspects related to the introduction of delicate psychological mechanisms, such as competition in a work environment, may need further attention and should be analyzed in an interdisciplinary way.
ACKNOWLEDGMENT
This research has been partially funded by the European Commission, under project SMSCom IDEAS-ERC 227977.
REFERENCES
Numerical Python for Scalable Architectures
Mads Ruben Burgdorff Kristensen and Brian Vinter
eScience Centre, University of Copenhagen, Denmark
madsbk@diku.dk / vinter@diku.dk
Published in: PGAS '10 Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model, 2010. DOI: 10.1145/2020373.2020388. Document version: early version, also known as pre-print.
Abstract
In this paper, we introduce DistNumPy, a library for doing numerical computation in Python that targets scalable distributed memory architectures. DistNumPy extends the NumPy module [15], which is popular for scientific programming. Replacing NumPy with DistNumPy enables the user to write sequential Python programs that seamlessly utilize distributed memory architectures. This feature is obtained by introducing a new backend for NumPy arrays, which distributes data amongst the nodes in a distributed memory multiprocessor. All operations on this new array seek to utilize all available processors. The array itself is distributed between multiple processors in order to support larger arrays than a single node can hold in memory.
We run three experiments with sequential Python programs on an Ethernet-based cluster of SMP nodes with a total of 64 CPU cores. The results show 88% CPU utilization when running a Monte Carlo simulation, 63% CPU utilization on an N-body simulation, and a more modest 50% on a Jacobi solver. The primary limitation on CPU utilization is identified as SMP limitations rather than the distribution aspect. Based on the experiments we find that it is possible to obtain significant speedup from using our new array backend without changing the original Python code.
Keywords NumPy, Productivity, Parallel language
1. Introduction
In many scientific and engineering areas, there is a need to solve numerical problems. Researchers and engineers behind these applications often prefer a high level programming language to implement new algorithms. Of particular interest are languages that support a broad range of high-level operations directly on vectors and matrices. Also of interest is the possibility to get immediate feedback when experimenting with an application. The programming language Python combined with the numerical library NumPy[15] supports all these features and has become a popular numerical framework amongst researchers.
The idea in NumPy is to provide a numerical extension to the Python language. NumPy provides not only an API to standardized numerical solvers, but a possibility to develop new numerical solvers that are both implemented and efficiently executed in Python, much like the idea behind the MATLAB[8] framework. NumPy is mostly implemented in C and introduces a flexible N-dimensional array object that supports a broad range of numerical operations. The performance of NumPy is significantly increased when using array-operations instead of scalar-operations on this new array.
Parallel execution is supported by a limited set of NumPy functions, but only in a shared memory environment. However, many scientific computations are executed on large distributed memory machines because of the computation and memory requirements of the applications. In such cases, the communication between processors has to be implemented by the programmer explicitly. The result is a significant difference between the sequential program and the parallelized program. DistNumPy eliminates this difference by introducing a distributed version of the N-dimensional array object. All operations on such distributed arrays will utilize all available processors and the array itself is distributed between multiple processors, which makes it possible to expand the size of the array to the aggregated available memory.
1.1 Motivation
Solutions to numerical problems often consist of two implementations: a prototype and a final version. The algorithm is developed and implemented in a prototype, by which its correctness can be verified. Typically, many development iterations are required to obtain a correct prototype, so a high-productivity language is used for this purpose, most often MATLAB. However, once the algorithm is correct, the performance of the implementation becomes essential for doing research with it. This performance requirement presents a problem for the researcher, since highly optimized code requires a fairly low-level programming language such as C/C++ or Fortran. The final version will therefore typically be a reimplementation of the prototype, which involves both changing the programming language and parallelizing the implementation (Fig. 1a).
The overall target of DistNumPy is to meet both needs at once: the need for a high-productivity tool that allows researchers to move from idea to prototype in a short time, and the need for a high-performance solution that eliminates a costly and risky reimplementation (Fig. 1b). It should be possible to develop and implement an algorithm on a simple notebook and then effortlessly execute the implementation on a cluster of computers while utilizing all available CPUs.
1.2 Target architectures
NumPy supports a long range of architectures, from the widespread x86 to the specialized Blue Gene architecture. However, NumPy is incapable of utilizing distributed memory architectures like Blue Gene supercomputers or clusters of x86 machines. The target of DistNumPy is to close this gap and fully support and utilize distributed memory architectures.
1.3 Related work
Libraries and programming languages that support parallelization on distributed memory architectures are a well-known concept. The existing tools either seek to provide optimal performance in parallel applications or, like DistNumPy, seek to ease the task of writing parallel applications.
The library ScaLAPACK[2] is a parallel version of the linear algebra library LAPACK[1]. It introduces efficient parallel operations on distributed matrices and vectors. To use ScaLAPACK, an application must be programmed using MPI[7], and it is the responsibility of the programmer to ensure that the allocation of matrices and vectors complies with the distribution layout ScaLAPACK specifies.
Another library, Global Arrays[13], introduces a distributed data object (the global array), which makes the data distribution transparent to the user. It also supports efficient parallel operations and provides a higher level of abstraction than ScaLAPACK. However, the programmer must still explicitly coordinate the multiple processes involved in the computation and must specify which region of a global array is relevant for a given process.
Both ScaLAPACK and Global Arrays may be used from within Python and can even be used in combination with NumPy, but NumPy can then only be used locally and not with distributed operations. A more closely integrated Python project, IPython[16], supports parallelized NumPy operations. IPython introduces a distributed NumPy array much like the distributed array introduced in this paper. Still, the user application must use the MPI framework, and the user has to differentiate between the running MPI-processes.
Co-Array Fortran[14] is a small language extension of Fortran-95 for parallel processing on distributed memory machines. It introduces a Partitioned Global Address Space (PGAS) by extending Fortran arrays with a co-array dimension. Each process can access remote instances of an array by indexing into the co-array dimensions. A similar PGAS extension called Unified Parallel C (UPC)[3] extends the C language with a distributed array declaration. Both languages provide a high abstraction level, but users still program with the SPMD model in mind, writing code with the understanding that multiple instances of it will be executing cooperatively.
A higher level of abstraction is found in projects where the execution, seen from the perspective of the user, is represented as a sequential algorithm. The High Performance Fortran (HPF)[12] programming language provides such an abstraction level. However, HPF requires the user to specify parallelizable regions in the code and which data distribution scheme the runtime should use.
The Simple Parallel R INTerface (SPRINT)[9] is a parallel framework for the programming language R. The abstraction level in SPRINT is similar to DistNumPy in the sense that the distribution and parallelization are completely transparent to the user.
2. NumPy
Python has become a popular language for high performance computing even though the performance of Python programs is much lower than that of compiled languages. The growing popularity is due to Python being used as the coordinating language while the compute-intensive tasks are implemented in a high performance language.
NumPy[15] is a library for numerical operations in Python which is implemented in the C programming language. NumPy provides the programmer with an N-dimensional array object and a whole range of supported array operations. By using the array operations, NumPy takes advantage of the performance of C while retaining the high abstraction level of Python. However, this also means that no performance improvement is obtained otherwise; e.g., using a Python loop to traverse a NumPy array does not result in any performance gain.
2.1 Interfaces
The primary interface in NumPy is a Python interface, and it is possible to use NumPy exclusively from Python. NumPy also provides a C interface through which the same functionality as in the Python interface can be accessed. Additionally, the C interface allows programmers to access low level data structures like pointers to array data, and thereby provides the possibility to implement arbitrary array operations efficiently in C. The two interfaces may be used interchangeably throughout a Python program.
2.2 Universal functions
An important mechanism in NumPy is the concept of a Universal function. A universal function (ufunc) is a function that operates on all elements in an array independently. That is, a ufunc is a vectorized wrapper for a function that takes a fixed number of scalar inputs and produces a fixed number of scalar outputs. Using ufuncs can result in a significant performance boost compared to native Python because the computation loop is implemented in C.
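For illustration, a brief sketch of ufunc behavior in standard NumPy; nothing DistNumPy-specific is assumed:

```python
import numpy

a = numpy.array([1.0, 2.0, 3.0])
b = numpy.array([4.0, 5.0, 6.0])

# numpy.add is a ufunc: one scalar in from each input, one scalar out,
# applied independently to every element pair inside a C loop.
print(numpy.add(a, b))      # [5. 7. 9.]

# ufuncs also provide derived methods such as reduce:
print(numpy.add.reduce(a))  # 6.0, i.e. the sum of the elements
```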
2.2.1 Function broadcasting
To make ufuncs more flexible, it is possible to use arrays with different numbers of dimensions. To utilize this feature, the sizes of the dimensions must either be identical or have length one. When the ufunc is applied, all dimensions with a size of one will be broadcasted, in the NumPy terminology. That is, the array will be duplicated along the broadcasted dimension (Fig. 2).
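The example from Fig. 2 can be reproduced directly in standard NumPy; the element values are ours:

```python
import numpy

a = numpy.arange(6).reshape(3, 2)  # shape (3, 2): [[0,1],[2,3],[4,5]]
b = numpy.array([[10, 20]])        # shape (1, 2)

# The first dimension of b (length 1) is broadcasted to length 3,
# conceptually duplicating the row, before the element-wise addition.
print(numpy.add(a, b))
# [[10 21]
#  [12 23]
#  [14 25]]
```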
It is possible to implement many array operations efficiently in Python by combining NumPy's ufuncs with more traditional numerical functions like matrix multiplication, factorization, etc.
2.3 Basic Linear Algebra Subprograms
NumPy makes use of the numerical library Basic Linear Algebra Subprograms (BLAS)[11]. Highly optimized BLAS implementations exist for almost all HPC platforms, and NumPy exploits them when possible for vector-vector, matrix-vector and matrix-matrix operations.
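For illustration, the NumPy calls below are the ones typically forwarded to BLAS routines (GEMM, GEMV, DOT); whether BLAS is actually used depends on how NumPy was built:

```python
import numpy

A = numpy.random.rand(512, 512)
B = numpy.random.rand(512, 512)
x = numpy.random.rand(512)

C = numpy.dot(A, B)  # matrix-matrix
y = numpy.dot(A, x)  # matrix-vector
s = numpy.dot(x, x)  # vector-vector
```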
3. DistNumPy
DistNumPy is a new version of NumPy that parallelizes array operations in a manner completely transparent to the user – from the perspective of the user, the difference between NumPy and DistNumPy is minimal. DistNumPy can use multiple processors through the communication library Message Passing Interface (MPI)[7]. However, we have chosen not to follow the standard MPI approach in which the same user-program is executed on all MPI-processes. This is because the standard MPI approach requires the user to differentiate between the MPI-processes, e.g. sequential areas in the user-program must be guarded with a branch based on the MPI-rank of the process. In DistNumPy, MPI communication is fully transparent and the user needs no knowledge of MPI or any parallel programming model. However, the user is required to use the array operations in DistNumPy to obtain any kind of speedup. We think this is a reasonable requirement, since the same is required by NumPy.
The only difference between the APIs of NumPy and DistNumPy is in the array creation routines. DistNumPy allows distributed and non-distributed arrays to co-exist; the user must therefore specify, as an optional parameter, whether the array should be distributed. The following shows the only difference between the creation of a standard array and a distributed array:
```python
# Non-Distributed
A = numpy.array([1,2,3])
# Distributed
B = numpy.array([1,2,3], dist=True)
```
3.1 Interfaces
There are two programming interfaces in NumPy – one in Python and one in C. We aim to support the complete Python interface and a large subset of the C interface. However, the part of the C interface that involves direct access to low level data structures will not be supported, since it is not feasible to return a C pointer that represents the elements of a distributed array.
3.2 Data layout
Two-Dimensional Block Cyclic Distribution is a very popular distribution scheme; it is used in numerical libraries like ScaLAPACK[2] and LINPACK[5]. It supports matrices and vectors and has good load balance in numerical problems with a diagonal computation workflow, e.g. Gaussian elimination. The distribution scheme works by arranging all MPI-processes in a two dimensional grid and then distributing data-blocks in a round-robin fashion, either along one or both grid dimensions (Fig. 3); the result is a well-balanced distribution.
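As a sketch, the owner of an element under a two-dimensional block-cyclic distribution can be computed as follows; the function name, square blocks, and indexing convention are our illustrative choices, not DistNumPy code:

```python
def block_cyclic_owner(i, j, block, P, Q):
    """Return the (row, col) position in a P x Q process grid of the
    process that owns element (i, j), with square blocks of size
    block x block distributed round-robin along both grid dimensions."""
    return ((i // block) % P, (j // block) % Q)

# Element (5, 9) with 2x2 blocks on a 2x3 process grid:
print(block_cyclic_owner(5, 9, 2, 2, 3))  # (0, 1)
```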
Figure 2. Universal function broadcasting. The ufunc addition is applied on a 3x2 array and a 1x2 array. The first dimension of the 1x2 array is broadcasted to the size of the first dimension of the 3x2 array. The result is a 3x2 array in which the two arrays are added together in an element-by-element fashion.
Figure 3. The Two-Dimensional Block Cyclic Distribution of a matrix on a 2 x 3 grid of processors.
NumPy is not limited to matrices and vectors; it supports arrays with an arbitrary number of dimensions. DistNumPy therefore uses a more generalized N-Dimensional Block Cyclic Distribution, inspired by High Performance Fortran[12], which supports an arbitrary number of dimensions. Instead of using a fixed process grid, we have a process grid for every number of dimensions. This works well when operating on arrays with the same number of dimensions but causes problems otherwise. For instance, in a matrix-vector multiplication the two arrays are distributed on different process grids and may therefore require more communication. ScaLAPACK solves the problem by distributing vectors on two-dimensional process grids instead of one-dimensional process grids, but this results in vector operations that cannot utilize all available processors. An alternative solution is to redistribute the data between a series of identically leveled BLAS operations using a fast runtime redistribution algorithm, as demonstrated in [18].
3.3 Operation dispatching
The MPI-process hierarchy in DistNumPy has one MPI-process (the master) placed above the others (the slaves). All MPI-processes run the Python interpreter, but only the master executes the user-program; the slaves block at the `import numpy` statement.
The following describes the flow of the dispatching:
1. The master is the dispatcher and will, when the user applies a Python command to a distributed array, compose a message with meta-data describing the command.
2. The message is then broadcasted from the master to the slaves with a blocking MPI-broadcast. It is important to note that the message only contains meta-data and not any actual array data.
3. After the broadcast, all MPI-processes will apply the command on the sub-array they own and exchange array elements as required (Point-to-Point communication).
4. When the command is completed, the slaves wait for the next command from the master, and the master returns to the user's Python program. The master may return even though some slaves are still working on the command; synchronization is therefore required before the next command broadcast.
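The following is a minimal sketch of this dispatch pattern using mpi4py, run with e.g. `mpiexec -n 4 python script.py`. The message format is invented for illustration; DistNumPy implements the real mechanism in C:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def apply_command(msg):
    # Placeholder: operate on the locally owned sub-array described by msg.
    pass

if rank == 0:                    # master: executes the user program
    for msg in [{"op": "UFUNC", "name": "add"}, {"op": "SHUTDOWN"}]:
        comm.bcast(msg, root=0)  # meta-data only, no array data
        if msg["op"] != "SHUTDOWN":
            apply_command(msg)   # the master also owns a sub-array
else:                            # slaves: block waiting for commands
    while True:
        msg = comm.bcast(None, root=0)
        if msg["op"] == "SHUTDOWN":
            break
        apply_command(msg)
```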
3.4 Views
In NumPy an array does not necessarily represent a complete contiguous block of memory. An array is allowed to represent a subpart of another array, i.e. it is possible to have a hierarchy of arrays where only one array represents a complete contiguous block of memory and the other arrays represent subparts of that memory.
Inspired by NumPy, DistNumPy implements an array hierarchy where distributed arrays are represented by the following two data structures.
- **Array-base** is the base of an array and has direct access to the content of the array in main memory. An array-base is created, with all related meta-data, when the user allocates a new distributed array, but the user never accesses the array directly through the array-base. The array-base always describes the whole array, and its meta-data, such as array size and data type, are constant.
- **Array-view** is a view of an array-base. The view can represent the whole array-base or only a sub-part of it, even a non-contiguous sub-part. An array-view contains its own meta-data that describe which part of the array-base is visible, and it can add non-existing 1-length dimensions to the array-base. The array-view is manipulated directly by the user, and from the user's perspective the array-view is the array.
Array-views are not allowed to refer to each other, which means that the hierarchy is flat with only two levels: array-base below array-view. However, multiple array-views are allowed to refer to the same array-base. This hierarchy is illustrated in Figure 4.
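The NumPy behavior that inspired this design can be seen directly in standard NumPy; the variable names below mirror the two roles:

```python
import numpy

base = numpy.zeros((3, 4))  # owns its memory (the array-base role)
view = base[1:, ::2]        # a non-contiguous sub-part (the array-view role)

print(view.base is base)    # True: the view refers back to its base
view[0, 0] = 99.0           # writing through the view...
print(base[1, 0])           # ...modifies the underlying memory: 99.0
```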
3.5 Optimization hierarchy
It is a significant performance challenge to support array-views that are not aligned with the distribution block size, i.e. array-views that have a starting offset not aligned with the distribution block size or that represent a non-contiguous sub-part of the array-base. The difficulty lies in handling data blocks that are located on multiple MPI-processes and are not aligned to each other. Such problems can be handled by partitioning data blocks into sub-blocks that are both aligned and located on a single MPI-process. However, in this work we do not focus on problems that involve non-aligned array-views; we simply handle them by communicating and computing each array element individually.
In general, we introduce a hierarchy of implementations where each implementation is optimized for specific operation scenarios. When an operation is applied, a lookup in the hierarchy determines the best suited implementation for that particular operation. All operations have their own hierarchy, some with more levels than others, but at the bottom of the hierarchy every operation has an implementation that can handle any scenario, simply by handling each array element individually.
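Conceptually, the lookup can be pictured as follows; this is a toy sketch, and the predicates and names are ours:

```python
def aligned(views):
    # Placeholder predicate: True if all views are complete and aligned.
    return all(v.get("complete", False) for v in views)

# Ordered from most to least specialized; the last entry handles anything
# by communicating and computing one element at a time.
hierarchy = [
    (aligned, "fast block-wise implementation"),
    (lambda views: True, "element-by-element fallback"),
]

def dispatch(views):
    for applicable, impl in hierarchy:
        if applicable(views):
            return impl

print(dispatch([{"complete": True}]))   # fast block-wise implementation
print(dispatch([{"complete": False}]))  # element-by-element fallback
```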
3.6 Parallel BLAS
As previously mentioned, NumPy supports BLAS operations on vectors and matrices. DistNumPy therefore implements a parallel version of BLAS, inspired by PBLAS from the ScaLAPACK library. Since DistNumPy uses the same data layout as ScaLAPACK, it would be straightforward to use PBLAS for all parallel BLAS operations. However, to simplify the installation and maintenance of DistNumPy, we have chosen to implement our own parallel version of BLAS. We use SUMMA[6] for matrix multiplication, which enables us to use the already available BLAS library locally on the MPI-processes. SUMMA is only applicable to complete array-views; we therefore use a straightforward implementation that computes one element at a time if partial array-views are involved in the computation.
3.7 Universal function
In DistNumPy, the ufunc implementation distinguishes between three different scenarios.
1. In the simplest scenario there is a perfect match between all elements in the array-views, and applying a ufunc does not require any communication between MPI-processes. This scenario is applicable when the ufunc is applied to complete array-views with identical shapes.
2. In the second scenario the array-views must represent a contiguous part of the underlying array-base. The computation is parallelized according to the data distribution of the output array, and data blocks from the input arrays are fetched when needed. We use non-blocking one-sided communication (MPI_Get) when fetching data blocks, which makes it possible to compute one block while fetching the next (double buffering).
3. The final scenario uses no simplifications and works with any kind of array-view. It also uses non-blocking one-sided communication, but only one element at a time.
4. Examples
To evaluate DistNumPy we have implemented three Python programs that all make use of NumPy’s vector-operations (ufunc). They are all optimized for a sequential execution on a single CPU and the only program change we make, when going from the original NumPy to our DistNumPy, is the array creation argument dist.
A walkthrough of a Monte Carlo simulation is presented as an example of how DistNumPy handles Python executions.
4.1 Monte Carlo simulation
We have implemented an efficient Monte Carlo Pi simulation using NumPy’s ufunc. The implementation is a translation of the Monte Carlo simulation included in the benchmark suite SciMark 2.0[17],
```python
from numpy import *
(x, y) = (empty([S], dist=True),
          empty([S], dist=True))
(x, y) = (ufunc_random(x), ufunc_random(y))
(x, y) = (square(x), square(y))
z = (x + y) < 1
print add.reduce(z) * 4.0 / S #The result
```
Figure 5. Computing Pi using Monte Carlo simulation. S is the number of samples used. We have defined a new ufunc (ufunc_random) to make sure that we use an identical random number generator in all benchmarks. The ufunc uses `rand()/(double)RAND_MAX` from the ANSI C standard library (stdlib.h) to generate numbers.
which is written in Java. It is very simple and uses two vectors with length equal to the number of samples used in the calculation. Because of the memory requirements, this drastically reduces the maximum number of samples. Combining multiple simulations would allow more samples, but we will only use one simulation. The implementation is included in its full length (Fig. 5), and the following is a walkthrough of a simulation (the leading numbers refer to line numbers in Fig. 5):
1: All MPI-processes interpret the import statement and initiate DistNumPy. Besides calling MPI_Init(), the initialization is identical to the original NumPy, but instead of returning from the import statement, the slaves (MPI-processes with rank greater than zero) listen for a command message from the master (the MPI-process with rank zero).
2-3: The master sends two CREATE ARRAY messages to all slaves. The two messages contain an array shape and a unique identifier (UID), which in this case identifies \( x \) and \( y \), respectively. All MPI-processes allocate memory for the arrays and store the array information.
4: The master sends two UFUNC messages to all slaves. Each message contains a UID and the function name ufunc_random. All MPI-processes apply the function to the array with the specified UID. A pointer to the function is found by calling PyObject_GetAttrString with the function name; it is thereby possible to support all ufuncs from NumPy.
5: Again the master sends two UFUNC messages to all slaves but this time with function name square.
6: The master sends a UFUNC message with function name add, followed by a UFUNC message with function name less; the scalar 1 is included in the latter message.
7: The master sends a UFUNC.REDUCE message with function name add. The result is a scalar, which is not distributed, so the master alone computes the remainder of the computation and prints the result. When the master is done, a SHUTDOWN message is sent to the slaves, and the slaves call exit(0).
4.2 Jacobi method
The Jacobi method is an algorithm for determining the solutions of a system of linear equations. It is an iterative method that uses a splitting scheme to approximate the result. Our implementation applies ufunc operations in a while-loop until it converges. Most of the implementation is included here (Fig. 6).
Figure 6. Iterative Jacobi solver for matrix \( A \) with solution vector \( B \); both are distributed arrays. The import statement and the creation of \( A \) and \( B \) are not included here. tol is the maximum tolerated value of the diagonal element with the highest value (dmax).
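The body of Fig. 6 did not survive extraction. The following is our reconstruction of a ufunc-based Jacobi iteration consistent with the description above, not the original code; in DistNumPy, A and B would simply be created with dist=True:

```python
from numpy import *

def jacobi(A, B, tol=0.005):
    """Iteratively approximate x such that A x = B.
    Assumes A is diagonally dominant so the iteration converges."""
    d = diagonal(A).copy()   # diagonal of A
    x = zeros(B.shape)       # initial guess
    dmax = tol + 1.0
    while dmax > tol:
        xnew = x + (B - dot(A, x)) / d            # Jacobi update via ufuncs
        dmax = maximum.reduce(absolute(xnew - x))  # largest change this sweep
        x = xnew
    return x
```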
Scalability when using more than 16 CPU-cores is very poor. The problem is memory bandwidth – since we use multiple CPU-cores per node when using more than 8 CPU-cores, the aggregated memory bandwidth of the Core 2 cluster only increases up to 8 CPU-cores. The Nehalem cluster is a bit better because it has two memory buses per node, but using more than 16 CPU-cores will not increase the aggregated memory bandwidth.
5.2.1 Profiling of the Jacobi implementation
To investigate the memory bandwidth limitation observed in the Jacobi execution, we have profiled the execution by measuring the time spent on computation and communication (Fig. 9). As expected, the results show that the percentage of time spent on communication increases as the number of CPU-cores increases. Furthermore, a noteworthy observation is the almost identical communication overhead at eight and sixteen CPU-cores. This is because half of the communication is performed through shared memory at sixteen CPU-cores, which means that the communication, just like the computation, is bound by the limited memory bandwidth.
5.3 Newtonian N-body simulation
The result of the speedup benchmark is illustrated in Figure 10. Compared to the Jacobi method, we see a similar speedup and CPU utilization. This is expected because the dominating operations are also simple ufuncs. Even though there are some matrix multiplications, which scale very well, they are not enough to significantly boost the overall scalability.
5.4 Alternative programming language
DistNumPy introduces a performance overhead compared to a lower-level programming language such as C/C++ or Fortran. To investigate this overhead, we have implemented the Jacobi benchmark in C. The implementation uses the same sequential algorithm as the NumPy and DistNumPy implementations.
Executions on both architectures show that DistNumPy and NumPy are roughly 50% slower than the C implementation when executing the Jacobi method on one CPU-core. In rough runtime numbers: 21 seconds for C, 31 seconds for NumPy and 32 seconds for DistNumPy.
Obviously, highly hand-optimized implementations have a clear performance advantage over DistNumPy. For instance, using a highly optimized implementation in C, [10] demonstrates extreme scalability for a similar Jacobi computation – an execution on 16384 CPU-cores achieves a CPU utilization of 70% on a Blue Gene/P architecture.
5.5 Summary
The benchmarks clearly show that DistNumPy has both good performance and scalability when execution is not bound by memory bandwidth, which is evident from the CPU utilization when only one CPU-core per node is used. As expected, the scalability of the Monte Carlo simulation is better than that of the Jacobi and N-body computations because of the reduced communication requirements and the more CPU-intensive ufunc operation.
The scalability of the Jacobi and N-body computations is drastically reduced when using multiple CPU-cores per node. The problem is the low complexity of the ufunc operations: as opposed to the Monte Carlo simulation, which makes use of a complex ufunc, the Jacobi and N-body computations only use simple ufuncs, e.g. add and multiply.
As expected, the performance of the C implementation is better than that of the DistNumPy implementation. However, by utilizing two CPU-cores it is possible to outperform the C implementation in the case of the Jacobi method. This is not possible in the case of the Monte Carlo simulation, where the algorithm does not favor vectorization.
6. Future work
In its current state, DistNumPy does not implement the NumPy interface completely. Many specialized operations, like the Fast Fourier transform or LU factorization, are not implemented, but it is our intention to implement the complete Python interface and most of the C interface.
Figure 8. Speedup of the Jacobi solver. In graph (a) the two architectures use a minimum number of CPU-cores per node. Added in graph (b) is the result of using multiple CPU-cores on a single node (SMP).
Figure 9. Profiling of the Jacobi experiment. The two figures illustrate the relationship between communication and computation when running on the Core 2 Quad architecture (a) and the Nehalem architecture (b). The areas with the check pattern represent MPI communication and the clean areas represent computation. Note that these figures relate directly to the Jacobi speedup graph (Fig. 8a).
The performance of NumPy programs that make use of array-views that are not aligned with the distribution block size is very poor, because each array element is handled individually. This is not a problem for a whole range of NumPy programs, including the experiments presented in this paper, since they do not use non-aligned array-views. However, some operations, such as stencil operations, require non-aligned array-views; an important piece of future work is therefore to support all array-views with similar efficiency.
Other important future work includes performance and scalability improvements. As shown by the benchmarks, applications that are dominated by non-complex ufuncs easily become memory bound. One solution is to merge calls to ufuncs that operate on common arrays into one joint operation, thereby making the joint operation more CPU-intensive. If enough ufuncs can be merged together, the application may become CPU bound rather than memory bound.
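To illustrate the memory-traffic argument with standard NumPy (the fusion itself is the future work being proposed, not something NumPy does here):

```python
import numpy

x = numpy.random.rand(10_000_000)
y = numpy.random.rand(10_000_000)

# Unfused: two separate ufunc sweeps over memory, with a temporary array.
t = x + y    # sweep 1: writes a 10M-element temporary
z = t < 1.0  # sweep 2: reads the temporary again

# A fused add-and-compare would read x and y once and write z once,
# roughly halving the memory traffic. Libraries such as numexpr
# perform exactly this kind of ufunc fusion:
#   import numexpr
#   z = numexpr.evaluate("(x + y) < 1.0")
```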
7. Conclusions
In this work we have shown that it is possible to implement a parallelized version of NumPy[15] that seamlessly utilizes distributed memory architectures. The only API difference between NumPy and our parallelized version, DistNumPy, is an extra optional parameter in the array creation routines.
Performance measurements of three Python programs that make use of DistNumPy show very good performance and scalability. A CPU utilization of 88% is achieved on a 64 CPU-core Nehalem cluster running a CPU-intensive Monte Carlo simulation. A more memory-intensive N-body simulation achieves a CPU utilization of 91% on 16 CPU-cores but only 63% on 64 CPU-cores. Similarly, a Jacobi solver achieves a CPU utilization of 85% on 16 CPU-cores and 50% on 64 CPU-cores.
To obtain good performance with NumPy, the user is required to make use of array operations rather than Python loops. DistNumPy takes advantage of this fact and parallelizes array operations. Thus most efficient NumPy applications should be able to benefit from DistNumPy with the distribution parameter as the only change.
We conclude that it is possible to obtain significant speedup with DistNumPy. However, further work is needed if shared memory machines are to be fully utilized as nodes in a scalable architecture.
Parallelism and Performance in the OpenVMS TCP/IP Kernel
Robert Rappaport, HP Software Engineer
Yanick Pouffary, HP Software Technical Director
Steve Lieman, HP Software Engineer
Mary J. Marotta, HP Information Developer
Introduction
In October 2003, TCP/IP Services for OpenVMS introduced into OpenVMS production environments a radically modified and improved Scalable Kernel. The Scalable Kernel enables parallelism in TCP/IP by taking advantage of available CPU capacity in a multiCPU configuration. It allows network performance to scale almost linearly as CPUs are added to the configuration. The Scalable Kernel was designed to enhance network application performance without jeopardizing the integrity of the basic UNIX code.
The SMP Challenge
The TCP/IP Kernel maintains a large in-memory database. Access to this database is synchronized by the use of several spin locks, all of which are associated with interrupt priority level (IPL) 8. On single CPU systems, only one active IPL 8 thread executes at a time. Therefore, there is no possibility for contention for the TCP/IP-specific spin locks on single CPU systems. On multiCPU systems, however, the potential for such contention increases as the number of CPUs in the configuration increases. The Scalable Kernel eliminates this contention.
When customers add CPUs to symmetric multiprocessing (SMP) systems, they expect the extra processing power to boost network performance, but the classic TCP/IP kernel does not take advantage of the extra processing power of the added CPUs. The number of users may actually increase, but almost all network I/O interactions are handled while holding the TCP/IP global spin lock (I/O lock 8). Contention for this lock makes it difficult to increase network throughput under these circumstances.
The Architecture of the TCP/IP Kernel
The OpenVMS TCP/IP kernel, the heart of the OpenVMS TCP/IP architecture, was ported from BSD UNIX. It was intentionally designed to operate like the UNIX TCP/IP kernel and to interoperate with the OpenVMS operating system with a minimum of programming changes.
Figure 1 The Architecture of the TCP/IP Kernel
As illustrated in Figure 1, the OpenVMS TCP/IP kernel consists of two distinct parts:
- The **TCP/IP kernel** – code that is ported from UNIX.
- The **cradle** – OpenVMS code that supports and nurtures the UNIX code.
The cradle surrounds the UNIX code, creating an environment in which only a small percentage of the UNIX code has to be made aware that it is not operating in a UNIX system. The cradle provides transparent UNIX-like interfaces that serve the ported UNIX code in three general areas:
- User-level I/O requests are preprocessed in the cradle and fed into the UNIX code at the appropriate point.
- I/O terminations from the UNIX code are intercepted by the cradle transparently, as are all UNIX interactions with the LAN drivers.
- All interactions from the UNIX code with the OpenVMS operating system, such as the dynamic allocation and deallocation of memory, are handled transparently.
**TCP/IP Thread Contexts**
Code executing in the TCP/IP kernel is either in **process context** or **kernel context** mode. A thread running in process context mode has access to the user address space (for example, the user’s buffers). Threads running in process context are almost always executing code in the cradle. Threads running in kernel context run at IPL 8 holding the TCP/IP global spin lock.
In the classic TCP/IP kernel, when a thread changes mode to kernel context, it has to wait for the TCP/IP global spin lock. In the Scalable Kernel environment, kernel context threads are created as IPL fork threads, which then acquire the TCP/IP global spin lock. Kernel context threads are almost always executing in the UNIX-ported portion of the code.
Figure 2 illustrates how process context threads running in the classic kernel environment contend for I/O lock 8 (the TCP/IP global spin lock of that environment) in order to change their mode to kernel. Once a thread acquires this spin lock, it can proceed to carry out its TCP/IP kernel work while all other process context threads contending for this spin lock must wait, spinning and wasting valuable CPU cycles.
Figure 2 Process Context in the Traditional Kernel
As the number of network links per system gets larger and as the links get faster and faster, the potential number of network users requesting service can expand rapidly. As the demand increases, more and more process context threads end up spinning in a loop, waiting for service while other threads are processed.
Introducing Parallelism into the TCP/IP Kernel
To address the problem of wasted CPU cycles spent spinning and to allow more work to get done on SMP systems, parallelism was introduced into the TCP/IP kernel code. Analysis of the classic kernel showed that only a small part of the processing of network operations had to be done while the TCP/IP internal database was locked. It was possible to change the order of the code flow in the two most frequently invoked network operations (read and write) so that:
- The kernel context portion of each read or write could run in an IPL 8 fork thread.
- The completion of read and write operations would not depend on these IPL 8 fork threads being completed.
In other words, read and write operations could be designed so that the process context portion of the work queues an IPL 8 fork thread to complete the kernel context portion of the work. Once this fork thread is queued, the user I/O request can then be completed. This is how the TCP/IP Scalable Kernel works.
In the Scalable Kernel, read and write operations are processed at IPL 2 in process context and queue IPL 8 fork threads to complete the kernel context work. Because each read or write
operation does not have to wait until the fork thread has completed, the operation can be marked as completed (I/O Posted) immediately after queueing the fork thread.
The IPL 8 fork threads that operate in kernel context need to acquire the TCP/IP global spin lock in order to access the in-memory database. Allowing these fork threads to run on any available CPU would lead to contention for the spin lock. Therefore, all of these TCP/IP kernel context fork threads are directed to a queue that is processed on one specific CPU in the configuration (the designated TCP/IP CPU) in first-come, first-served order. Because all the threads in the system that need to acquire the TCP/IP global spin lock run on one single CPU, contention for this spin lock is eliminated. And because this spin lock is no longer I/O lock 8, no other OpenVMS code will attempt to use it.
The Scalable Kernel introduces a new mechanism for code to request the creation of a kernel context thread. The mechanism involves allocating a newly-defined data structure (the TCPIP_KRP), filling in the TCPIP_KRP, and then queuing this data structure to a global work queue. If the queue is empty at the time, an IPL 8 fork thread is created, which will run on the designated TCP/IP CPU and which will process every TCPIP_KRP in the queue.
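The pattern can be modeled in a few lines of Python; this is a toy model only, the real mechanism is OpenVMS kernel code, and names such as krp_queue are ours:

```python
import queue
import threading

krp_queue = queue.Queue()  # global work queue of kernel request packets

def tcpip_cpu_worker():
    """Models the designated TCP/IP CPU: the sole consumer of KRPs,
    so access to the shared database never needs a contended lock."""
    while True:
        krp = krp_queue.get()
        if krp is None:            # models the SHUTDOWN case
            break
        krp["action"](*krp["args"])  # process one KRP at a time

def request_kernel_work(action, *args):
    """Models a process-context thread: package the work in a KRP,
    queue it, and return immediately (the I/O is 'posted' here)."""
    krp_queue.put({"action": action, "args": args})

worker = threading.Thread(target=tcpip_cpu_worker)
worker.start()
request_kernel_work(print, "append MBUF chain to socket 42")
krp_queue.put(None)
worker.join()
```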
Tracking a Write Operation
The object of any TCP/IP write operation is to take data from a user buffer and place this data into a socket. The operation is performed in two distinct steps:
1. Copy the user data from the user buffer into a system buffer (MBUF) or a chain of system buffers (does not require holding the TCP/IP global spin lock)
2. Append this chain of system buffers into the socket (requires holding the TCP/IP global spin lock)
In the Scalable Kernel, the processing of a write operation is straightforward. One or more MBUFs are allocated to accommodate the user data, and then the data is copied from user space into the new MBUF chain. A TCPIP_KRP is allocated and initialized so that it requests that this new MBUF chain be appended to the data in a particular socket. The initialization of the TCPIP_KRP includes passing the address of the MBUF chain, the address of the socket, and so forth. After the TCPIP_KRP is initialized, it is queued to the global work queue and the write request is completed.
At the same time that the write operation is being processed on one CPU, another write operation can be processed on another CPU in the system. Presumably, the other write operation is writing to a different socket. Because neither of these operations needs to acquire the global spin lock to complete, both operations run to completion without any interference. Similarly, they can run in parallel with ongoing read operations as well.
The power of the design of the Scalable Kernel becomes obvious. In a large multiCPU system, user programs running in parallel on the various CPUs of the system constantly call TCP/IP operations such as read and write. They run to completion, in parallel, without interfering with each other. Each of these requests leaves behind a TCPIP_KRP that is queued to be processed on the designated TCP/IP CPU; the processing of these TCPIP_KRP requests also runs in parallel with all the other operations.
Each process context operation leads to an associated kernel context operation. The amount of work entailed in each kernel context operation adds to the load of work on the designated TCP/IP CPU, but as long as this designated CPU is not completely saturated with work, the Scalable Kernel is able to scale close to linearly as more CPUs are added to the configuration.
The Scalable Kernel takes advantage of multiple CPUs by separating the user processes from the kernel process. Rather than blocking the CPU, it queues new user I/O requests. The flow of the send and receive logic in the cradle runs from start to finish without any interference from other TCP/IP threads. When they are successful, operations leave a pending asynchronous kernel-context thread to complete their requests. The user application does not have to wait for the kernel context thread to complete; when it queues the kernel context thread, the user request is completed. Network operations become more like transaction-oriented operations, where the parallel threads prepare transactions to be processed by the designated TCP/IP CPU.
As illustrated in Figure 3, applications no longer compete with one another to acquire locks in order to proceed.

**Figure 3 Process Context Threads in the Scalable Kernel**
**Types of Kernel Request Packets (KRPs)**
The TCPIP_KRP describes the request to perform an operation in kernel context, including a pointer to the action routine to be called, and a series of parameters that the routine will need to complete the request. There are many different types of requests for kernel context work in the Scalable Kernel.
In total, there are over 50 different types of KRPs in the Scalable Kernel. The type of KRP created depends on the work:
- A thread executing in process context that wishes to **write** data to a socket packages up all the data to be written to the socket inside a KRP and then creates a kernel context thread to process the KRP. The processing of this KRP includes extracting the information from the KRP and calling the UNIX routines that insert new data into the transmit channel of a socket.
- A thread receiving a call from the OpenVMS LAN driver must pass **received** data from the network. This thread packages the received network data in a KRP and then creates a kernel context thread to process this KRP. To process this KRP, the kernel has to parse the received network data (IP header, TCP or UDP header, and so forth), place the parsed data into the receive channel of a socket, and possibly wake up a user thread waiting for data to arrive on this socket.
- When a **TCP/IP timer** expires, the information about the timer is packaged in a KRP and a kernel context thread is created to process it, executing the appropriate code to deal with the specific timer that expired.
**Kernel Context Threads**
The processing of kernel context threads is invisible to the TCP/IP application program. All the kernel threads access the same shared in-memory database, which holds information that cannot be accessed concurrently by more than one thread at a time. Processing in kernel context is ensured by the fact that the threads that execute in kernel context are all directed to a single, designated CPU in the configuration, where they execute one by one, at high priority and at high speed with no interference from other threads.
Instead of I/O lock 8, the Scalable Kernel uses several new dynamic spin locks: the TCP/IP global spin lock, which is held for relatively long periods of time, and several mini spin locks, which are never held for very long. Each TCPIP_KRP is processed in an IPL 8 fork thread on the designated TCP/IP CPU while holding the TCP/IP global spin lock. Since all of the threads that need the TCP/IP global spin lock run on the TCP/IP CPU, there is never any contention for the spin lock.
Executing all the kernel threads on the same CPU also optimizes CPU cache utilization because the same objects in the shared database are usually referenced from the same CPU.
**The Scalable Kernel**
TCP/IP Services Version 5.4 introduces the Scalable Kernel as an optional new feature for the specific purpose of validating and quantifying the performance gains for those systems with the heaviest TCP/IP loads on SMP systems. The Scalable Kernel significantly improves the potential for performance gains depending on the applications and configuration.
The Scalable Kernel will be the default TCP/IP kernel in the next major release of TCP/IP Services for OpenVMS beyond V5.4. Tests have shown equivalent or better operational performance on single-CPU systems, and indisputable benefits for multiCPU systems under the heaviest TCP/IP loads. To obtain the benefits of the Scalable Kernel, you must upgrade your system to TCP/IP Services Version 5.4 or higher.
**Measuring Throughput**
The maximum gain to expect in system throughput by using the Scalable Kernel is a direct function of the amount of MPSYNCH that is attributable to TCP/IP, based on measurements using the classic TCP/IP kernel. System throughput gain is always highly application-dependent.
In a given configuration running the Scalable Kernel, the amount of remaining capacity (headroom) can be estimated by measuring the amount of time that the TCP/IP global spin lock is held by the designated TCP/IP CPU under heavy TCP/IP load. For example, in a multiCPU configuration, if the TCP/IP global spin lock is held for 40% of the time, the number of CPUs in the configuration can be doubled before causing TCP/IP bottlenecks.
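A back-of-the-envelope version of this estimate, with the formula implied by the example in the text (the formulation is ours):

```python
def cpu_headroom_factor(hold_fraction):
    """If the designated TCP/IP CPU holds the global spin lock for the
    given fraction of time, load can grow by roughly this factor before
    that CPU saturates."""
    return 1.0 / hold_fraction

print(cpu_headroom_factor(0.40))  # 2.5, so doubling the CPUs is safe
```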
**Scalable Kernel Performance Tests**
The following graphs show real-life data confirming the success of parallelism in the field. Note that although performance tests may show higher I/O operations per second, better utilization of system resources, and so forth, the customer is only interested in getting more of his work done in a specific unit of time.
**Performance Summary**
On a 16-CPU GS160 Wildfire system running the classic TCP/IP kernel, the testbed application resulted in an MPSYNCH backup averaging four or more CPUs. That is, at any given time, four or more CPUs were spinning and doing nothing constructive. (This represents 25% to 35% of the potential productivity of a 16-CPU machine!)
The Scalable Kernel restores that 25% to 35% gain in throughput, virtually eliminating the waste of the previously spinning CPUs. In other words, the Scalable Kernel allows greatly expanded parallelism to make use of previously lost CPU cycles.
**Comparing the Traditional Kernel to the Scalable Kernel**
These test results show the overall pattern of improvement when the Scalable Kernel is running (in green) and when it is not running (in red).

The purpose of the Scalable Kernel is to increase the amount of network I/O that an OpenVMS system can process. As Figure 4 shows, the rate of TCP/IP traffic can potentially increase by 30% or more when the Scalable Kernel is enabled.
Figure 5 – Customer Orders per Minute
As shown in Figure 5, the Scalable Kernel allows the system to format more customer orders in a given time than the classic kernel. When the load gets heavy, the Scalable Kernel is able to respond and complete more real work per minute than previously possible.
When the Scalable Kernel is running, multiprocessor synchronization contention (MPSYNCH) is dramatically reduced, as shown in Figure 6.
As Figure 7 shows, a greater percentage of CPU time is spent in user-mode, which means that more application work is getting done when the Scalable Kernel is running.
On OpenVMS, TCP/IP I/O activity is expressed as buffered I/O. Figure 8 shows how the rate of buffered I/O increases when the Scalable Kernel is running.
**Comparing the Traditional Kernel to the Scalable Kernel**

| | Traditional Kernel | Scalable Kernel |
| --- | --- | --- |
| **Hold % All Locks** | 54.8% | 35.2% (while completing more work) |
| **Spin % All Locks** | 24.1% | 4.4% (very good) |
| **Locks Per Second** | 204,153 | 162,033 (doing more work with far fewer locks per unit of work) |
| **I/O Lock 8 Hold Time** | 31.4% | <5% (very good; much more of I/O lock 8 is now available for handling heavy disk I/O) |
The Importance of Maintainability
Over half the OpenVMS TCP/IP kernel code is ported from UNIX and the TCP/IP code base is under constant development. In order for the OpenVMS TCP/IP kernel to remain up-to-date with leading-edge functionality, frequent infusions of new UNIX code are required. The OpenVMS TCP/IP engineering team must repeat the port of the UNIX code periodically. The amount of OpenVMS modifications introduced into the ported UNIX code must be restricted, so that re-porting operations remain a manageable task that can be accomplished in a time period of weeks, not years.
Limits on the amount of changes that could be introduced into the UNIX code dictated the approach to the challenge of achieving parallelism in the TCP/IP Services for OpenVMS product. Complicated locking schemes that would require that lock domains span both the ported UNIX code and the OpenVMS cradle would have greatly increased the complexity of the solution, introducing issues of quality as well as increased maintenance.
The Scalable Kernel is the ideal solution because it is customized to the OpenVMS SMP environment, it operates just as well in a single-CPU configuration as an SMP system, and it imposes the least amount of overhead for future maintenance.
Future Kernel Enhancements
In the future, the Scalable Kernel may be enhanced to handle even larger CPU configurations. To accomplish this, the current single, shared in-memory database could be divided into two or more databases, each of which would be serviced by its own designated kernel context CPU. This would ensure that the designated TCP/IP CPU does not become a limit to system throughput.
Additional performance gains can be realized by optimizing the processing of the transactions, so that the designated TCP/IP CPU takes less time and effort to process each individual transaction, thereby supporting a greater number of parallel threads without becoming overloaded.
For More Information about TCP/IP Services Performance
The Scalable Kernel allows greater parallelism in the processing of TCP/IP requests; it is not a generalized performance panacea that solves everything. Processing tens of thousands of TCP/IP packets and distributing them over thousands of sockets requires CPU cycles, which inevitably take their toll. The Scalable Kernel allows you to deal efficiently with this necessary use of CPU resources by adding CPUs to the configuration and then allowing these additional CPUs to be used effectively by the system instead of merely spinning, doing nothing, and getting in the way.
There are other steps, independent of the Scalable Kernel, which you can take to improve the performance of individual TCP/IP operations, including TCP/IP tuning, adjusting window sizes for sockets, and so forth. For more information about these performance enhancement techniques, consult the *TCP/IP Services for OpenVMS Tuning and Troubleshooting* guide.
---
**Logical Architecture Model Development**
**Lead Authors:** Alan Faisandier, Garry Roedler, **Contributing Author:** Rick Adcock
Logical Architecture Model Development may be used as a task of the activity "Develop candidate architecture models and views," or a sub-process of the System Architecture Definition process (see **System Architecture**). Its purpose is to elaborate models and views of the functionality and behavior of the future engineered system as it should operate while in service. The logical architecture model of an engineered system of interest (SoI) is composed of a set of related technical concepts and principles that support the logical operation of the system. It may include a functional architecture view, a behavioral architecture view, and a temporal architecture view. Other additional views are suggested in architecture frameworks, depending on the domain.
Note: The term *Logical Architecture* is a contraction of the expression *Logical View of the System Architecture*.
---
**Contents**
- 1 Concepts and Principles
- 1.1 Functional Architecture Model
- 1.2 Behavioral Architecture Model
- 1.3 Temporal Architecture Model
- 2 Process Approach
- 2.1 Purpose
- 2.2 Activities of the Process
- 2.3 Artifacts, Methods and Modeling Techniques
- 3 Practical Considerations
- 3.1 Pitfalls
- 3.2 Proven Practices
- 4 References
- 4.1 Works Cited
- 4.2 Primary References
- 4.3 Additional References
**Concepts and Principles**
**Functional Architecture Model**
A functional architecture model is a set of functions and their sub-functions that defines the transformations performed by the system to complete its mission.
**Function and Input-Output Flow** - In the context of System Architecture, functions and input-output flows are architecture entities. A function is an action that transforms inputs and generates outputs, involving data, materials, and/or energies. These inputs and outputs are the flow items exchanged between functions. The general mathematical notation of a function is \( y = f(x, t) \), in which \( y \) and \( x \) are vectors that may be represented graphically and \( t \) is time.
In order to define the complete set of functions of the system, one must identify all the functions necessitated by the system and its derived requirements, as well as the corresponding inputs and outputs of those functions. Generally speaking, there are two kinds of functions:
1. Functions that are directly deduced from functional and interface requirements. These functions express the expected services of a system necessary to meet its system requirements.
2. Functions that are derived and issued from the alternative solutions of the physical architecture model and are dependent upon the result of the design; additionally, they rely upon the technology choices made to implement the logical architecture model elements.
**Functional Hierarchy/Decomposition of Functions** - At the highest level of a hierarchy (Figure 1), it is possible to represent a system as a unique, central function (defined as the system's mission) that in many ways is similar to a "black box" ("F0" in plan A-0 in Figure 1). In order to understand, in detail, what the system does, this "head-of-hierarchy" (F0) is broken down into sub-functions (F1, F2, F3, F4) grouped to form a sub-level of the hierarchy (plan A0), and so on. Functions of the last level of a functional hierarchy can be called leaf-functions (F21, F22, F23, F24 in plan A2). Hierarchies (or breakdowns) decompose a complex or global function into a set of functions for which physical solutions are known, feasible, or possible to imagine.
This view of functional hierarchy represents a static view of functions which would be populated at different levels over a number of iterations, depending upon the synthesis approach used. In general, it is not created by a single top-down decomposition. A static functional hierarchy on its own does not represent how effectively the flows of inputs and outputs are exchanged, and may need to be viewed alongside the other models below.
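To make this concrete, the hierarchy of Figure 1 can be captured in a few lines of Python. This is an illustrative sketch only, not SEBoK material; the `Function` class and its methods are invented here, reusing the F0/F1/F21 names from the description above.

```python
# Illustrative sketch: a static functional hierarchy as a simple tree.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Function:
    name: str                                        # e.g. "F0" (the system's mission)
    subfunctions: List["Function"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        # Leaf functions sit at the last level of the hierarchy.
        return not self.subfunctions

    def leaves(self) -> List["Function"]:
        # Collect the leaf functions, for which physical solutions are sought.
        if self.is_leaf():
            return [self]
        return [leaf for sub in self.subfunctions for leaf in sub.leaves()]

# Plan A-0: the system as a single "black box" function F0,
# decomposed into F1..F4 (plan A0); F2 is further decomposed (plan A2).
f0 = Function("F0", [
    Function("F1"),
    Function("F2", [Function("F21"), Function("F22"),
                    Function("F23"), Function("F24")]),
    Function("F3"),
    Function("F4"),
])
print([f.name for f in f0.leaves()])  # ['F1', 'F21', 'F22', 'F23', 'F24', 'F3', 'F4']
```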

**Behavioral Architecture Model**
A behavioral architecture model is an arrangement of functions and their sub-functions as well as interfaces (inputs and outputs) that defines the execution sequencing, conditions for control or data-flow, and performance level necessary to satisfy the system requirements (ISO/IEC 26702:2007). A behavioral architecture model can be described as a set of inter-related scenarios of functions and/or operational modes.
**Control (Trigger)** - A control flow is an element that activates a function as a condition of its execution. The state of this element, or the condition it represents, activates or deactivates the function (or elements thereof). A control flow can be a signal or an event, such as a switch being moved to the on position, an alarm, a trigger, a temperature variation, or the push of a key on a keyboard.
**Scenario (of Functions)** - A scenario of functions is a chain of functions that are performed as a sequence and synchronized by a set of control flows, working together to achieve a global transformation of inputs into outputs, as seen in the figures below. A scenario of functions expresses the dynamics of an upper-level function. A behavioral architecture is developed by considering both scenarios for each level of the functional hierarchy and for each level of the system hierarchy. When representing scenarios of functions and behavioral architecture models, it is appropriate to use diagrams as modeling techniques, such as functional flow block diagrams (FFBD) (Oliver, Kelliher, and Keegan 1997) or activity diagrams developed with SysML (OMG 2010). Figures 2 and 3 provide examples of these diagrams.
Figure 2. Illustration of a Scenario (eFFBD). (SEBoK Original)
**Operational Mode** - A scenario of functions can be viewed by abstracting the transformation of inputs into outputs of each function and focusing on the active or non-active state of the function and its controls. This view is called a scenario of modes, which is a chain of modes performed as a sequence of transitions between the various modes of the system. The transition from one mode to another is triggered by the arrival of a control flow (event/trigger). An action (function) can be generated within a transition between two modes following the arrival of an event or a trigger, as demonstrated in Figure 4 below.
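A scenario of modes reduces naturally to a transition table. The following minimal Python sketch is an invented example (the mode names and triggers are hypothetical, not from SEBoK) showing how a trigger drives the transition from one mode to another:

```python
# Illustrative sketch: a scenario of modes as a (mode, trigger) -> mode table.
transitions = {
    ("standby", "start_cmd"): "operating",      # event/trigger fires the transition
    ("operating", "fault_detected"): "degraded",
    ("degraded", "repair_done"): "operating",
    ("operating", "stop_cmd"): "standby",
}

def next_mode(mode: str, trigger: str) -> str:
    # A transition fires only when its control flow (event/trigger) arrives;
    # an action (function) could be executed here as a side effect.
    return transitions.get((mode, trigger), mode)  # undefined triggers are ignored

mode = "standby"
for trigger in ["start_cmd", "fault_detected", "repair_done", "stop_cmd"]:
    mode = next_mode(mode, trigger)
    print(trigger, "->", mode)
```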
**Behavioral Patterns** - When defining scenarios or behavioral architecture models, architects may opt to recognize and use known models to represent the expected transformations and behaviors. Patterns are generic basic models that may be more or less sophisticated depending on the complexity of the treatment (Gamma, Helm, Johnson, and Vlissides 1995). A pattern can be represented with different notations. Behavioral patterns are classified into several categories, which can be seen in the following examples (see also SEBoK Part 2: Patterns of Systems Thinking):
- Basic patterns or constructs linking functions - such as sequence, iteration, selection, concurrence, multiple exits, loops with an exit, and replication.
- Complex patterns - such as monitoring a treatment, exchanging a message, man-machine interfaces, modes monitoring, real-time monitoring of processes, queue management, and continuous monitoring with supervision.
- Failure detection, identification, and recovery (FDIR) patterns - such as passive redundancies, active redundancies, semi-active redundancies, and treatments with reduced performance.
**Temporal Architecture Model**
A temporal architecture model is a classification of the functions of a system that is derived according to the frequency level of execution. Temporal architecture models include the definition of synchronous and asynchronous aspects of functions. The decision monitoring that occurs inside a system follows the same temporal classification because the decisions are related to the monitoring of functions.
**Temporal and Decisional Hierarchy Concept** - Not every function of a system is performed at the same frequency. The frequencies change depending on the time and the manner in which the functions are started and executed. One must therefore consider several classes of performance. There are synchronous functions that are executed cyclically and asynchronous functions that are executed following the occurrence of an event or trigger.
To be more specific, *real-time* systems and *command-control* systems combine cyclical operations (synchronous) and factual aspects (asynchronous). Cyclical operations consist of sharing the execution of functions according to frequencies, which depend on either the constraints of capture or dispatching the input/output and control flows. Two types of asynchronous events can be distinguished:
1. Disturbances on High Frequencies (bottom of figure 5) - Decisions that are made at either the level they occur or one level above. The goal is to deter disturbances from affecting the low frequencies so that the system continues to achieve its mission objectives. This is the way to introduce exception operations, with the typical example relating to operations concerns, breakdowns, or failures.
2. Changes on Low Frequencies (top of figure 5) - Decisions pertaining to changes that are made at the upper levels. The ultimate goal is to transmit them toward bottom levels to implement the modifications. A typical example relates to operator actions, maintenance operations, etc.
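The mix of cyclic (synchronous) functions at different frequencies and event-triggered (asynchronous) functions can be sketched with Python's standard `sched` module. The task names, periods, and the one-shot trigger below are hypothetical illustrations, not SEBoK content:

```python
# Illustrative sketch: two cyclic functions at different execution frequencies
# plus one asynchronous, event-triggered function.
import sched, time

scheduler = sched.scheduler(time.time, time.sleep)

def cyclic(name: str, period: float, stop_at: float):
    print(f"{name} tick")
    if time.time() + period < stop_at:
        # Re-arm the task: this is the cyclical (synchronous) part.
        scheduler.enter(period, 1, cyclic, (name, period, stop_at))

def on_event(event: str):
    # Asynchronous part: executed once when a trigger arrives.
    print(f"handling event: {event}")

stop = time.time() + 1.0
scheduler.enter(0.2, 1, cyclic, ("high_freq_capture", 0.2, stop))    # ~5 Hz loop
scheduler.enter(0.5, 1, cyclic, ("low_freq_supervision", 0.5, stop)) # ~2 Hz loop
scheduler.enter(0.3, 0, on_event, ("breakdown_alarm",))              # one-shot trigger
scheduler.run()
```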

**Process Approach**
**Purpose**
The purpose of the Logical Architecture Model Development is to define, select, and synthesize a system's logical architecture model to provide a framework against which to verify that a future system will satisfy its system requirements in all operational scenarios, within which trade-offs between system requirements can be explored in developing such systems.
Generic inputs to the process include system requirements, generic architecture patterns that architects identify and use to answer requirements, outcomes from system analysis processes, and feedback from system verification and validation processes. Depending on the Life Cycle Model that is chosen, there will be iterations through which these inputs and outputs, and the relationships between them evolve and change throughout the process (see also Applying Life Cycle Processes).
Generic outputs from the process are either a single logical architecture model or a set of candidate logical architecture models, together with the selected independent logical architecture model and a rationale for its selection. They include, at a minimum, functional, behavioral, and temporal views and models, and a traceability matrix between logical architecture model elements and system requirements.
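As a toy illustration of the last output, a traceability matrix can be as simple as a mapping from requirement IDs to the logical elements that satisfy them. The IDs below are hypothetical placeholders:

```python
# Illustrative sketch: a minimal traceability matrix between system
# requirements and logical architecture elements.
trace = {
    "REQ-001": ["F1"],          # functional requirement -> function
    "REQ-002": ["F2", "F21"],   # satisfied by a function and one of its sub-functions
    "REQ-003": [],              # not yet allocated: flags a coverage gap
}

uncovered = [req for req, elems in trace.items() if not elems]
print("requirements without logical elements:", uncovered)  # ['REQ-003']
```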
**Activities of the Process**
Major activities and tasks performed during this process include the following:
- **Identify and analyze functional and behavioral elements:**
- Identify functions, input-output flows, operational modes, transition of modes, and operational scenarios from system requirements by analyzing the functional, interface, and operational requirements.
- Define necessary inputs and controls (energy, material, and data flows) to each function and outputs that result in the deduction of the necessary functions to use, transform, move, and generate the input-output flows.
- **Assign system requirements to functional and behavioral elements:**
- Formally characterize functions expressions and their attributes through the assignment of performance, effectiveness, and constraints requirements. In particular, study the temporal aspects from requirements to assign duration, response time, and frequency to functions.
- Formally characterize the input, output, and control flows expressions and their attributes through assignment of interface, effectiveness, operational, temporal and constraints requirements.
- Establish traceability between system requirements and these functional and behavioral elements.
- **Define candidate logical architecture models for each candidate:**
- Analyze operational modes as stated in the system requirements (if any) and/or use previously defined elements to model sequences of operational modes and the transition of modes. Eventually decompose the modes into sub-modes and then establish for each operational mode one or several scenarios of functions recognizing and/or using relevant generic behavioral patterns.
- Integrate these scenarios of functions in order to get a behavioral architecture model of the system (a complete picture of the dynamic behavior).
- Decompose previously defined logical elements as necessary to look towards implementation.
- Assign and incorporate temporal constraints to previously defined logical elements, such as the period of time, duration, frequency, response-time, timeout, stop conditions, etc.
- Define several levels of execution frequency for functions that correspond to levels of decision, in order to monitor system operations, prioritize processing on this time basis, and share out functions among those execution frequency levels to get a temporal architecture model.
- Perform functional failure modes and effects analysis and update the logical architecture elements as necessary.
- Execute the models with simulators (when possible) and tune these models to obtain the expected characteristics.
- **Synthesize the selected independent logical architecture model:**
- Select the logical architecture by assessing the candidate logical architecture models against assessment criteria (related to system requirements) and compare them, using the system analysis process to perform assessments and decision management process for the selection (see the System Analysis and Decision Management topics). This selected logical architecture model is called independent logical architecture model because, as much as possible, it is independent of implementation decisions.
- Identify and define derived logical architecture model elements created for the necessity of design and corresponding with the derived system requirements. Assign these requirements to the appropriate system (current studied system or external systems).
- Verify and validate the selected logical architecture models (using as executable models as possible), make corrections as necessary, and establish traceability between system requirements and logical architecture model elements.
- **Feedback logical architecture model development and system requirements.** This activity is performed after the physical architecture model development process:
- Model the allocated logical architecture to systems and system elements, if such a representation is possible, and add any functional, behavioral, and temporal elements as needed to synchronize functions and treatments.
- Define or consolidate derived logical and physical elements induced by the selected logical and physical architecture models. Define the corresponding derived requirements and allocate them to appropriate logical and physical architectures elements. Incorporate these derived requirements into the requirements baselines of impacted systems.
**Artifacts, Methods and Modeling Techniques**
Logical architecture descriptions use modeling techniques that are grouped under the following types of models. Several methods have been developed to support these types of models (some are executable models):
- **Functional Models** – These include models such as the structured analysis design technique (SADT/IDEF0), system analysis & real time (SA-RT), enhanced Functional Flow Block Diagrams (eFFBD), and the function analysis system technique (FAST).
- **Semantic Models** - These include models such as entities-relationships diagrams, class diagrams, and data flow diagrams.
- **Dynamic Models** – These include such models as state-transition diagrams, state-charts, eFFBDs, state machine diagrams (SysML), activity diagrams (SysML) (OMG 2010), and Petri nets.
Depending on the type of domain (e.g. defense, enterprise), architecture frameworks provide descriptions that can help to represent additional aspects/views of architectures - see the section ‘Enterprise Architecture Frameworks & Methodologies’ in Enterprise Systems Engineering Key Concepts. See also practical means for using general templates related to ISO/IEC/IEEE 42010 (ISO 2011).
**Practical Considerations**
As stated above, the purpose of the logical architecture model is to provide a description of what a system must be able to do to satisfy the stated need. This should help to ensure that the needs and/or concerns of all stakeholders are addressed by any solution, and that innovative solutions, as well as those based on current solution technologies, can be considered. In practice it is human nature for problem stakeholders to push their own agendas and for solution architects or designers to offer their familiar solutions. If a logical architecture model is not properly enforced within the chosen life cycle, it is easy for both problem and solution stakeholders to ignore it and revert to their own biases (see Part 5: Enabling Systems Engineering). This is exacerbated if the logical architecture model becomes an end in its own right or disconnected from the main lifecycle activities. This can occur either through the use of abstract language or notations, levels of detail, time taken, or an overly complex final architecture that does not match the purpose for which it was created. If the language, scope, and timeliness of the architecture are not matched to the problem stakeholders or solution providers, it is easier for them to overlook it. Key pitfalls and good practices which can help to avoid problems related to logical architecture models are described in the next two sections.
**Pitfalls**
Some of the key pitfalls encountered in developing logical architecture are provided in Table 1.
**Table 1. Pitfalls with Logical Architecture Development.** (SEBoK Original)
<table>
<thead>
<tr>
<th>Pitfall</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Problem Relevance</td>
<td>The logical architecture model should relate back to the operational scenarios produced by mission analysis.</td>
</tr>
<tr>
<td>Inputs for Architecture Model</td>
<td>The major input to the architecture definition activity is the set of system requirements, and there are cases in which they do not address the right level of architecture. The consequence is that the architect sets the requirements aside and invents a solution based on his or her own understanding of the inputs.</td>
</tr>
<tr>
<td>Decomposition Too Deep</td>
<td>A common mistake made by many beginners in architecture consists of decomposing the functions too deeply or having too many functions and input/output flows in scenarios or in the functional architecture model of the current system block.</td>
</tr>
<tr>
<td>Not Considering Inputs and Outputs Together with Functions</td>
<td>A common mistake is to consider only the actions supported by functions and decomposing them, while forgetting the inputs and the outputs or considering them too late. Inputs and outputs are integral parts of a function.</td>
</tr>
<tr>
<td>Considering Static Decomposition of Functions Only</td>
<td>Static function decomposition is the smallest functional architecture model task and answers the basic question, "How is this done?" The purpose of the static decomposition is to facilitate the management of or navigation through the list of functions. The static decomposition should be established only when scenarios have been created and the logical architecture is close to complete.</td>
</tr>
<tr>
<td>Mixing Governance, Management, and Operation</td>
<td>Governance (strategic monitoring), management (tactical monitoring), and basic operations are often mixed in complex systems. Logical architecture model should deal with behavioral architecture model as well as with temporal architecture model.</td>
</tr>
</tbody>
</table>
**Proven Practices**
Some proven practices gathered from the references are provided in Table 2.
**Table 2. Proven Practices with Logical Architecture Development.** (SEBoK Original)
<table>
<thead>
<tr>
<th>Practice</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Constitute Scenarios of Functions</td>
<td>Before constituting a decomposition tree of functions, one must model the behavior of the system, establish scenarios of functions, and decompose functions as scenarios of sub-functions.</td>
</tr>
<tr>
<td>Analysis and Synthesis Cycles</td>
<td>When facing a system that contains a large number of functions, one should attempt to synthesize functions into higher abstraction levels of functions with the assistance of criteria. Do not perform analysis only; instead, conduct small cycles of analysis (decomposition) and synthesis. The technique of using scenarios includes this design practice.</td>
</tr>
<tr>
<td>Alternate Functional and Behavioral Views</td>
<td>A function (action verb; e.g. "to move") and its state of execution/operational mode (e.g. "moving") are two similar and complementary views. Utilize this to consider a behavioral view of the system that allows for the transition from one operational mode to another.</td>
</tr>
<tr>
<td>The Order to Create a Scenario of Functions</td>
<td>When creating a scenario of functions, it is more efficient to first establish the (control) flow of functions, then to add input and output flows, and finally to add triggers or signals for synchronization.</td>
</tr>
</tbody>
</table>
**References**
**Works Cited**
Gamma, E., R. Helm, R. Johnson, and J. Vlissides. 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Boston, MA, USA: Addison-Wesley.
---
SEBoK v. 2.2, released 15 May 2020
---
Analysis of DevOps Efficiency and Digital Transformation in the Digital Economy, Cross-Border E-Commerce, and Brand Building: Evidence from the Manufacturing Industry in Guangdong Province, China
Jianzhong Li 1, a, Qiang Wan 2, *, Wenjuan Ma 3, Xiaoting Fang 4, Chuyi Yin 4, Jun Cui 5, b
1 School of Entrepreneurship, Guangdong Innovative Technical College, Associate Professor, Dongguan, 523000, China
2 Institute of Industrial Economics, Chinese Academy of Social Sciences, Ph.D. Student, Beijing, 102400, China
3 Guangzhou Nanyang Polytechnic Vocational College, Guangzhou City, Lecturer, Guangdong Province, 510000, China
4 School of Entrepreneurship, Guangdong Innovative Technical College, Lecturer, Dongguan, 523000, China
5 Solbridge International School of Business, Woosong University, Ph.D. Student, South Korea
* Corresponding author: Qiang Wan (Email: brick_wan@163.com)
a tech2022learn@163.com, b jcui228@student.solbridge.ac.kr
Abstract. Drawing on IT-based DevOps capability and automation testing theory, this study examines the factors that encourage and discourage DevOps abilities and automation technology, which have become a major trend in the development of internet and IT enterprises. The main aim of this paper is to investigate how the use of DevOps has affected software quality; a further aim is to explore and identify ways to continuously increase software quality. A literature review was conducted, and the ICTAM and TAM theoretical models are used for the analysis. The literature review and sampling questionnaires were used to gather quantitative data, while interviews with DevOps and quality assurance experts were used to determine how DevOps and automation testing can enhance software quality and improve software development efficiency; hypothesis testing and regression tests complete the design. Five semi-structured interviews were conducted with experts and analyzed through the lens of the Interactive Communication Technology Adoption Model (ICTAM) as a guiding framework. The data investigation and analysis show that DevOps abilities and automation testing can help companies quickly and accurately improve software development efficiency and quality. The results demonstrate that DevOps and automation technology in Chinese internet companies support continuous integration and continuous delivery (CI/CD), automated testing, and monitoring. CI/CD is one of the best practices for DevOps teams to implement; it is also an agile best practice, as it enables software development teams to focus on meeting business requirements, code quality, and security because deployment steps are automated. Based on the concepts of DevOps abilities, automation testing, CI, and CD, companies can continuously improve their software R&D efficiency and software quality. This paper mainly uses a mixed method of qualitative and quantitative analysis, performing ANOVA on the sample data with Excel and SPSS 23. Finally, the study concludes by highlighting the importance of DevOps and automation testing to Chinese software companies.
Keywords: DevOps abilities, automation testing, Interactive Communication Technology Adoption Model (ICTAM), TAM model, CI/CD, mixed qualitative and quantitative methods.
1. Introduction
Over the past decade, the software industry in Dongguan City, Guangdong Province, China has undergone digital transformation alongside the rapid development of the digital economy; the DevOps efficiency improvements and digital transformation of software companies there are particularly prominent. This article discusses DevOps efficiency improvement and digital transformation in the software industry of Dongguan City, Guangdong Province, China, together with their applications, and analyzes automation software theory and ICTAM theory. Organizations are social entities containing multiple people that are goal-directed and linked to the external environment and the software DevOps process; an organization is defined as a collection of people purposely organized to accomplish a common aim (Daft, 2006). A DevOps engineer has a unique combination of skills and expertise that enables collaboration, innovation, and cultural shifts within an organization. DevOps is a modern software engineering culture and set of practices in which the development and operations teams work hand in hand as one unit, unlike traditional approaches; CI and CD stand for continuous integration and continuous delivery/continuous deployment. Enterprises that want to improve their R&D efficiency and software quality now mostly adopt a DevOps process and technical architecture involving continuous integration, continuous delivery, automated testing, monitoring, and other practices (Karamitsos, Albarhami, and Apostolopoulos, 2020). This article discusses the DevOps architecture, the importance of automation to enterprises, and the value of improving efficiency, and analyzes cases from several enterprises. As the software market becomes more competitive, companies devote more time and resources to developing and delivering high-quality software at a quicker pace. Continuous integration (CI) and continuous delivery (CDE) are two continuous practices designed to assist organizations in accelerating the development and delivery of product features while maintaining efficiency.
The technology acceptance model (TAM), introduced by Davis (1985) specifically for modeling user acceptance of information systems, is the foundation that the Interactive Communication Technology Adoption Model (ICTAM) later adapted. TAM was introduced in an attempt to identify the impact of external factors on internal beliefs concerning technology acceptance. It identifies the causal linkages between individual users' attitudes and perceptions toward technology and the actual adoption of technology (King and Gribbins, 2002; Leong et al., 2013). According to the TAM model, perceived usefulness and perceived ease of use are the main variables affecting attitude and intention toward using a technology.
Over the past years, however, software developers in Chinese firms have adopted new-generation programming languages. DevOps is complemented by a range of agile practices designed to improve cooperation between development and operations teams. Implementing such a DevOps process and automation testing technology in organizations is not an easy task, and there are challenges that should be considered. These challenges, and the factors affecting each of them, are discussed in this paper. Briefly, the research questions and aims of this study are as follows:
*RQ1. What is the impact of the adoption of DevOps and automation testing technology on software quality and delivery efficiency in Chinese Guangdong province engineering companies?*
*RQ2. Can automated testing on the mobile side, automated testing on the server side, unit testing technology, and DevOps performance improvements help to improve R&D in the Chinese Guangdong province enterprise?*
2. Literature Review
2.1. Literature descriptions
R&D efficiency and software quality are very important to the success of enterprise software, and DevOps architecture and automated testing are critical to both; this article uses the TAM theoretical model for the analysis. DevOps is a philosophy and practice focused on agility, collaboration, and automation within IT and development team processes (Erich, Amrit, and Daneva, 2014). Traditionally, software development occurred in silos, with IT and development working independently within their own teams and processes. This separation and these competing values created an environment rife with miscommunication, poor alignment, and production delays (some have even nicknamed the operations department the "War Room"). DevOps is the response to this "us vs. them" culture of development. The goal of DevOps is to bridge the gap between IT operations and development to improve communication and collaboration, create more seamless processes, and align strategy and objectives for faster and more efficient delivery. This article analyzes and explains DevOps and the importance of automation in the enterprise software development process; 100 observations of sample data from different companies were randomly collected through electronic questionnaires, and the results were analyzed using a mixed method of quantitative and qualitative analysis. These practices can improve the R&D efficiency of enterprise software, reduce a company's R&D costs, and deliver results in software R&D efficiency and automation. Integration testing involves testing a chain of components, sometimes including external components, that together handle a process or business transaction; it often includes testing the interactions between hardware and software as well as other infrastructural components. Level 3 of automation testing, end-to-end testing, is, as the name implies, about testing a process from end to end. The scope of end-to-end testing depends on the process, but end-to-end tests often span multiple technologies (Battina, 2021). The purpose of end-to-end tests is to ensure that a flow works as intended from the perspective of the user; for this reason, the black-box method is often used.
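As a minimal illustration of two levels of the test pyramid discussed above, here is a hedged pytest-style sketch; the functions and values are invented for this example, not drawn from the paper:

```python
# Illustrative sketch: a unit test of one function vs. an integration test
# of a small chain of components handling one business transaction.
def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

def checkout(prices: list[float], rate: float) -> float:
    # Integrates the discount component with an order-total component.
    return sum(apply_discount(p, rate) for p in prices)

def test_unit_apply_discount():
    assert apply_discount(100.0, 0.1) == 90.0

def test_integration_checkout():
    # Exercises the chain of components end to end within the code base.
    assert checkout([100.0, 50.0], 0.1) == 135.0
```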
Furthermore, in Chinese software enterprises, automation (and the tools that support it) allows developers and IT professionals to combine their efforts into one seamless process and adopt agile practices like continuous integration, continuous delivery, and continuous deployment. In other words, this process enables collaboration throughout the entire development pipeline, from concept and build to deployment and testing. Additionally, DevOps principles prioritize iterative processes that make space for continuous testing and feedback. Again, this practice makes it possible to accelerate the development process, but it also improves the quality and security of the products (Marijan, Liaaen, and Sen, 2018). This article analyzes cases of software automation improvement and DevOps efficiency improvement at the Chinese internet companies JD.com (Jingdong Group), Ctrip.com, and Tencent Group, and discusses how software automation improves the efficiency of enterprise software development (Nurullah, Wang, Kaburuan, and Fajar, 2018). JD.com has developed an automated testing system to support automated verification in the software R&D process. Developers and testers only need to perform page-based configuration according to the business to complete the automated process: they log in to the automated testing platform, create the test plans and test cases they need, and view test cases organized in a directory tree. Each automated test case can declare dependencies on other cases, so an automated test of a business system can be completed quickly. A configured automated test task can be reused, or scheduled to execute automatically at a certain time, generating a test report that is automatically sent to the responsible person (Marijan, Liaaen, and Sen, 2018). This automated testing process has greatly improved JD's R&D efficiency and quality. In another case, Tencent used Appium, a mobile automated testing framework, packaged it, and added customized testing and verification functions for Tencent Group business systems, exposing the tests as webhook-triggered functions. Software monitoring is also very important for enterprise automation: many internet companies use log monitoring, service monitoring, and business-system monitoring with early warning, which helps them watch their systems in real time and keep them healthy. In the development process of any business system, the DevOps pipeline can be reused and integrated, which greatly reduces the cost of automated testing on the mobile terminal and can be quickly applied to the business system, generating value for the enterprise.
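One concrete piece of the JD.com description above, test cases with dependencies executed in order, can be sketched in a few lines of Python using the standard `graphlib` module. The case names are hypothetical, and this is in no way JD.com's actual platform code:

```python
# Illustrative sketch: order dependent test cases with a topological sort.
from graphlib import TopologicalSorter

# "login" must pass before the cases that depend on it.
dependencies = {
    "search_product": {"login"},
    "place_order": {"login", "search_product"},
    "login": set(),
}

plan = list(TopologicalSorter(dependencies).static_order())
print("execution order:", plan)  # ['login', 'search_product', 'place_order']
```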
To address these gaps, the present study examines stakeholders' perspectives on blockchain technology adoption in online advertising through semi-structured interviews. Crucially, this study examines adoption from a whole-ecosystem perspective, which is essential for gaining a complete understanding of the factors encouraging and discouraging adoption for all stakeholders in online advertising. To this end, it employs Lin's (2003) Interactive Communication Technology Adoption Model (henceforth, ICTAM). Until now, studies that have examined blockchain adoption have done so primarily through the lenses of the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology-Organization-Environment (TOE) framework (e.g., Clohessy & Acton, 2019; Dehghani et al., 2022; Jain et al., 2022; Toufaily et al., 2021).
2.2. Organization of the paper
The remainder of the paper is organized as follows. Section 1 presents the introduction; Section 2 presents the literature review and research questions; Section 3 presents the theoretical basis and model, research hypotheses, interviews, and variable statements; Section 4 presents the method and data analysis. Finally, Section 5 presents the discussion, limitations, and conclusions.
3. Theoretical Basis, Model, and Hypotheses
The theoretical framework of the research model is described below.
Furthermore, based on the conceptual framework of Figure 1, we can see the importance of automated testing for software R&D efficiency and quality improvement, and its potential for addressing complex problems in software development management, the DevOps process, software automation testing, and the software CI/CD process; the framework comprises 5 independent variables and 1 dependent variable. Software automation testing in Chinese internet companies includes many technologies and processes, such as UI automation, mobile automation, and server-side automation testing.
Hypothesis summary:
According to the conceptual framework of the research model above, this study draws a new research model and hypothesizes the relationships between the IVs and the DV, where the IVs cover the following five aspects:
H1: There is a positive relationship between UI automation testing and improvement of the enterprise's software R&D efficiency and automation capability in the DevOps process.
H2: There is a positive relationship between mobile automation testing and improvement of the enterprise's software R&D efficiency and automation capability in the DevOps process.
H3: There is a positive relationship between server-side automation testing and improvement of the enterprise's software R&D efficiency and automation capability in the DevOps process.
H4: There is a positive relationship between automation testing integration and monitor testing and improvement of the enterprise's software R&D efficiency and automation capability in the DevOps process.
H5: There is a positive relationship between DevOps automated integration and automated testing systems and improvement of the enterprise's software R&D efficiency and automation capability in the DevOps process.
The independent variables (IVs) of the conceptual framework are as follows:
1. DevOps UI automation testing
2. DevOps Mobile automation testing
3. DevOps Server-side automation testing
4. DevOps Automation testing integration and monitor testing
5. DevOps automated integration and automated testing system
The dependent variable (DV) of the conceptual framework is as follows:
Improve the R&D efficiency of the enterprise's software and automation capability in the DevOps process.
Furthermore, this study uses questionnaires to collect data because large data samples were not available; electronic questionnaires can be distributed quickly through company employee WeChat groups and support rapid data analysis. This study uses a mixed method of qualitative and quantitative analysis to study the DevOps process, automation testing, and automation platforms. The study sought to find out how the industry sees DevOps and automation technology, along with their advantages and limitations. The results therefore provide a better understanding of the effect of DevOps on software quality for both professionals and researchers working in this area. The research was mainly focused on the automation, culture, continuous delivery, and fast feedback dimensions of DevOps. The hypotheses in this study involve verifying the relationships between the variables, namely the DevOps process, software automation testing, and software quality. The main objectives of this study are as follows:
• Research DevOps UI automation and mobile automation testing for software effectiveness
• Research the effectiveness of the DevOps process in Chinese Guangdong internet companies
• Research DevOps server-side automation testing in Chinese Guangdong internet companies
• Research DevOps integration platforms in Chinese Guangdong internet companies
• Research DevOps monitoring and automated monitoring in Chinese Guangdong internet companies
Interviews
In addition to the ANOVA data analysis, this study also conducted interviews with employees of several companies. The interviews took place between February and March 2024. Before the interviews, participants were informed about the study procedure and provided informed consent. All interviews were held online and video recorded. Interview durations varied between 18 and 92 minutes, with a median of 56.5 minutes. The interview schedule covered four topics: 1) demographic information, organization, and role; 2) DevOps and automation as a technology and the interviewees' DevOps and automation usage; 3) questions for a possible follow-up study; and 4) factors influencing DevOps and automation adoption in the interviewee's organization and in online advertising.
4. Results and data analysis
Additionally, the study mainly uses a mixed method of qualitative and quantitative analysis. To analyze the importance of automated testing with data, we designed a questionnaire and collected a small-scale data sample: an electronic questionnaire was administered to 100 respondents across several internet companies, and ANOVA was performed on the collected data to examine the importance of automated testing for enterprise R&D efficiency. The questionnaire has 5 questions, corresponding to the hypothesis-verification conditions for the 5 IV variables; each question is scored 1 to 5, where 5 is the highest (most important) and 1 is the lowest (least important). We collected data for 100 observations, then performed ANOVA on the collected Excel data to obtain results supporting this article's hypotheses on the DV variable. In the next stage, this study conducted data analysis and statistics on the collected sample observations.
Thus, this study sets up $H_0$ and $H_1$ for the variable automation testing (5 IV levels):
$H_0: \mu_1 = \mu_2 = \mu_3 = \mu_4 = \mu_5$
$H_1$: $\mu_i \neq \mu_j$ for at least one pair $(i, j)$
Set up significant level $\alpha = 0.05$
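For readers who want to reproduce this kind of test outside Excel or SPSS, a one-way ANOVA can be run in a few lines of Python with SciPy. The scores below are invented placeholders, not the study's data:

```python
# Illustrative sketch: a one-way ANOVA of the kind reported in Table 1.
from scipy import stats

ui      = [7.2, 6.9, 7.5, 7.1]
mobile  = [8.1, 8.6, 8.0, 8.5]
server  = [7.0, 7.3, 6.8, 7.4]
integr  = [11.5, 12.0, 11.9, 11.6]
system  = [14.2, 14.8, 14.1, 14.6]

f_stat, p_value = stats.f_oneway(ui, mobile, server, integr, system)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
# Reject H0 at alpha = 0.05 when p_value < 0.05.
```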
ANOVA Analysis.
**Table 1.** ANOVA table of DevOps Satisfaction for different location. Source: (Author’s research work)
<table>
<thead>
<tr>
<th>Source of Variation</th>
<th>SS</th>
<th>df</th>
<th>MS</th>
<th>F</th>
<th>P-value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Between Groups</td>
<td>2526.582</td>
<td>4</td>
<td>631.6455</td>
<td>31.04154</td>
<td>3.42E-22</td>
</tr>
<tr>
<td>Within Groups</td>
<td>7264.378</td>
<td>357</td>
<td>20.3484</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Furthermore, based on Table 1, the p-value (3.42E-22) is less than the significance level. Thus, we reject $H_0$. This indicates that DevOps satisfaction with automation testing technology is not the same across the different automation platforms.
**Table 2.** Summary statistics of DevOps and software development Satisfaction in different software automation testing skills. Source: (Author’s research work)
<table>
<thead>
<tr>
<th>Groups</th>
<th>Count</th>
<th>Sum</th>
<th>Average</th>
<th>Variance</th>
</tr>
</thead>
<tbody>
<tr>
<td>UI auto</td>
<td>74</td>
<td>535.84</td>
<td>7.241081</td>
<td>3.727558</td>
</tr>
<tr>
<td>Mobile</td>
<td>60</td>
<td>499.02</td>
<td>8.317</td>
<td>14.10177</td>
</tr>
<tr>
<td>Server-side</td>
<td>99</td>
<td>705.91</td>
<td>7.130404</td>
<td>7.71553</td>
</tr>
<tr>
<td>Integrations</td>
<td>84</td>
<td>985.33</td>
<td>11.73012</td>
<td>34.40074</td>
</tr>
<tr>
<td>System</td>
<td>45</td>
<td>650.21</td>
<td>14.44911</td>
<td>57.92905</td>
</tr>
</tbody>
</table>
Furthermore, based on Table 2, since we reject $H_0$, there is a significant difference in software development satisfaction across the different automation testing skills. The summary statistics also suggest that satisfaction with the automation testing system is comparatively higher than with the other automation technologies; this can be further examined using an independent-samples t-test. On the other hand, DevOps and automation skills helped to bridge the communication gap between developers, managers, quality engineers, and operations engineers. This fosters collaboration toward improving the development process and the end software product (Toh, Sahibuddin, and Mahrin, 2019). Furthermore, the existing different skills can be readily utilized, increasing the team's reactivity to problems. The above data analysis shows that the automation testing system is the more valuable option.
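As a sketch of the follow-up test suggested above, SciPy's independent-samples t-test (Welch variant) could compare two of the groups. The numbers are again invented placeholders, not the study's data:

```python
# Illustrative sketch: compare two automation skill groups pairwise.
from scipy import stats

integrations = [11.5, 12.0, 11.9, 11.6, 12.2]
system       = [14.2, 14.8, 14.1, 14.6, 14.4]

t_stat, p_value = stats.ttest_ind(integrations, system, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```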
5. Conclusions
In conclusion, this study aimed to examine the factors that encourage and discourage digital automation testing and the DevOps process, which affect businesses in every market: either they are driving it, or they are being driven by it. Based on the literature review and the conceptual framework, there is sufficient evidence that software quality and efficiency improve when DevOps processes and technology are applied. All hypotheses were supported, indicating that the efficiency improvements and automation capabilities of DevOps can help companies improve their R&D and work efficiency and save R&D costs (Luz, Pinto, and Bonifácio, 2019). The DevOps process and software automation testing are crucial to improving the efficiency of enterprise software R&D; any industry is at risk of disruption. As new business models emerge and customer demands keep evolving, enterprises everywhere struggle to stay relevant and must be ready to adapt the way they do business. Technology is used in new and ever more complex ways to drive value, including automation of enterprise systems, cloud-based commerce, and cross-channel user experiences. Enterprises can use automated testing to improve product development speed and efficiency and to reduce the costs and errors of manual intervention. Automated testing improves software development efficiency and the quality assurance process through code, and ensures that software iteration and delivery proceed quickly and efficiently (Yarlagadda, 2018). In automated testing, server-side automation is mainly based on the server's protocols and is relatively simple; enterprises therefore focus mainly on UI automation and mobile automated testing. Let us take a look at front-end automated testing and mobile automated testing (Toh, Sahibuddin, and Mahrin, 2019).
In addition, for front-end automated testing we generally use the open-source Selenium framework, which has many advantages: Selenium is open-source software and supports various programming languages for writing test scripts. Its drawbacks are that it has no official technical support, it supports web-based applications only, and it can be difficult to use, taking more time to create test cases (Riungu-Kalliosaari, 2016). For mobile test automation, we often use the open-source Appium framework for development and packaging. It is language-agnostic, supports the WebDriver application program interface, and allows cross-platform tests; applications tested this way are efficient, accurate, fast, and freer from bugs, which saves a lot of time and cost (Riungu-Kalliosaari, 2016). With all this in mind, learning Appium is worth it. The Appium framework also has the following advantages: it is free and open-source; it supports both Android and iOS; automation tests for iOS and Android can be written using the same API; Appium tests can be written in any language; and there is no need to install extra software on the mobile device to support Appium.
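For illustration only, a minimal Selenium test in Python (Selenium 4 bindings) might look like the following. The URL and element IDs are hypothetical, not taken from the companies discussed in the paper:

```python
# Illustrative sketch: a UI regression check with Selenium's Python bindings.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires a local Chrome/driver setup
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The kind of assertion an automated regression suite would make.
    assert "dashboard" in driver.current_url
finally:
    driver.quit()
```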
Furthermore, there are many hurdles to overcome in bridging the gaps between CI, CDE, and automated testing, but it is worth taking a systematic approach when adopting DevOps automation testing in any business (Bou Ghantous and Gill, 2017). According to the hypothesis testing process and the discussion and analysis above, all hypothesis tests were ultimately accepted, showing that automated testing can indeed continuously improve enterprises' R&D efficiency and automation capability in the DevOps process. Automated testing has played a strong role in promoting enterprise R&D efficiency and DevOps process integration. In short, this study uses the Interactive Communication Technology Adoption Model (ICTAM) and TAM model theory to analyze and describe the DevOps efficiency improvement, software R&D efficiency, and brand building of technology companies in Guangdong Province, China, and draws the conclusions of that analysis.
To sum up, the study indicates some benefits and challenges involved in adopting DevOps and automation technologies. The benefits include more frequent software releases, improved test automation, improved R&D efficiency, software automation monitoring, better communication, and enhanced occupational welfare within teams and companies in Chinese Guangdong province.
Finally, a research framework was proposed by Pham and Ho (2015) to investigate the factors helping the adoption of the DevOps development process, the CI/CD process, and automation testing. Because the observations studied here come from internet companies' questionnaires and survey data, a limitation of this paper is that the number of research samples is limited to only about 100 questionnaire responses. Automated testing is not a 100% guarantee of software quality: it requires investment in R&D and human resources, and continuous maintenance of the automated programs. This is also a gap in this study. During the DevOps process, R&D efficiency generally improves (Perera, Bandara, and Perera, 2016); it is composed of multiple dimensions, and automated testing is only one important node. Automated testing is not a panacea for improving an enterprise's R&D efficiency, and while automation has many benefits, it also has its limits: it only tests what you tell it to. Another research gap is that the automation capabilities and DevOps processes of each enterprise differ, so the coverage of this paper may not extend to all internet companies (Chen, 2018). The small sample size is likewise a shortcoming of this study. Nevertheless, the study makes several theoretical contributions. For one, it supports the findings of previous studies on technology adoption in other industries, implying that stakeholders in other ecosystems may face similar challenges, and suggests that these findings may transfer to other business contexts undergoing digital transformation. Moreover, it extends these findings by exploring adoption through the lens of the ICTAM (Lin, 2003), whereas other studies have primarily focused on adoption from the perspectives of the UTAUT and TOE frameworks (e.g., Clohessy & Acton, 2019; Dehghani et al., 2022; Jain et al., 2022; Toufaily et al., 2021). Though it may seem obvious, it is important to keep in mind that even after unit, integration, and performance tests have passed, a single end user can make the whole system crash in seconds, usually by doing something the developers did not expect. Future studies should also test the efficacy of various educational tools for raising awareness of issues inherent in the AI, DevOps, CI/CD, and automation testing ecosystem, as well as tools for educating the public about DevOps technology in Chinese Guangdong province.
6. Acknowledgment
We thank all classmates, professors, assistant professors, and friends. This work was supported in part by a grant from the Institute of Industrial Economics, Chinese Academy of Social Sciences; the School of Entrepreneurship, Guangdong Innovative Technical College; Guangzhou Nanyang Polytechnic Vocational College; and the Solbridge International School of Business, Woosong University. We hope that this paper can help more people understand the digital software philosophy and DevOps concepts of firms in Chinese Guangdong province, and their use in business management, cross-border e-commerce, brand building, the CI/CD process, automation testing, etc.
Ethics approval
Ethics approval was not required for this research.
Funding
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Competing Interests
The authors declare no conflict of interest. Moreover, the authors have no relevant financial or non-financial interests to disclose.
References
---
Research Article
Trade-off Analysis of Crosscutting Functionalities using Lazy Counting-based Splay Tree in Aspect Oriented Programming
¹K. Santhi, ¹G. Zayaraz and ²T. Chellatamilan
¹Department of CSE, Pondicherry Engineering College, Puducherry, India
²Department of CSE, Arunai Engineering College, Tiruvannamalai, India
Abstract: Aspect Oriented Programming (AOP) offers a new modularization of software systems by encapsulating crosscutting functionalities, providing clear isolation and reuse of those functionalities. Trade-offs between quality attributes are typically a consequence of technical contradictions in requirements, and these contradictions must be overcome in order to achieve a breakthrough. We employ a data structure called a lazy counting-based splay tree to analyze the trade-off between conflicting quality attributes. The performance of this data structure is verified on Cross-Site Request Forgery (CSRF), which can be prevented by a same-origin policy. The results are promising: comparing the overall performance of an ordinary splay tree with that of the lazy counting-based splay tree shows good potential for lazy counting-based splaying and yields interesting results for both.
Keywords: Aspect-oriented programming, crosscutting functionalities, cross-site request forgery, lazy counting-based splay tree, same-origin policy, trade-off analysis
INTRODUCTION
Developing an application software system always demands consideration of both functional and non-functional requirements. Modularizing the implementation of different requirements has important advantages for system evolution. Since such requirements typically originate from different stakeholders, they may drive different iterations of various parts of the software development process. Successful separation of concerns supports straightforward development, maintenance and possible reuse, amongst others (Boström, 2004). State-of-the-art software techniques already support separation of concerns, for example by means of method structuring, Object-Oriented Programming (OOP) and design patterns. However, these techniques are inadequate for separating crosscutting functionality from broad-based functionality. A major cause of this limitation is that concerns are separated intuitively by grouping them into objects; this technique is only effective for concepts that map easily to objects, not for crosscutting concerns.
Aspect Oriented Programming (AOP) provides techniques for managing crosscutting concerns in a single manageable component, referred to as an aspect. The concept of an aspect is at the heart of AOP and is used to solve problems such as tangling and scattering. Tangling refers to how concerns intermingle with each other in a module, while scattering refers to how concerns are spread over many modules.
However, the detection and order of crosscutting concerns and their consideration as an aspect, are challenging tasks. Developing secure software systems requires more than protecting objects from illegal manipulation; it also requires the prevention of illegal information flow among objects in a system (Izaki et al., 2001).
Another benefit of AOP is that because the core functionality of the system is executed separately, the developer no longer needs to refer to or use security mechanisms in the system. Implementing security could be left to a security expert and security policies can be independently implemented (Win et al., 2002).
Incorporating information flow control during software development is tremendously challenging. First, the main issue with building real applications with information flow options is interfacing the new application with existing infrastructure that has not been designed with information flow in mind. Second, it is challenging to manage and assign security policies throughout the software development process. Third, the programmer is required to understand not only the algorithm, but also what the desired security policy is and how it can be formalized (Zdancewic, 2004).
Several authors have discussed the benefits of using AOP to implement security concerns (Viega et al., 2001; De Win et al., 2001). Aspect-oriented software development is relevant to all the key aspects of security, namely verification, validation, access control, integrity checks, non-repudiation and synchronization, as well as for supporting the administration and exception handling required for effective security. Aspect-oriented software design is flexible enough to accommodate the implementation of additional security features after the functional system has been developed.
**Overview of aspect-oriented programming:** AOP is the perfect complement to OOP in software engineering, providing more advanced modularization techniques than existing models for handling the scattering and tangling problems.
In OOP, objects have properties and perform intended actions; however, the process of applying abstracted functionality is carried out by the developer, which is more error-prone and hence less secure. In AOP, by contrast, this process is performed methodically, consistently and more precisely by the aspect weaver. In AOP the aspects have properties that can affect the performance of the whole system or of some of its components, such as the way a method is executed, synchronization, concurrency, resource allocation, exception handling, logging and so on.
The main feature of this technology is its ability to specify both the behavior of one specific functionality and how that functionality binds to other functional or non-functional concerns. An aspect is a modular unit of a crosscutting implementation, expressed in terms of pointcuts and advice, which together specify what code runs and when it runs relative to a pointcut, as shown in Fig. 1. During the execution of a program there are join points where calls to an aspect can be injected. A pointcut selects a set of join points where an aspect can be injected. An advice declaration specifies code that should run when the join points selected by the pointcut expression are reached; the advice code is executed when a particular join point is reached, either before or after execution proceeds. A before/after advice on a method execution defines code that must run before/after the particular method executes, while an around advice defines code that executes when the join point is reached and controls whether the computation at the join point is allowed to proceed (Kiczales et al., 2001).
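To make the terminology concrete, here is a minimal AspectJ-style sketch (all names, including Account and WithdrawLogging, are hypothetical and not taken from this paper): a pointcut picks out executions of one method, and before and around advice run relative to it.

```aspectj
// Hypothetical target class.
class Account {
    void withdraw(int amount) { /* core functionality */ }
}

// Illustrative aspect: a pointcut selects join points, advice runs at them.
public aspect WithdrawLogging {
    // pointcut: every execution of Account.withdraw, with any arguments
    pointcut withdrawal(): execution(* Account.withdraw(..));

    // before advice: runs before the intercepted method executes
    before(): withdrawal() {
        System.out.println("about to withdraw");
    }

    // around advice: controls whether the join point computation proceeds
    Object around(): withdrawal() {
        long t0 = System.nanoTime();
        Object result = proceed();   // let the original method run
        System.out.println("withdraw took " + (System.nanoTime() - t0) + " ns");
        return result;
    }
}
```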
The final application is created using both the functional code and its specific pointcut aspects. These two entities are combined into byte code at compile time by a special tool called a weaver.
**LITERATURE REVIEW**
A software system that manipulates and stores credentials such as passwords, identification documents, security clearances and tax information must prevent such information from being leaked during execution (Sabelfeld and Myers, 2003). Security mechanisms such as firewalls, anti-virus software and access control mechanisms are not adequate to protect against this type of information leakage. For example, determining whether a communication breaches confidentiality is beyond the scope of any firewall mechanism. Similarly with encryption, there is no guarantee that once data is decrypted its confidentiality will be maintained (Sabelfeld and Myers, 2003). An access control policy determines the right to access objects containing information (Huang et al., 2004). However, this type of control only governs the release of data and does not control how data ‘flows’ during the execution of each statement in a program. Numerous instances of information “leakage” arise not from defective access control, but from the lack of policies about information flow (Wand et al., 2004).
According to Mourad et al. (2008), AOP permits security hardening of applications by allowing the incorporation of supplementary security requirements to previously existing code that was designed to operate in a different security context. AOP also permits the integration of security into applications even when the source code is no longer available. For example, it may be required to apply tighter security requirements to legacy applications for which the source code may have been lost. In addition, with AOP, security can be
selectively applied to important areas of the application either explicitly or declaratively without having to change the code. The capability to declaratively apply security to specific areas of the code also makes it easy to apply multiple security features to a specific scheme.
The Bell-La Padula (Bell, 2005) model is a mathematical model that utilizes the principles of mathematical theory to describe access methods in computer systems. This model uses four access modes, namely, read, append, execute and write. A set of rules is defined and proved to possess certain characteristics such as preserving simple security, discretionary security and the *-property. The Bell-La Padula model has mostly been applied in military based systems where confidentiality of data is of the utmost priority.
De Win et al. (2001) defined three types of aspects, namely, identification, authentication and authorization, for access control policies in the aspect-oriented paradigm. The identification aspect is used to tag those entities that must be authenticated and is used as a container for identity information of the subject.
The authentication aspect passes authentication information to the access control mechanism. The authorization aspect checks access based on the identity information received from the authentication aspect.
Ramachandran et al. (2006) also addressed authentication and authorization within the aspect-oriented paradigm by providing a more general approach. However, they do not address information flows.
Kawauchi and Masuhara (2004) used an aspect to identify cross-site scripting. Their approach is predicated on validating the parameters by replacing special characters by quoted ones within input files submitted by users to web applications. They found that although sanitizing is a crosscutting concern, there is no possible way to define a pointcut that would be able to detect whether a string originated from an unauthorized source or contained unwanted information. Hence, they proposed a new pointcut called dflow that addresses the dataflow between join points as an extension to the AspectU language. Kawauchi and Masuhara (2004) do not address security classifications or their dataflow definitions and only deal with direct information flow. Furthermore, they do not comment on transmission of information between objects in a method. Since no advancement has been made exclusively in this area and since AOP and OOP are complementary, it is important to investigate information flow control and security policies from this viewpoint first.
Hermosillo et al. (2007) proposed a solution for web security against SQL injection and XSS attacks using AOP. Their work demonstrates the benefit of chaining security policies at runtime, with testing carried out against the two most common insecurities, SQL injection and XSS. The solution presented provides a security aspect for a web application server. The authors used aspects to validate the injected SQL in the web application server and also to validate XSS attacks in the user’s requests to the web application and from the web server to a database server. This allows the interception of all database accesses and their validation before potentially dangerous information is stored (Lee et al., 2012). A significant amount of work has been carried out in aspect-oriented security to make the process more systematic in terms of software design and development (De Win et al., 2002; De Win et al., 2001; Hermosillo et al., 2007).
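As a rough illustration of this style of runtime validation (not the authors’ actual code; the guard below is deliberately naive), an aspect can intercept statement execution and reject suspicious input before it reaches the database:

```aspectj
// Illustrative sketch only: a crude SQL-injection guard woven around
// java.sql.Statement. Real validation would be far more thorough.
public aspect SqlInjectionGuard {
    // pick out calls that execute a SQL string, binding the string argument
    pointcut dbAccess(String sql):
        call(* java.sql.Statement.execute*(String, ..)) && args(sql, ..);

    before(String sql): dbAccess(sql) {
        // naive check, for illustration only
        if (sql.contains("'") || sql.contains("--"))
            throw new SecurityException("suspicious SQL rejected: " + sql);
    }
}
```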
Simic and Walden (2013) designed a system to mitigate cross-site scripting and SQL injection vulnerabilities, the most common web application vulnerabilities, without demanding expensive and potentially hazardous modifications to the source code of web applications. At runtime, the application executes the protective aspect code to mitigate security issues when a block of vulnerable code is executed.
The subsequent discussion highlights the relevance of aspect-oriented technology in terms of implementing some of the key components of security such as access control, authentication, persistence, transaction and monitoring, exception handling and synchronization in software systems.
METHODOLOGY
An aspect-based approach for representing information flow control: Plugging non-functional aspects into functional aspects may open the door to several attacks, which may change the composition rules, modify the precedence of advices, produce unexpected behavior combinations, leave inputs unhandled and so on.
These security vulnerabilities can be taken advantage of by using Cross-Site Request Forgery (CSRF or XSRF). A user may not be aware that such an attack has occurred and may only find out about the attack after the damage has been done since no remedy is applied. To avoid such attacks, the approach of using same-origin policies could be applied.
To secure the functionality completely, one needs to perform internal dataflow analysis and ensure that untrusted input is sanitized before being used and that sensitive data is not released without authorization. Identifying illegal flows between objects requires an aspect based on the principle of a same-origin policy, in which pointcuts are utilized to identify flows between objects. This aspect observes objects and intercepts messages flowing between them.
Table 1: Injection of risky aspects

```
aspect CSRFRestriction {
    pointcut aspectInjection(Aspect aspect):
        execution(* Aspect.*(..)) && this(aspect);

    before(Aspect aspect): aspectInjection(aspect) {
        // reference to the same origin this aspect is deployed in
        Sop sop = ...;
        // reject the aspect if it does not belong to the same-origin policy
        if (!sop.belongs(aspect)) {
            throw new AccessControlException();
        }
    }
}
```
The aspect’s advice then determines, upon examining the given message and the classification of the sender and recipient, whether to allow the information flow. A same-origin policy helps to detect these and many other vulnerabilities by identifying data sources and sinks and how secure they are. Generally, all sources outside the component should be considered untrustworthy (e.g., system calls, third-party plug-ins and routines that can copy data from the user space or the network).
The same-origin policy approach applies the policy model at runtime, thereby allowing only aspects originating from the same location, defined as the combination of scheme, hostname and port number. The weaver supplements the woven software with logic to maintain the permission state of the software. As such, only the weaver is altered and no modification to the virtual machine or language semantics is needed.
Table 1 illustrates how the mechanism of injection can be used to disable a CSRF malicious aspect by intrinsically calling security features.
This policy protects against eventualities in which un-trusted aspects, which may originate from third party libraries (especially from CSRF), are incorrectly woven into the rest of the application code, as shown in Fig. 2. This policy has the means to detect and prevent a number of remote-user exploits, such as cross-site scripting, HTML injection, SQL injection and command injection (Simic and Walden, 2013). Severe security flaws would exist if there were no restrictions isolating the aspect from different origins and all the pointcuts of aspects between them were allowed. While it may be considered safe to perform the pointcut of aspects between different origins through hyperlinks, automated injection of aspects could potentially be dangerous.
Lazy counting-based splay tree: When crosscutting functionalities are injected into core functionalities, cross-site request forgery can give rise to many attacks. A CSRF attack can be used to modify the precedence of advices and produce unexpected combinations, fire unauthorized aspects, prevent aspects from advising a join point, or conduct fraudulent financial transactions. To organize injected aspects from the same origin, a counting-based splay tree is used. Since even an authorized aspect could be compromised, the same-origin policy is verified before injection into the core functionality, and when a new pointcut aspect enters the system the following actions are needed:
- The CSRF is checked using the same origin policy
- Legitimacy of the code is verified
If the pointcut is already present in the counting-based splay tree, the counters are updated accordingly and the pointcut aspect is injected into the core functionality.
Fig. 2: Approach using a same-origin policy
The most frequently accessed aspects are moved closer to the root to minimize search time. If it is a newly arrived pointcut aspect, it is inserted at the required position in the tree.
A counting-based self-adjusting search tree, similar to a splay tree, moves more frequently injected nodes closer to the root. After $M$ injections on $N$ items, $Q$ of which access some item $V_1$, an operation on $V_1$ passes through a path of length $O(\log(M/Q))$ while performing few, if any, rotations (Afek et al., 2012). In lazy splaying, in addition to the item’s value, each node $w$ has three counters: selfCnt, an estimate of the total number of operations performed on the item in $w$ (the number of find and insert operations on $w$); and rightCnt and leftCnt, estimates of the total number of operations performed on items in the right and left subtrees, respectively. Each find($i$) and inject($i$) operation increments the selfCnt of the node containing $i$. When node $i$ is found in the tree, all the nodes along the path from the root to $i$’s parent increase their rightCnt/leftCnt counter depending on whether $i$ is in their right or left subtree, respectively (Afek et al., 2012; Sleator and Tarjan, 1985; Bronson et al., 2010).
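A minimal Java sketch of this counter bookkeeping may help (the classes are illustrative, not the paper’s implementation; only the find path is shown): each node carries the three counters, and a lookup bumps selfCnt at the target and the directional counters along the search path.

```java
// Illustrative node for a lazy counting-based splay tree: each node keeps
// an estimate of the operations on itself and on its two subtrees.
final class Node {
    int key;
    Node left, right;
    int selfCnt;   // operations performed on this item
    int leftCnt;   // operations on items in the left subtree
    int rightCnt;  // operations on items in the right subtree

    Node(int key) { this.key = key; }
}

final class LazySplayTree {
    Node root;

    // find(key): every node on the path from the root to the target's parent
    // increments its directional counter; the target increments selfCnt.
    Node find(int key) {
        Node n = root;
        while (n != null && n.key != key) {
            if (key < n.key) { n.leftCnt++;  n = n.left;  }
            else             { n.rightCnt++; n = n.right; }
        }
        if (n != null) n.selfCnt++;
        return n;
    }
}
```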
In Fig. 3, a zig-zag is carried out if the total number of accesses to the node’s right subtree is greater than the total number of accesses to the node’s parent and its right subtree. If a zig-zag is not performed, then a zig is performed if the total number of accesses to the node and its left subtree is greater than the total number of accesses to the node’s parent and its right subtree.
After the rotation, the rightCnt and leftCnt counters are updated to represent the number of accesses in the new right/left subtrees, respectively.
**Table 2: Rebalancing algorithm**
```
Rebalance(Node parent, Node node) {
    nodePlusLeftCount = node.selfCnt + node.leftCnt;
    parentPlusRightCount = parent.selfCnt + parent.rightCnt;
    nodeRightCount = node.rightCnt;
    // decide whether to perform a zig-zag step
    if (nodeRightCount >= parentPlusRightCount) {
        Node grand = parent.parent;
        ZigZag(grand, parent, node, rightChild);
        parent.leftCnt = rightChild.rightCnt;
        node.rightCnt = rightChild.leftCnt;
        rightChild.rightCnt += parentPlusRightCount;
        rightChild.leftCnt += nodePlusLeftCount;
    }
    // otherwise decide whether to perform a zig step
    else if (nodePlusLeftCount > parentPlusRightCount) {
        Node grand = parent.parent;
        Zig(grand, parent, node, node.right);
        parent.leftCnt = node.rightCnt;
        node.rightCnt += parentPlusRightCount;
    }
}
```
Note that zig and zig-zag have symmetric mirror operations when the subtree at node $p$ leans to the right.
To avoid a chain of nodes in which every node is the left (or right) parent of its child, as arises for a descending/ascending insertion order, when a new node is inserted into the tree a re-balancing operation (Table 2) is performed from the new node up to the root. If the depth is greater than $2\log N$, splaying is performed up to the root using either a double rotation, when the total number of accesses to $x$'s right subtree is greater than or equal to the total number of accesses to $x$'s parent and its right subtree, or a single rotation, when the total number of accesses to $x$ and its left subtree is greater than the total number of accesses to $x$'s parent and its right subtree, as shown in Fig. 4.
In contrast to the traditional self-adjusting splay tree, in which each accessed item is moved closer to the root by a sequence of tree rotations, the counting-based splay tree performs rotations infrequently and mostly at the bottom of the tree. Therefore, it scales with the level of concurrency. The algorithms given in Tables 2 and 4 are used while injecting aspects.
Table 3: Examples of same-origin/cross-origin URLs

| URL1 | URL2 | Same origin? | Reason |
|------|------|--------------|--------|
| http://pec.edu | https://pec.edu | No | Different scheme |
| http://pec.edu | http://pec.edu:8080 | No | Different port |
| http://mail.pec.edu | http://chat.pec.edu | No | Different subdomain |
| http://pec.edu/usr1/index.php | http://pec.edu/usr2/index.php | Yes | Path is not part of the origin; only scheme, host and port are |
Table 4: Attempt injection algorithm
```
AttemptInjection(key, parent, node, height)
{
    if (node == null) {
        // empty tree: create the root node
        node = newNode(key, null);
        node.selfCnt++; node.leftCnt = 0; node.rightCnt = 0;
        return null;
    }
    if (key == node.key) {
        // pointcut already present: splay or rebalance lazily, then count
        if (height >= (2 * logSize))   // logSize = log2(N)
            splay(node);
        else
            Rebalance(parent, node);
        node.selfCnt++;
        return node.value;
    }
    // child in the direction of key
    child = node.child(key);
    if (child == null) {
        // not found: generate a new node and link it to node
        child = newNode(key, node);
        if (height >= (2 * logSize))
            splay(child);
        return null;
    }
    result = AttemptInjection(key, node, child, height + 1);
    if (child is node's left child)
        node.leftCnt++;
    else
        node.rightCnt++;
    return result;
}
```
Table 5: Aspects and their node labels

| Aspect | Node label |
|--------|------------|
| Exception handling | 1 |
| Persistence | 2 |
| Security | 3 |
| Monitoring | 4 |
| Logging | 5 |
| Synchronization | 6 |
| Transaction | 7 |
**Case study 1:** To construct lazy counting-based splay trees, aspects with the corresponding node labels given in Table 5 should be considered.
Injection of synchronization, persistence, transaction, exception, monitoring, logging and security aspects is shown in Fig. 5 to 8.
In an unbalanced tree, if the number of operations on node $c$ and on nodes in $c$'s left and right subtrees is greater than the number of operations on $p$ and on nodes in $p$'s right subtree, a zig-zag rotation is performed; in the notation of Table 2, if node.rightCnt >= parent.selfCnt + parent.rightCnt. In Fig. 6, node.rightCnt = 3 and parent.selfCnt + parent.rightCnt = 2, so a zig-zag rotation is performed. While performing the zig-zag rotation, the counter values are updated as follows:
- parent.leftCnt = rightChild.rightCnt
- node.rightCnt = rightChild.leftCnt
- rightChild.rightCnt += parentPlusRightCount
- rightChild.leftCnt += nodePlusLeftCount



Fig. 8: Injection of security aspect
Fig. 9: (a) During the zig-zag operations, (b) after the zig-zag rotations
Figure 7 shows the intermediate steps while performing a zig-zag rotation, while Fig. 8 illustrates how the most frequently accessed aspects are moved closer to the root node thereby leaving the tree in a balanced state. If the tree is in a balanced state, the cost of injecting an aspect is proportional to the depth of the node (Fig. 9).
The cost to splay node \( x \) at depth \( d \) is defined as follows. An unsuccessful search requires extra comparisons because it terminates at fictitious (external) nodes, whereas a successful search terminates at an internal node. If depth \( d \) is odd, we need to perform \( \lfloor d/2 \rfloor \) zig-zig or zig-zag operations and \( d \) tree-rotation operations. Here depth \( d = 3 \), and thus \( \lfloor 3/2 \rfloor = 1 \) zig-zag operation and 3 tree-rotation operations are performed. The cost to splay node \( x \) at depth \( d \) therefore depends on the \( d \) tree-rotation operations involved.
The amortized cost to splay node \( x \) is \( d + \Delta r(T) \), where \( \Delta r(T) = r'(T) - r(T) \) is the change in balance:

Before splay (M): \( r(T) = \lg(15) + \lg(11) + \lg(7) + 4\lg(3) \)

After splay (M): \( r'(T) = \lg(15) + 2\lg(7) + 4\lg(3) \)
The amortized cost of splay (M) is 2.652076697.
Case study 2: To construct lazy counting-based splay trees, we consider aspects with the corresponding node labels given in Table 6.
Figure 10 shows the update of the counters and their organization after injecting aspects such as synchronization, monitoring, logging, persistence, security and exception handling, which all come from the same origin. As before, the exception handling aspect is injected after verifying its origin using same-origin policy techniques.
Table 6: Aspects and their node labels

| Aspect | Node label |
|--------|------------|
| Exception handling | 3 |
| Persistence | 4 |
| Security | 7 |
| Monitoring | 2 |
| Logging | 6 |
| Synchronization | 5 |
| Transaction processing | 1 |
If it indeed comes from the same origin, the exception handling node’s selfCnt value is incremented by one and the rightCnt/leftCnt values on the path from the exception handling node’s parent to the root are incremented by one, depending on whether the exception handling node lies in the right or left subtree. Consequently, the synchronization aspect’s rightCnt, the logging aspect’s leftCnt and the persistence aspect’s leftCnt values are all incremented by one. After updating the counter values, either semi-splaying or rebalancing operations are performed if balancing is needed. The same procedure is applied when injecting the security, exception and persistence aspects. When injecting a persistence aspect, the tree becomes unbalanced, as shown in Fig. 11.
In an unbalanced tree, if the number of operations on node $n$ and on nodes in $n$'s left subtree is greater than the number of operations on $p$ and on nodes in $p$'s right subtree, a zig rotation is performed; that is, if node.selfCnt + node.leftCnt > parent.selfCnt + parent.rightCnt. From Fig. 11, node.selfCnt + node.leftCnt = 2 + 3 = 5 and parent.selfCnt + parent.rightCnt = 1 + 2 = 3, so a zig rotation is performed. While performing the zig rotation, the counter values are updated as follows:
- parent.leftCnt = node.rightCnt
- node.rightCnt += parentPlusRightCount
After performing the zig rotation the counter values are updated as shown in Fig. 12.
The cost to splay node $x$ at depth $d$ is defined as follows. If depth $d$ is even, we need to perform $(d-1)/2$ zig-zig or zig-zag rotations, or one zig operation, and $d$ tree-rotation operations. Here depth $d = 2$; thus one zig operation and two tree-rotation operations are performed:
Before splay (P): \( r(T) = \lg(13) + \lg(9) + \lg(5) + 3\lg(3) \)

After splay (P): \( r'(T) = \lg(13) + \lg(9) + \lg(5) + 3\lg(3) \)
The amortized cost of splay (P) is 2.0.
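Written out with the formula above, the arithmetic is immediate, since splaying P leaves the rank sum unchanged:

\[
d + \big(r'(T) - r(T)\big) = 2 + 0 = 2.0
\]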
Applying the lazy counting-based splay tree: Good security in a system reduces the chance of malicious or unintended actions outside the designed usage affecting the system and prevents the discovery or loss of information. Improving security can also boost the reliability of the system by reducing the chances of a successful attack that damages system operation. Securing a system implies defending its resources and preventing illegitimate access to or alteration of its information. Composing a crosscutting concern into a requirements model may result in conflicts that have to be resolved, since crosscutting concerns can create contradictory situations in a system (Tables 7 and 8).
Table 7: Contribution of response time

| Name | Response time |
|------|---------------|
| Description | Period of time in which the system must respond to a service |
| Priority | Very important |
| Decomposition | None |
| Contribution | (-) to security and (+) to multi-access |
During the composition of crosscutting concerns with functional concerns, conflicting activities may occur. For example, response time and security are two crosscutting concerns that affect a system. When trying to compose these concerns, a conflict will occur, since both crosscutting concerns contribute negatively to each other. Thus, a trade-off has to be found in terms of which crosscutting concern should have the highest priority and be composed first.
This contribution can be positive or negative. If two (or more) crosscutting concerns contribute negatively to each other, a conflicting situation occurs if and only if they influence the same set of requirements. To resolve these kinds of conflicts, which affect the whole system or parts thereof, a trade-off is discussed with the stakeholders. In this situation, we propose applying a lazy counting-based splay tree: because it requires less time for operations such as lookup() and attemptInjection(), a trade-off analysis between crosscutting concerns like response time and security becomes unnecessary.
**METRIC EVALUATION AND DISCUSSION**
The results are promising and show good potential for lazy counting-based splaying; comparing the overall performance of ordinary splay trees with lazy counting-based splay trees yields interesting results for both. Both splay trees and counting-based splay trees use implicit caching by bringing the aspect to the root element, taking advantage of locality in incoming lookup requests for the aspect. Locality in this context refers to looking up the same aspect several times. A stream of requests exhibits no locality if every aspect is equally likely to be injected at each point. For our applications, locality does exist, since aspects tend to be injected repeatedly (Fig. 13).
In lazy counter-based splay trees, because the splay rotations involved move the most frequently accessed aspects closer to the root, the depth of the tree is reduced and hence the delay due to rotations tends to be stable (Fig. 14).
Fig. 14: Response time versus number of nodes
Fig. 15: Splay ratio versus number of rotations
In counter-based splay trees, over a period of time as the number of most frequently accessed nodes increases, the response time is reduced since most frequently accessed aspects are moved closer to the root (Fig. 15).
Comparing the splay ratio of both trees using the formula splay ratio = number of operations / number of splays, the values increase in lazy counter-based splay trees because the number of splay operations is largely reduced by the counter-based technique.
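Written out, and with hypothetical workload numbers purely for illustration:

\[
\text{splay ratio} = \frac{\text{number of operations}}{\text{number of splays}}
\]

For instance, 1,000 operations that trigger 200 splays give a ratio of 5; if lazy counting reduces the splays to 50, the ratio rises to 20.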
CONCLUSION AND RECOMMENDATIONS
Conflicts between software quality attributes are common. Poor quality eventually affects cost and schedule, because the software requires fine-tuning, recoding or even redesign to meet the original requirements. Design flaws and policy errors or bugs are some of the sources of security flaws in software. This research study checked whether pointcut aspects belong to the same origin and applied the lazy counter-based splaying technique to reduce the time complexity of injecting legitimate pointcut aspects into the core functionality. The approach can be used effectively when performing trade-off analysis among aspects like security and response time. Security is a primary concern in software development and has generated a great deal of awareness amongst experts. Furthermore, a few web evangelists argue that the same-origin policy is too strict, blocking genuine third-party aspects that are essential to an application. As future work, stack-based methods will be used to detect fraudulently injected aspects even if they come from the same origin.
CISC327 - Software Quality Assurance
Lecture 19
Regression Testing
• Today we look at regression testing
• Purpose of regression testing
• Method
– Establishing a regression test set
– Maintaining a regression test set
– Observable artifacts
Regression Testing
• regression:
1. a return to a former or less developed state
Regression Testing
• Purpose
– Ensure that existing functionality and behaviour is not broken by changes in new versions
– Ensure that intended changes to functionality and behaviour are actually observed
– Catch accidental or unintentional changes in functionality and behaviour before deployment, reducing costs
Regression Testing
• Method
– Maintain a regression set of test inputs designed to exhibit existing functionality and behaviour
– Choose a set of observable artifacts of computation that demonstrate desired aspects of functionality and behaviour (not just output!)
– Maintain a history of the observable artifacts for each version of the software
– Compare observable artifacts of each new version of software to previous version to ensure that differences are intentional
Regression Testing
• Regression Series
– It's really called regression testing because we incrementally compare the results (functionality and behaviour) of tests for each new version of the software only to the previous version
– And that one was compared to the one before it, and so on, forming a regression series based on the original software
– It's a sort of inductive proof that we still have the behaviour we want to maintain
Regression Testing
• It's also called regression testing because to keep the total number of tests to be run at a practical level, we replace old tests with new ones to "cover" the same cases but to include testing of new/changed functionality.
• This sequence of replaced tests covering previous tests also forms a (more complex) regression series of test cases based on the original test set:
– Where old tests are retired from the set as new tests are added to "cover" them.
• The reasoning that the tests have not lost anything is also an induction:
– New tests cover retired old tests, which in turn cover previous older tests, and so on, back to the original validated test set.
Establishing a Baseline
• Begin with the original *functionality* test suite, plus early *failure* tests (if any), plus first *operational* tests
• Validate that these tests all run correctly
• Choose the set of *observable artifacts* to be tracked
– These should characterize the functionality and behaviour we want to maintain across versions (*more* on this later)
• Run these first tests and *save* the observable artifacts in an easy to compare form (*more* on this later also)
Adding and Retiring Tests
- Whenever functionality is added or changed in the software, **add and validate** new tests for the new or changed functionality, and retire the tests for the replaced old functionality.
- Some practitioners retire **failure** tests after a fixed number of new versions do not exhibit the failure, as a way to keep the number of failure tests from growing too large.
Adding and Retiring Tests
- **Operational** tests must also be maintained, and retired or replaced when they no longer reflect current functionality.
As software grows, the test set grows out of control, unless we retire old tests covered by new ones.
Choosing Observable Artifacts
• Observable artifacts include at least the direct outputs of the software, but also other indicators of behaviour.
• Because many programs have multiple kinds, streams, or files of output, we normally include all of them together in the observable artifacts.
• Because subtle unintended changes in behaviour may not be immediately visible in direct test output, we normally turn on all debugging, tracing, and instrumenting flags the software may have when running regression tests, in order to have more detail in observable artifacts.
Choosing Observable Artifacts
• Because performance is part of the user-visible behaviour of software, we usually measure time and space when running regression tests
– Add these to the observable artifacts in order to observe unintended changes in performance
• Most systems provide some kind of external performance measuring tools
– E.g., Unix "time" command, which can be used to give us this information
• To allow easy differencing, we normally translate all observable artifacts to text in the stored test results
Combining Observable Artifacts
• To allow easy differencing and archival, the entire set of observable artifacts resulting from running all of the tests in the entire set of regression tests is often combined into a single text file.
• This file includes the direct and indirect output, tracing, and debugging information, time and space statistics, and all other observable artifacts resulting from running each test, all concatenated together in a fixed order into one text file.
• This file forms a kind of behavioural signature for the version of the software, storing every observable characteristic of behaviour on the test set in one file (a sketch of building one follows).
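A minimal Java sketch of assembling such a file (the directory layout and file names are assumptions for illustration): collect each test's output in a fixed order and concatenate it into one signature file.

```java
import java.nio.file.*;
import java.util.*;

// Illustrative signature builder: concatenates per-test outputs, in a
// fixed (sorted) order, into one text file for the version under test.
public class BuildSignature {
    public static void main(String[] args) throws Exception {
        List<Path> outputs = new ArrayList<>();
        try (DirectoryStream<Path> dir =
                 Files.newDirectoryStream(Path.of("testout"), "*.out")) {
            dir.forEach(outputs::add);
        }
        Collections.sort(outputs);                 // fixed order makes diffs stable
        StringBuilder signature = new StringBuilder();
        for (Path p : outputs) {
            signature.append("===== ").append(p.getFileName()).append(" =====\n");
            signature.append(Files.readString(p)).append('\n');
        }
        Files.writeString(Path.of("signature-new.txt"), signature.toString());
    }
}
```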
Comparing Signatures
• The actual **regression** aspect of the test is implemented by looking at the difference between the signature files for the **previous version** and the **new version**
• If we're careful, this difference can be implemented by simple tools such as Unix's "**diff**"
```
diff -b OldSignatureFile NewSignatureFile
314c314
< 0.3u 0.0s 0:00 97% 359+781k 0+0io 0pf+0w
---
> 0.7u 0.0s 0:01 95% 361+770k 0+0io 0pf+0w
2721c2721,2722
< End of run - goodbye!
---
> *** Error: invalid command 'create'
> End of run - goodbye!
```
Normalizing Signatures
• To allow easy differencing, it is important that irrelevant or intentional differences between versions be factored out.
• Since the signature file is all text, this can be automated using editor scripts to normalize signature files to reduce or eliminate non-behavioural or intended differences.
– Example: If the previous version of the software did all output in upper case and the new version (intentionally) outputs mixed case instead, the new signature can be normalized to upper case before differencing (a sketch follows).
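A minimal Java sketch of that normalization (the program itself is hypothetical; the signature file names follow the TXL example later in this lecture): fold the new signature to upper case so the intended case change disappears from the diff.

```java
import java.nio.file.*;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative normalization: factor out an intended case change
// before differencing the two signature files.
public class NormalizeSignature {
    public static void main(String[] args) throws Exception {
        Path in  = Path.of("NTAout2.42");
        Path out = Path.of("NTAout2.42.normalized");
        try (Stream<String> lines = Files.lines(in)) {
            Files.writeString(out, lines.map(String::toUpperCase)
                                        .collect(Collectors.joining("\n")));
        }
        // then: diff NTAout2.41.normalized NTAout2.42.normalized
    }
}
```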
Establishing the Baseline
- The **baseline** is the signature file of the version used to establish regression testing (the "original" version)
- The baseline signature **must** be carefully examined line by line **by hand** to ensure that every artifact is as it should be (a lot of work)
- Once established, only **differences** need to be examined for future versions
Regression Test Harness
• The regression test harness is the implementation of a procedure for automating the running, collection of observable artifacts, and differencing of versions for regression testing a product
• Should be developed such that it adapts automatically to addition or deletion of test cases or individual tests
• Again, requires care in planning and implementation, but once established, requires very little work
Regression Test Signatures
• The output of the entire run of the regression test script, including all test output, diagnostic output, and time and memory resource usage is saved in a single (large) signature file named for the version of the program being tested.
• The signature file is diff'ed against the previous version's signature file to check for differences in behaviour, and saved for comparison with the next version.
Example: Service Numbers
• I could change the requirements about length of service numbers
• Make “new” tests by adjusting old tests to meet the new requirements
• Existing tests that should pass would become failure tests
• Some failure tests might become tests that should pass
Regression Testing: TXL
• The **TXL interpreter** is a software product that implements the **TXL** programming language (http://www.txl.ca)
• It takes as input a TXL program “**foo.Txl**” and an input file to the program “**bar.foo**”, and compiles and runs the program on the input
• It produces two output streams:
1. compiler and run time error messages on the **standard error** stream,
2. output of the program on the **standard output** stream
The TXL Regression Tests
• The regression tests for the TXL interpreter are organized into one large directory in which subdirectories contain test cases.
• Test case directories are named to indicate the kind and source of the test case they cover (functionality tests, failure tests or operational tests).
• Each test case directory contains a number of test inputs, each named beginning with the letters “eg” (standing for “example”) to make them easy to find automatically, as well as a README file explaining the original source and intentions of the test case.
The TXL Regression Tests
Regression Test Directory
```
drwxr--r-- 4 cordy penguin  512 Apr 01 17:11 ASDT/
drwxr--r-- 3 cordy penguin  512 Apr 01 17:11 ASDT2/
drwxr--r-- 2 cordy penguin  512 Nov 07  1997 ASTI-issue/
drwxr--r-- 2 cordy penguin  512 Nov 27  1997 ASTI_issue/
drwxr--r-- 3 cordy penguin  512 Apr 01 17:11 Abacus/
drwxr--r-- 2 cordy penguin  512 Dec 19  1996 Analyzer_Bug/
drwxr--r-- 2 cordy penguin  512 Apr 13  1996 AndCondition/
drwxr--r-- 2 cordy penguin  512 Jun 02  1996 Andy/
drwxr--r-- 2 cordy penguin  512 Apr 29  1997 Apr97Bugs/
drwxr--r-- 2 cordy penguin  512 Apr 13  1996 Backtrack/
drwxr--r-- 3 cordy penguin  512 Apr 13  1996 Booster/
drwxr--r-- 2 cordy penguin  512 Jun 24  1996 C2T/

./Abacus:
total 11
-rw-r--r-- 1 cordy penguin  898 Jun 30  1993 README
drwxr--r-- 2 cordy penguin  512 Dec 23  1994 Txl/
-rw-r--r-- 1 cordy penguin  487 Jun 30  1993 eg.Compound
-rw-r--r-- 1 cordy penguin   34 Jun 30  1993 eg1.Cascade
-rw-r--r-- 1 cordy penguin  375 Jun 30  1993 eg2.Cascade
-rw-r--r-- 1 cordy penguin 2102 Oct 16  1997 txltrace.out
```
Running the TXL Regression Tests
**TXL Regression Test Harness**
- The TXL regression tests are run by a C-shell script that walks through each subdirectory (test case) in the regression test directory, and runs each test input through TXL
```csh
#!/bin/csh
# NewTestAll - the TXL regression script
foreach i (*)
    if ( -d $i ) then
        echo "===== $i ====="
        cd $i
        foreach j (eg*.*)
            time newtxl -v $j
        end
        cd ..
    endif
end
```
In the script: each subdirectory is a test case directory; a separator message for each test case is written into the signature; each eg* input in the test case directory is run; the -v flag turns on all verbose diagnostic messages; and running under the Unix "time" command measures time and memory use.
Running the TXL Regression Tests
TXL Regression Test Signatures
- The output of the entire run of the regression test script, including all test output, diagnostic output, and time and memory resource usage, is saved in a single (large) signature file named for the version of TXL being tested.
- The signature file is diff'ed against the previous version's signature file to check for differences in behaviour, and saved for comparison with the next version.
```
# Run TXL regression tests, putting all direct and diagnostic
# output in the signature file
NewTestAll >& NTAout2.42

# Compare to the previous version's signature file
diff NTAout2.41 NTAout2.42
```
Example TXL Regression Signature
```
--- Abacus ---
TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
Bootstrapping TXL ...
... used 348 trees and 229 kids.
Scanning the TXL program Txl/Compound.Txl
Parsing the TXL program
... used 1445 trees and 2270 kids.
Making the object language grammar tree
TXL ERROR : (Warning) Declaration of define 'choice'
previous declaration
... used 72 trees and 49 kids.
Making the rule table
... used 252 trees and 261 kids.
Scanning the input file eg.Compound
Parsing the input file
... used 158 trees and 266 kids.
Applying the transformation rules
Forced to copy 16 local vars (27%)
... used 93 trees and 158 kids.
Generating transformed output
Used a total of 2368 trees (0%) and 3233 kids (0%).
True = true ! True + setFalse ? False + setTrue ? True
False = false ! False + setTrue ? True + setFalse ? False
Negate = false ? Negate1
Negate1 = true ? Negate2 + setTrue ! nil
Negate2 = setFalse ! nil
And = false ? And1
And1 = true ? And2 + setFalse ! nil
And2 = true ? setTrue ! nil + false ? setFalse ! nil
[True & Negate]
0.0u 0.0s 0:00 109% 150+103k 0+0io 0pf+0w
```
TXL Regression Differencing
```
2c2
< TXL Pro-LS 2.4d8 (9.4.98) Copyright 1995-1998 Legasys Corp.
---
> TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
314c314
< 0.3u 0.0s 0:00 97% 359+781k 0+0io 0pf+0w
---
> 0.3u 0.0s 0:00 83% 350+773k 0+0io 0pf+0w
316c316
< TXL Pro-LS 2.4d8 (9.4.98) Copyright 1995-1998 Legasys Corp.
---
> TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
2970,2971c2970,2971
< 1.1u 0.1s 0:01 100% 400+1395k 0+0io 0pf+0w
< TXL Pro-LS 2.4d8 (9.4.98) Copyright 1995-1998 Legasys Corp.
---
> 1.2u 0.1s 0:01 98% 395+1369k 0+0io 1pf+0w
> TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
7039,7040c7039,7040
< 1.7u 0.1s 0:01 100% 413+1289k 0+0io 0pf+0w
< TXL Pro-LS 2.4d8 (9.4.98) Copyright 1995-1998 Legasys Corp.
---
> 1.7u 0.1s 0:01 100% 410+1275k 0+0io 0pf+0w
> TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
9787,9788c9787,9788
< 1.8u 0.1s 0:01 100% 413+1427k 0+0io 0pf+0w
< TXL Pro-LS 2.4d8 (9.4.98) Copyright 1995-1998 Legasys Corp.
---
> 1.7u 0.1s 0:01 98% 410+1431k 0+0io 0pf+0w
> TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
...
```
TXL Regression Differencing
```
32514c32532
< Preprocessor directives 58
---
> Preprocessor directives 58
32516c32534
< Declarations 91
---
> Declarations 91
...
15010c15010
< TXL Pro-LS 2.4d2 (9.12.97) Copyright 1995-1997 Legasys Corp.
---
> TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
27888c27888
< 8.1u 1.1s 0:09 99% 372+6375k 0+0io 11pf+0w
---
> 7.7u 0.4s 0:08 99% 421+6965k 0+0io 0pf+0w
27891c27891
< TXL Pro-LS 2.4d2 (9.12.97) Copyright 1995-1997 Legasys Corp.
---
> TXL Pro-LS 2.5d3b (22.7.98) Copyright 1995-1998 Legasys Corp.
27942c27942
< ... used 425 trees and 519 kids.
---
> ... used 423 trees and 519 kids.
41066c41066
< Used a total of 490839 trees (16%) and 998275 kids (22%).
---
> Used a total of 490837 trees (16%) and 998275 kids (22%).
...
```
Notes on the differences above: an output spacing difference (a bug!); a significant performance difference (but an improvement); and an internal diagnostic difference.
Regression Testing
• Advantages
– Previous functionality never accidentally lost
– Previously fixed bugs never reappear in production
– Virtually all accidental bugs are caught before deployment
– Virtually no unintentional changes in behaviour slip into production
– Users observe very high level of quality
Regression Testing
• Disadvantages
– Regression set must be maintained with a high degree of discipline and care
• At least as carefully as the software itself
– Establishing the baseline and regression testing harness requires significant effort
• But it pays off in ease of use later
• Bottom Line
– High-quality software shops all do it, because the difference in confidence and observed quality is easily worth it
Summary
• **Regression Testing**
– Ensure that existing functionality and behaviour is not broken by changes in new versions
– Maintain *regression set* of tests designed to exhibit existing functionality and behaviour
– Compare *observable artifacts* of each new version of software to previous version to ensure that differences are intentional
• **Then**
– Some material on inspection (enough for A4)
– Security
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
95e2dc4d5a3a0c9eb48f356681fedfbfbebcef1e
|
MODIFIED BINARY SEARCH TREES
by
Alon Itai* and Michael Rodeh**
Technical Report #182
August 1980
* Dept. of Computer Science, Technion-IIT, Haifa, Israel
** IBM Israel Scientific Center, Technion City, Haifa, Israel.
ABSTRACT
A degenerate node in a binary search tree is a node with only one child. It is shown that on the average, an n-node binary search tree has \((n+1)/3\) degenerate nodes. Suppose the insertion algorithm is modified as follows: Whenever a new node is to be inserted as a grandchild of a degenerate node, the new node, its parent and its grandparent are replaced by the full 3-node binary tree. This heuristic, which is very easy to implement, has the effect of reducing the average number of degenerate nodes to \((n+1)/7\) and the average height by a factor of \(6/7\), to \(\approx 1.188 \log_2 n\), which is only 18.8% more than the optimum.
1. INTRODUCTION
Following Knuth [K1], a binary tree is a set of nodes which is either empty, or consists of a root and two disjoint binary trees, called the left and right subtrees of the root. A leaf is a node both subtrees of which are empty. A node is degenerate if it has exactly one nonempty subtree and is complete if both its subtrees are nonempty.
Binary trees are very often used as a data structure to implement the operations search, insert and sometimes delete. Given \( n \) keys and an \( n \)-node binary tree, each key is associated with a distinct node. The inorder of the nodes corresponds to the natural order of the keys.
To search for a key \( k \), it is first compared to the key stored at the root. If they are equal then \( k \) is found; if \( k \) is smaller, then the search continues in the left subtree, otherwise it turns to the right subtree. If an empty tree is reached then the search is unsuccessful - \( k \) does not appear in the tree.
Each empty subtree corresponds to the interval of values between the keys of its inorder predecessor and successor. To insert a new key \( k \), first find the empty subtree corresponding to the interval of values to which \( k \) belongs. Then replace that subtree by a single node tree whose information field contains \( k \). The insertion time is dominated by the search time. In Section 2 several implementations of binary search trees are discussed. A method to delete keys is also given.
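The search and insertion procedures just described are easy to code. The following Python sketch is our illustration, not code from the report; empty subtrees are represented by None.

```python
class Node:
    """A binary-search-tree node with an information field and two
    pointer fields (lchild/rchild in the report's terminology)."""
    def __init__(self, key):
        self.key = key
        self.left = None     # empty left subtree
        self.right = None    # empty right subtree

def search(root, k):
    """Return the node holding k, or None if the search is unsuccessful."""
    while root is not None:
        if k == root.key:
            return root
        root = root.left if k < root.key else root.right
    return None

def insert(root, k):
    """Insert k by replacing the empty subtree whose interval contains k."""
    if root is None:
        return Node(k)
    if k < root.key:
        root.left = insert(root.left, k)
    elif k > root.key:
        root.right = insert(root.right, k)
    return root
```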
In the sequel, random variables are denoted by capital letters. Let the keys to be inserted \( K_1, K_2, \ldots, K_n \) be random variables. \( T_n \) - the tree obtained after \( n \) insertions into an initially empty tree - is also a random variable. The distribution of \( T_n \) depends on the \( K_i \)'s. Several probabilistic models will be discussed in Section 3, where it is shown that all these models yield the same distribution of trees.
As shown in [K2] the average number of nodes visited during a single unsuccessful search of $T_n$ is $2\ln(n+1) - 0.84556 + O(1/n) \approx 1.386 \log_2 n + O(1)$. In Section 4 we present, for completeness, an alternative derivation. It is also shown that both the average number of leaves and the average number of degenerate nodes is equal to $(n+1)/3$.
Consider the following heuristic to reduce the number of degenerate nodes: On insertion, if the parent and grandparent of a new node are degenerate, replace the 3-node subtree rooted at the grandparent by the full 3-node tree. See Figure 1 for an example.
Figure 1
The trees obtained by modified insertion are still ordinary search trees; only their distribution differs. To see the difference, notice that some n-node binary trees cannot be obtained by modified insertions.
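One way to implement the heuristic (our sketch, not the authors' code) is to perform an ordinary recursive insertion and, on the way back up, replace any chain of two degenerate nodes ending in the new leaf by the full 3-node tree on the same three keys:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def only_child(v):
    """The single child of a degenerate node v; None if v is a leaf
    or a complete node."""
    if v.left is None:
        return v.right
    if v.right is None:
        return v.left
    return None

def modified_insert(v, k):
    if v is None:
        return Node(k)
    if k < v.key:
        v.left = modified_insert(v.left, k)
    elif k > v.key:
        v.right = modified_insert(v.right, k)
    return _fix(v)

def _fix(v):
    """If v, its single child u, and u's single child w form a 3-node
    chain with w a leaf, rebuild them as the full 3-node tree."""
    u = only_child(v)
    w = only_child(u) if u is not None else None
    if w is None or w.left is not None or w.right is not None:
        return v
    a, b, c = sorted([v.key, u.key, w.key])
    top = Node(b)
    top.left, top.right = Node(a), Node(c)
    return top
```

By Lemma 2 below, such a chain can only arise at the point of insertion, so the rebalancing fires at most once per insertion.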
Let $M_n$ be a binary search tree obtained by $n$ modified insertions. In Section 5 it is shown that the average number of nodes visited by an unsuccessful search in $M_n$ is reduced by a factor of $6/7$ in comparison to regular binary search trees (to $1.188 \log_2(n+1) - 0.54109 + O(1/n)$). This compares very well to the optimum binary search tree, whose average search time is $\log_2 n + O(1)$. It is also shown that the average number of degenerate nodes is only $(n+1)/7$ and that the average number of leaves is $3(n+1)/7$.
We see that from a probabilistic point of view, modified insertions are worthwhile both in time and in space complexity.
As for the worst case, the height of a modified binary search tree is bounded by \( n/2 \), while the maximum height of a regular binary search tree is \( n-1 \). The maximal number of degenerate nodes is shown to be \( (n+1)/3 \), while in regular binary trees it may be as large as \( n-1 \).
2. IMPLEMENTING BINARY TREES
Binary search trees are dynamic by nature, therefore, they are usually implemented by nodes which are allocated and freed at run time. Each node has three fields: an information field to store the key associated with the node and two pointer fields (\( \text{lchild} \) and \( \text{rchild} \)) which point to the left and right subtrees. If a subtree is empty, the corresponding pointer is \( \text{nil} \).
On insertion, a new leaf \( v \) is allocated, it replaces an empty subtree of a node \( p \) whose corresponding pointer is now made to point at \( v \). Deletions are somewhat trickier: let \( v \) be a node to be deleted and let \( p \) be its parent. If \( v \) is leaf, the pointer of \( p \) which pointed to \( v \) is set to \( \text{nil} \). If \( v \) is degenerate, \( p \) is made to point at \( v \)'s child. Finally, to delete a complete node, find the node \( v' \) which immediately succeeds \( v \) in inorder, and exchange the information fields of \( v \) and \( v' \). The node \( v' \) has at most one non-empty subtree, and thus can be deleted as discussed above. In all cases, as a result of deletion, some node is "freed" and may be reused.
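A sketch of these deletion cases (our illustration; it operates on the Node objects of the earlier insertion sketch):

```python
def delete(root, k):
    """Delete key k and return the root of the resulting subtree."""
    if root is None:
        return None
    if k < root.key:
        root.left = delete(root.left, k)
    elif k > root.key:
        root.right = delete(root.right, k)
    elif root.left is None:      # leaf or degenerate: splice the node out
        return root.right
    elif root.right is None:
        return root.left
    else:                        # complete node: exchange with the
        succ = root.right        # inorder successor, then delete it
        while succ.left is not None:
            succ = succ.left
        root.key = succ.key
        root.right = delete(root.right, succ.key)
    return root
```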
If insertions and deletions followed a last-in-first-out discipline, then a stack could be used to allocate nodes: a deletion would simply remove the top node. Note that we must still be able to inspect nodes other than the top one and to change pointer fields.
Even when the last-in-first-out discipline does not hold, the nodes can be allocated on a stack: to remove a node from the middle of the stack, exchange its contents with those of the top node; this involves finding the parent of the top node and updating its pointer. Thus, the last-in-first-out allocation discipline is retained at the expense of an extra search.
An n-node binary tree contains 2n pointer fields, n+1 of which are null. Denoting the number of leaves by \( \ell \) and the number of degenerate nodes by \( d \), we have:
\[ n+1 = d + 2\ell. \]
It is wasteful in space to allocate the same amount of space both to leaves and to internal nodes. Thus, it might be advantageous to allocate two types of nodes: one for internal nodes and one for leaves. Each type of nodes can be made to adhere to the last-in-first-out allocation discipline. Thus, we have two stacks which in turn can be implemented by two consecutive chunks of memory, one starting from low memory and growing upwards and the other starting from high memory and growing downwards. Since the memory allocated to each stack is consecutive, given the address of a node it is easy to decide to which stack it belongs and consequently whether the node is a leaf or an internal node.
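The two-ended layout is easy to prototype. In the following Python sketch (our illustration, with invented names) internal nodes are allocated from the low end of a fixed pool and leaves from the high end, so a single address comparison tells the two kinds apart; freeing by the swap-with-top trick is omitted for brevity.

```python
class TwoEndedPool:
    """A fixed pool: internal nodes grow upward from index 0, leaves
    grow downward from the top, as described above."""
    def __init__(self, size):
        self.cells = [None] * size
        self.low = 0          # next free slot for internal nodes
        self.high = size      # one past the last allocated leaf slot

    def alloc_internal(self, record):
        assert self.low < self.high, "pool exhausted"
        self.cells[self.low] = record
        self.low += 1
        return self.low - 1

    def alloc_leaf(self, record):
        assert self.low < self.high, "pool exhausted"
        self.high -= 1
        self.cells[self.high] = record
        return self.high

    def is_leaf(self, addr):
        return addr >= self.high
```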
3. PROBABILISTIC MODELS FOR ORDERING PROBLEMS
When conducting a probabilistic analysis we must make some assumptions on the distribution of the data. The weakest assumptions (thus the most general model) are those made only on the relative order of the data, not on their specific values.
Many probabilistic analyses assume a uniform distribution model, i.e., the inputs $X_1, X_2, \ldots$ are uniformly distributed random variables over some domain, usually the interval $[0,1]$. In this model, assumptions are made on the values of the data.
Yao [Y] discusses symmetric models, in which all permutations of the input data are assumed to be equiprobable, thus the density of the pair $(0.5, 0.2)$ is equal to that of $(0.2, 0.5)$. The uniform distribution models are contained in the symmetric models (i.e., they satisfy the requirements for a model to be symmetric). Again, in symmetric models assumptions are made on specific values of the problem domain.
Other data dependent models discussed by Yao [Y] are the random insertion models: Given $n$ distinct inputs $X_1, \ldots, X_n$, let $\pi$ be the permutation such that $X_{\pi_1} < \ldots < X_{\pi_n}$. Then the next input $X_{n+1}$ is chosen such that the following $n+1$ events:
\[
X_{n+1} < X_{\pi_1}
\]
\[
X_{\pi_i} < X_{n+1} < X_{\pi_{i+1}} \quad i = 1, \ldots, n-1
\]
\[
X_{\pi_n} < X_{n+1}
\]
have equal probability. Note that random insertion models are not symmetric: The density of $(0.5, 0.2)$ is 1 while the density of $(0.2, 0.5)$ is 0.8.
We wish to define a more general family of models in which no assumption is made on specific values. To this end, let $(X_1, \ldots, X_n)$ be a sequence of $n$ distinct numbers. The ordering of $(X_1, \ldots, X_n)$ is the unique permutation $\pi$ which sorts the sequence, i.e., $X_{\pi_1} < X_{\pi_2} < \ldots < X_{\pi_n}$. Thus, $\pi = (3,1,2)$ is the ordering of the sequence $(0.7, 0.1, 0.4)$ as well as of the sequence $(0.9, 0.7, 0.8)$. A model is an equiprobable ordering model if for all permutations $\pi$ there is equal probability that $\pi$ is the ordering of a random sequence $(X_1, \ldots, X_n)$.
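As a quick empirical illustration (ours, not the paper's), sampling under the uniform model shows every ordering occurring with roughly equal frequency; the snippet uses 0-indexed permutations.

```python
import random
from collections import Counter

def ordering(xs):
    """The permutation pi (0-indexed) with xs[pi[0]] < xs[pi[1]] < ..."""
    return tuple(sorted(range(len(xs)), key=lambda i: xs[i]))

trials = 60000
counts = Counter(ordering([random.random() for _ in range(3)])
                 for _ in range(trials))
for pi, c in sorted(counts.items()):
    print(pi, c / trials)    # each of the 3! orderings is near 1/6
```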
The equiprobable ordering models contain the symmetric models and are, therefore, more general. As the following theorem shows, the random insertion models, which are history dependent by definition, also satisfy the equiprobable ordering requirements.
**Theorem 1**: Random insertion models are included in the equiprobable ordering models.
**Proof.** Let \( X_1, X_2, \ldots \) be random variables emitted by a source which fulfills the random insertion requirement. Let \( P_{\pi,i} \) denote the probability that the ordering of \( (\pi_1, \ldots, \pi_i) \) is equal to that of \( (X_1, \ldots, X_i) \).
We show by induction that \( P_{\pi,i} = 1/i! \).
**Base** \( i = 1 \). The only ordering is the permutation \((1)\), which sorts both \((X_1)\) and \((\pi_1)\). Therefore, \( P_{\pi,1} = 1 = 1/1! \).
**Induction hypothesis** \( P_{\pi,i-1} = 1/(i-1)! \).
**Induction step** Let \( PC_{\pi,i} \) denote the conditional probability that the orderings of \( (X_1, \ldots, X_n) \) and \( (\pi_1, \ldots, \pi_i) \) are equal, given that the ordering of \( (X_1, \ldots, X_{i-1}) \) is equal to that of \( (\pi_1, \ldots, \pi_{i-1}) \). By the random insertion hypothesis \( PC_{\pi,i} = 1/i \). However,
\[
P_{\pi,i} = PC_{\pi,i} \cdot P_{\pi,i-1}.
\]
Using \( PC_{\pi,i} = 1/i \), the induction hypothesis yields
\[
P_{\pi,i} = (1/i)(1/(i-1)!) = 1/i!.
\]
The containment relations between the various models are illustrated in Figure 2.
The information contained in the equiprobable ordering models suffices to analyze algorithms whose behavior depends only on the relative order, not on the specific values. This leads us to the following definition: An algorithm is an ordering algorithm if only comparisons of the data are conducted and the specific values are immaterial. Thus, heap-sort and binary-search are ordering algorithms, while bucket-sort and interpolation-search are not.
The behavior of an ordering algorithm depends only on the relative order of the data and on the probability of occurrence of the various orderings, i.e., the probabilistic model. The average behavior of an ordering algorithm is therefore the same in any two models that assign the same probabilities to the various orderings. We see that an average behavior analysis carried out in one of the above submodels is valid for all the others. For example, when analyzing the average behavior of binary search trees we may use the random insertion model instead of the uniform distribution model.
This observation generalizes that made by Yao [Y] concerning the distribution of random 2-3 trees.
4. REGULAR BINARY SEARCH TREES
We first analyze the characteristics of $T_n$ - the binary search tree obtained by regular insertions. As a matter of notation, if $f$ is a function of a random variable $X$, then $f(X)$ is the corresponding random variable and $\bar{f}(X)$ is its expected value.
**Theorem 2:** The average number of leaves of a regular insertion binary tree is
$$\bar{L}(T_n) = \frac{n+1}{3} \quad (n > 1).$$
**Proof:** By induction
**Basis:** $n = 2$. $T_2$ is one of the two 2-node trees (a root with a single left child or a single right child), so $\bar{L}(T_2) = L(T_2) = 1$.
**Induction Step:** Let $t_n$ be a specific $n$-node binary tree, and $t_{n+1}$ the tree obtained by a random insertion into $t_n$, i.e. by replacing a null pointer. The tree $t_n$ has $n+1$ null pointers, $2L(t_n)$ of which belong to leaves. If the new node is a child of a former leaf, the number of leaves does not change. This happens with probability $2L(t_n)/(n+1)$. Otherwise, the number of leaves increases by one. Therefore,
$$\bar{L}(t_{n+1}) = \frac{2L(t_n)}{n+1} L(t_n) + \left( 1 - \frac{2L(t_n)}{n+1} \right) (L(t_n) + 1)$$
$$= \frac{1}{n+1} \left( 2L(t_n)^2 + (n+1)L(t_n) + (n+1) - 2L(t_n)^2 - 2L(t_n) \right)$$
$$= 1 + \frac{n-1}{n+1} L(t_n).$$
Averaging over all binary trees yields:
$$\bar{L}(T_{n+1}) = 1 + \frac{n-1}{n+1} \bar{L}(T_n).$$
The desired result is obtained by substituting the induction hypothesis:
\[
\bar{L}(T_{n+1}) = 1 + \frac{n-1}{n+1} \frac{n+1}{3} = \frac{n+2}{3}.
\]
Since \(n+1 = d+2\ell\), the average number of degenerate nodes is
\[
\bar{D}(T_n) = n+1 - 2\bar{L}(T_n) = (n+1)/3.
\]
Consequently, if leaves are implemented by pointerless nodes, there remain \((n+1)/3\) null pointers on the average.
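A small Monte Carlo experiment (our illustration, not part of the report) agrees with these averages: for random insertion orders, both the leaf count and the degenerate-node count come out near \((n+1)/3\).

```python
import random

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(v, k):                  # plain insertion, as in Section 1
    if v is None:
        return Node(k)
    if k < v.key:
        v.left = insert(v.left, k)
    else:
        v.right = insert(v.right, k)
    return v

def counts(v):
    """Return (leaves, degenerate nodes) of the subtree rooted at v."""
    if v is None:
        return 0, 0
    l1, d1 = counts(v.left)
    l2, d2 = counts(v.right)
    if v.left is None and v.right is None:
        return l1 + l2 + 1, d1 + d2
    if v.left is None or v.right is None:
        return l1 + l2, d1 + d2 + 1
    return l1 + l2, d1 + d2

n, trials = 300, 200
tot_l = tot_d = 0
for _ in range(trials):
    root = None
    for k in random.sample(range(10 * n), n):   # a random ordering
        root = insert(root, k)
    l, d = counts(root)
    tot_l += l
    tot_d += d
print(tot_l / trials, tot_d / trials, (n + 1) / 3)   # all roughly equal
```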
To analyze the average time, replace every null pointer of a binary tree by a new external node. In Figure 3 external nodes are illustrated as square boxes.
Figure 3
Let \(\text{ext}(t)\), the external path length, be the sum of the distances from the external nodes to the root of a binary tree \(t\).
**Lemma 1**: The average external path length of a regular binary search tree satisfies:
\[
\overline{\text{EXT}}(T_n) = 2 + (1 + 1/n)\overline{\text{EXT}}(T_{n-1}).
\]
**Proof**: Let \(t_{n-1}\) be an \((n-1)\)-node binary search tree and let \(k\) be a new key, the insertion of which replaces an external node at distance \(r\) from the root. The external path length of the new tree \(t_n\) is
\[ \text{ext}(t_n) = 2 + r + \text{ext}(t_{n-1}). \]
The average value of \( r \) is \( \frac{\text{ext}(t_{n-1})}{n} \). Thus, we have:
\[ \overline{\text{ext}}(t_n) = 2 + (1 + 1/n)\,\text{ext}(t_{n-1}). \]
Averaging over all binary search trees yields the desired formula. \( \square \)
Let \( H_n = 1 + \frac{1}{2} + \ldots + \frac{1}{n} \) denote the harmonic series of length \( n \). Thus,
\[ H_n = \ln n + \gamma + O(1/n) \]
where \( \gamma \approx 0.57722 \) is Euler's constant.
**Theorem 3:** [K3, pp.427]. The average number of nodes visited in an unsuccessful search of \( T_n \) is:
\[ \overline{\text{EXT}}(T_n)/(n+1) = 2(H_{n+1} - 1) = c_1 \log_2 (n+1) - c_2 + O(1/n) \]
where \( c_1 \approx 1.386 \) and \( c_2 \approx 0.84556 \).
**Proof:** Lemma 1 implies the following recursive equation
\[ \frac{\overline{\text{EXT}}(T_n)}{n+1} = \frac{2}{n+1} + \frac{\overline{\text{EXT}}(T_{n-1})}{n}. \]
The desired formula is the solution to this equation. \( \square \)
5. **MODIFIED BINARY SEARCH TREES**
The possible values of \( M_n \) and their distribution for \( 1 \leq n \leq 6 \) are depicted in Figure 4. From each symmetry class only one tree is drawn.
**Lemma 2:** In any modified binary search tree, degenerate nodes occur only as parents of leaves.
**Proof:** By induction.
**Basis:** For \( n = 1 \) the lemma is vacuously true.
**Figure 4**
The Induction Step: An n-node modified binary tree $m_n$ is obtained by inserting a new key $k$ into some $(n-1)$-node modified binary search tree $m_{n-1}$. In $m_{n-1}$ the lemma holds by the induction hypothesis. The insertion of $k$ may replace a degenerate node by a complete one, or create a new degenerate node with a single child which is a leaf containing $k$. $\square$
Theorem 4: A modified binary search tree with \( n \) nodes may have at most \( (n+1)/3 \) degenerate nodes.
Proof: Let \( t_n \) be an \( n \)-node modified binary search tree. By Lemma 2, \( d = d(t_n) \leq \ell(t_n) \). Let \( t_{n-d} \) be the binary tree obtained from \( t_n \) by removing the leaves whose parents are degenerate. By the construction
\[
\ell(t_{n-d}) = \ell(t_n).
\]
Counting the number of nodes in \( t_{n-d} \) yields:
\[
n-d = 2\ell(t_{n-d}) - 1 = 2\ell(t_n) - 1.
\]
Substituting for \( d \) yields:
\[
\ell(t_n) \geq (n+1)/3
\]
and \( d(t_n) \leq (n+1) - 2\ell(t_n) \leq (n+1)/3 \). Figure 5 shows that this bound is exact. \( \square \)
Figure 5
Theorem 5: The average number of degenerate nodes in a modified binary search tree is
\[ \bar{D}(M_n) = \frac{n+1}{7}, \quad (n > 5). \]
Proof: By induction.
Basis: For \( n = 6 \), inspecting Figure 4 shows that \( \bar{D}(M_6) = 1 \).
Induction Step: Let \( M_n \) be a modified tree obtained by random insertion. The subtree rooted at a degenerate node \( v \) has one of two shapes: \( v \) with a single left child, or \( v \) with a single right child.
By Lemma 2, the child of \( v \) is a leaf. Thus, in both cases there exist three null pointers. An insertion into such a subtree yields the full 3-node binary tree with no degenerate nodes. Thus, the number of degenerate nodes is decreased by one. This happens with probability \( \frac{3d(m_n)}{n+1} \). In all other cases the number of degenerate nodes increases by one. We get the following recursive formula:
\[ \bar{d}(m_{n+1}) = \frac{3d(m_n)}{n+1}(d(m_n) - 1) + \left(1 - \frac{3d(m_n)}{n+1}\right)(d(m_n) + 1) = 1 + \frac{n-5}{n+1} d(m_n). \]
Averaging over all \( n \)-node modified binary search trees yields:
\[ \bar{D}(M_{n+1}) = 1 + \frac{n-5}{n+1} \bar{D}(M_n). \]
By the induction hypothesis we get:
\[ \bar{D}(M_{n+1}) = 1 + \frac{n-5}{n+1} \frac{n+1}{7} = \frac{n+2}{7}. \]
Corollary: \( \bar{L}(M_n) = 3(n+1)/7 \).
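The same kind of illustrative check (ours) for modified trees gives degenerate-node averages near \((n+1)/7\); the sketch below is self-contained, with the rebalancing step inlined.

```python
import random

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def only_child(v):
    if v.left is None:
        return v.right
    if v.right is None:
        return v.left
    return None

def modified_insert(v, k):
    if v is None:
        return Node(k)
    if k < v.key:
        v.left = modified_insert(v.left, k)
    elif k > v.key:
        v.right = modified_insert(v.right, k)
    u = only_child(v)                 # rebalance a 3-node chain, if any
    w = only_child(u) if u is not None else None
    if w is None or w.left is not None or w.right is not None:
        return v
    a, b, c = sorted([v.key, u.key, w.key])
    top = Node(b)
    top.left, top.right = Node(a), Node(c)
    return top

def degen(v):
    if v is None:
        return 0
    d = 1 if (v.left is None) != (v.right is None) else 0
    return d + degen(v.left) + degen(v.right)

n, trials = 300, 200
total = 0
for _ in range(trials):
    root = None
    for k in random.sample(range(10 * n), n):
        root = modified_insert(root, k)
    total += degen(root)
print(total / trials, (n + 1) / 7)    # the two values come out close
```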
**Lemma 3:** The average external path length of a modified binary search tree satisfies:
\[
\overline{\text{EXT}}(M_{n+1}) = 2 + \left(1 + \frac{1}{n+1}\right) \overline{\text{EXT}}(M_n) - \frac{2\overline{D}(M_n)}{n+1}.
\]
**Proof:** Let \( m_n \) be an \( n \)-node modified binary search tree and \( v \) be a degenerate node whose child is \( u \). A modification at \( v \) occurs only when a node is inserted as \( u \)'s child. Thus, the probability of a modification is \( 2d(m_n)/(n+1) \).
A modification causes the external path length to decrease by one. Thus, as in Lemma 1, the new external path length is
\[
2 + \left(1 + \frac{1}{n+1}\right) \text{ext}(m_n)
\]
before the modification. Bringing the modification into account and averaging over all \( n \)-node modified binary trees yields the desired formula.
\[ \Box \]
**Theorem 6:** The average number of nodes visited in an unsuccessful search is
\[
\overline{\text{EXT}}(M_n)/(n+1) = c_3 \log_2(n+1) - c_4 + O(1/n)
\]
where \( c_3 \approx 1.188 \) and \( c_4 \approx 0.54109 \).
**Proof:** First note that \( \overline{\text{EXT}}(M_6) = 21 \times 2/5 + 20 \times 3/5 = 20.4 \).
Next, substitute the value of \( \bar{D}(M_n) \) from Theorem 5 in Lemma 3 to obtain:
\[
\overline{\text{EXT}}(M_n) = 2 + (1 + 1/n)\overline{\text{EXT}}(M_{n-1}) - (2/n) \cdot (n/7).
\]
Thus,
\[
\frac{\overline{\text{EXT}}(M_{n})}{n+1} = \frac{12}{7} \cdot \frac{1}{n+1} + \frac{\overline{\text{EXT}}(M_{n-1})}{n} \\
= \frac{12}{7} \left( \frac{1}{n+1} + \frac{1}{n} + \ldots + \frac{1}{8} \right) + \frac{\overline{\text{EXT}}(M_{6})}{7} \\
= \frac{12}{7} (H_{n+1} - H_{7}) + \frac{102}{35} \\
= \frac{12}{7} H_{n+1} - \frac{75}{49} \\
= \frac{12}{7} \ln(n+1) + \frac{84\gamma - 75}{49} + O(1/n) \\
= 1.188 \log_{2}(n+1) - 0.54109 + O(1/n). \tag*{\square}
\]
A comparison of modified binary search trees and regular binary search trees is given in Table 1.
<table>
<thead>
<tr>
<th></th>
<th>modified binary search tree</th>
<th>regular binary search tree</th>
</tr>
</thead>
<tbody>
<tr>
<td>maximum number of degenerate nodes</td>
<td>\( (n+1)/3 \)</td>
<td>\( n-1 \)</td>
</tr>
<tr>
<td>minimum number of leaves</td>
<td>\( (n+1)/3 \)</td>
<td>1</td>
</tr>
<tr>
<td>average number of degenerate nodes</td>
<td>\( (n+1)/7 \)</td>
<td>\( (n+1)/3 \)</td>
</tr>
<tr>
<td>average number of leaves</td>
<td>\( 3(n+1)/7 \)</td>
<td>\( (n+1)/3 \)</td>
</tr>
<tr>
<td>average number of complete nodes</td>
<td>\( (3n-4)/7 \)</td>
<td>\( (n-2)/3 \)</td>
</tr>
<tr>
<td>average number of nodes visited in an unsuccessful search</td>
<td>\( 1.188 \log_{2}(n+1) - 0.54109 + O(1/n) \)</td>
<td>\( 1.386 \log_{2}(n+1) - 0.84556 + O(1/n) \)</td>
</tr>
<tr>
<td>average number of nodes visited in a successful search</td>
<td>\( 1.188 \log_{2}(n+1) - 2.54109 + O(1/n) \)</td>
<td>\( 1.386 \log_{2}(n+1) - 2.84556 + O(1/n) \)</td>
</tr>
</tbody>
</table>
\textbf{Table 1}
The last line of Table 1 is derived from the following relation between internal and external path lengths (the internal path length, denoted by \( \text{int} \), is the sum of the distances from the internal nodes to the root):
**Lemma 4:** [K1, p.400]. For every \( n \)-node binary tree \( t_n \)
\[
\text{ext}(t_n) = \text{int}(t_n) + 2n.
\]
6. **CONCLUSIONS**
Modified binary search trees compare favorably with regular binary trees: *search* is faster due to the smaller average internal path length. As to insertions, the situation is unclear: on one hand the average internal path length is smaller, but on the other hand modifications must be carried out (constant time), and to this end we must keep track of the parent of the degenerate node (time proportional to the distance of the node from the root).
An important advantage of modified binary search trees is the space savings. The wasted space is cut by 4/7 on the average.
Additional space may be saved by considering higher order modifications (such as the modification shown in Figure 6). Obviously, such modifications are more time consuming than the one considered in this paper.
Another extension is to conduct a probabilistic analysis for trees obtained from deletions as well as insertions; however, this case is considerably more difficult [K3].
REFERENCES
Program Logics for Certified Compilers
Andrew W. Appel
with Robert Dockins, Aquinas Hobor, Lennart Beringer, Josiah Dodds, Gordon Stewart, Sandrine Blazy, and Xavier Leroy
This is the preliminary manuscript of a book that will be published in late 2013, and will be citable as
Andrew W. Appel et al., *Program Logics for Certified Compilers*.
Copyright © 2013 Andrew W. Appel
## Contents
Road map
Acknowledgments
1 Introduction
### I Generic Separation Logic
2 Hoare logic
3 Separation logic
4 Soundness of Hoare logic
5 Mechanized Semantic Library
6 Separation algebras
7 Operators on separation algebras
8 First-order separation logic
9 A little case study
10 Covariant recursive predicates
11 Share accounting
### II Higher Order Separation Logic
12 Separation Logic as a logic
13 From separation algebras to separation logic
14 Simplification by rewriting
15 Introduction to step-indexing
16 Predicate implication and subtyping
17 General recursive predicates
18 Case Study: Separation logic with first-class functions
19 Data structures in indirection theory
20 Applying higher-order separation logic
21 Lifted Separation Logics
### III Separation Logic for CompCert
22 Verifiable C
23 Expressions, values, and assertions
24 The VST Separation Logic for C light
25 Typechecking for Verifiable C
26 Derived rules and proof automation for C light
27 Proof of a program
28 More C programs
29 Dependently typed C programs
30 Concurrent separation logic
### IV Operational Semantics of CompCert
31 CompCert
32 The CompCert memory model
33 How to specify a compiler
34 C light operational semantics
### V Indirection Theory
35 Higher-order Hoare logic
36 Higher-order separation logic
37 Case study: Lambda-calculus with references
38 Semantic models of predicates-in-the-heap
### VI Semantic model and soundness of Verifiable C
39 Separation algebra for CompCert
40 Share models
41 Juicy memories
42 Modeling the Hoare judgment
43 Modular structure of the development
Readers interested in the theory of separation logic (with some example applications) should read Chapters 1–21. Readers interested in the use of separation logic to verify C programs should read Chapters 1–6 and 8–30. Those interested in the theory of step-indexing and indirection theory should read Chapters 35–39. Those interested in building models of program logics proved sound for certified compilers should read Chapters 40–46, though it would be helpful to read Chapters 1–39 as a warm-up.
Chapter 1
Introduction
An exciting development of the 21st century is that the 20th-century vision of mechanized program verification is finally becoming practical, thanks to 30 years of advances in logic, programming-language theory, proof-assistant software, decision procedures for theorem proving, and even Moore’s law which gives us everyday computers powerful enough to run all this software.
We can write functional programs in ML-like languages and prove them correct in expressive higher-order logics; and we can write imperative programs in C-like languages and prove them correct in appropriately chosen program logics. We can even prove the correctness of the verification toolchain itself: the compiler, the program logic, automatic static analyzers, concurrency primitives (and their interaction with the compiler). There will be few places for bugs (or security vulnerabilities) to hide.
This book explains how to construct powerful and expressive program logics based on Separation Logic and Indirection Theory. It is accompanied by an open-source machine-checked formal model and soundness proof, the Verified Software Toolchain\(^1\) (VST), formalized in the Coq proof assistant. The VST components include the theory of separation logic for reasoning about pointer-manipulating programs; indirection theory for reasoning with “step-indexing” about first-class function pointers, recursive types,
\(^1\)http://vst.cs.princeton.edu
recursive functions, dynamic mutual-exclusion locks, and other higher-order programming; a Hoare logic (separation logic) with full reasoning about control-flow and data-flow of the C programming language; theories of concurrency for reasoning about programming models such as Pthreads; theories of compiler correctness for connecting to the CompCert verified C compiler; theories of symbolic execution for implementing foundationally verified static analyses. VST is built in a modular way, so that major components apply very generally to many kinds of separation logics, Hoare logics, and step-indexing semantics.
One of the major demonstration applications comprises certified program logics and certified static analyses for the C light programming language. C light is compiled into assembly language by the CompCert\textsuperscript{2} certified optimizing compiler [59]. Thus, the VST is useful for verified formal reasoning about programs that will be compiled by a verified compiler. But Parts I, II, and V of this book show principles and Coq developments that are quite independent of CompCert and have already been useful in other applications of separation logics.
**Program Logics for Certified Compilers.** Software is complex and prone to bugs. We would like to reason about the correctness of programs, and even to prove that the behavior of a program adheres to a formal specification. For this we use program logics: rules for reasoning about the behavior of programs. But programs are large and the reasoning rules are complex; what if there is a bug in our proof (in our application of the rules of the program logic)? And how do we know that the program logic itself is sound—that when we conclude something using these rules, the program will really behave as we concluded? And once we have reasoned about a program, we compile it to machine code; what if there is a bug in the compiler?
We achieve soundness by formally verifying our program logics, static analyzers, and compilers. We prove soundness theorems based on foundational specifications of the underlying hardware. We check all proofs by machine, and connect the proofs together end-to-end so there are no gaps.
\textsuperscript{2}http://compcert.inria.fr
Definitions. A program consists of instructions written in a programming language that direct a computer to perform a task. The behavior of a program, i.e. what happens when it executes, is specified by the operational semantics of the programming language. Some programming languages are machine languages that can directly execute on a computer; others are source languages that require translation by a compiler before they can execute.
A program logic is a set of formal rules for static reasoning about the behavior of a program; the word static implies that we do not actually execute the program in such reasoning. Hoare Logic is an early and still very important program logic. Separation Logic is a 21st-century variant of Hoare Logic that better accounts for pointer and array data structures.
A compiler is correct with respect to the specification of the operational semantics of its source and its target languages if, whenever a source program has a particular defined behavior, and the compiler translates that program, then the target program has a corresponding behavior. The correspondence is part of the correctness specification of the compiler, along with the two operational semantics. A compiler is proved correct if there is a formal proof that it meets this specification. Since the compiler is itself a program, this formal proof will typically use the rules of a program logic for the implementation language of the compiler.
Proofs in a logic (or program logic) can be written as derivation trees in which each node is the application of a rule of the system. The validity of a proof can be checked using a computer program. A machine-checked proof is one that has been checked in this way. Proof-checking programs can be quite small and simple, so one can reasonably hope to implement a proof-checker free of bugs.
It is inconvenient to construct derivation trees “by hand.” A proof assistant is a tool that combines a proof checker with a user interface that assists the human in building proofs. The proof assistant may also contain algorithms for proof automation, such as tactics and decision procedures.
A certified compiler is one proved correct with a machine-checked proof. A certified program logic is one proved sound with a machine-checked proof. A certified program is one proved correct (using a program logic) with a machine-checked proof.
A static analysis algorithm calculates properties of the behavior of a program without actually running it. A static analysis is sound if, whenever it claims some property of a program, that property holds on all possible behaviors (in the operational semantics). The proof of soundness can be done using a (sound) program logic, or it can be done directly with respect to the operational semantics of the programming language. A certified static analysis is one that is proved sound with a machine-checked proof—either the static analysis program is proved correct, or each run of the static analysis generates a machine-checkable proof about a particular instance.
In Part I we will review Hoare logics, operational semantics, and separation logics. For a more comprehensive introduction to Hoare logic, the reader can consult Huth and Ryan [52] or many other books; for operational semantics, see Harper [45, Parts I & II] or Pierce [73]. For an introduction to theorem-proving in Coq, see Pierce’s Software Foundations[74] which also covers applications to operational semantics and Hoare logic.
The VST Separation Logic for C light is a higher-order impredicative concurrent separation logic certified with respect to CompCert. Separation Logic means that its assertions specify heap-domain footprints: the assertion \( (p \mapsto x) \ast (q \mapsto y) \) describes a memory with exactly two disjoint parts; one part has only the cell at address \( p \) with contents \( x \), and the other has only address \( q \) with contents \( y \), with \( p \neq q \). Concurrent Separation Logic is an extension that can describe shared-memory concurrent programs with Dijkstra-Hoare synchronization (e.g., Pthreads). Higher-order means that assertions can use existential and universal quantifiers, the logic can describe pointers to functions and mutex locks, and recursive assertions can describe recursive data types such as lists and trees. Impredicative means that the \( \exists \) and \( \forall \) quantifiers can even range over assertions containing quantifiers. Certified means that there is a machine-checked proof of soundness with respect to the operational semantics of a source language of the CompCert C compiler.
A separation logic has assertions \( p \mapsto x \) where \( p \) ranges over a particular address type \( A \), \( x \) ranges over a specific type \( V \) of values, and the assertion as a whole can be thought of as a predicate over some specific type of
“heaps” or “computer memories” \( M \). Then the logic will have theorems such as \((p \mapsto x) * (q \mapsto y) \vdash (q \mapsto y) * (p \mapsto x)\).
We will write down generic separation logic as a theory parameterizable by types such as \( A, V, M \), and containing generic axioms such as \( P * Q \vdash Q * P \). For a particular instantiation such as CompCert C light, we will instantiate the generic logic with the types of C values and C expressions.
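For intuition only (this is our toy model, not a VST definition), the meaning of \( p \mapsto x \) and of \( * \) as predicates over finite heaps can be prototyped in a few lines of Python, and the commutativity \( P * Q \vdash Q * P \) is then immediate from the symmetry of the splitting.

```python
def maps_to(p, x):
    """p |-> x : holds of exactly the one-cell heap {p: x}."""
    return lambda heap: heap == {p: x}

def sep(P, Q):
    """P * Q : the heap splits into two disjoint parts, one satisfying
    P and the other satisfying Q."""
    def pred(heap):
        items = list(heap.items())
        for mask in range(2 ** len(items)):   # all ways to split the heap
            h1 = dict(kv for i, kv in enumerate(items) if mask >> i & 1)
            h2 = dict(kv for i, kv in enumerate(items) if not mask >> i & 1)
            if P(h1) and Q(h2):
                return True
        return False
    return pred

heap = {1: 'x', 2: 'y'}
assert sep(maps_to(1, 'x'), maps_to(2, 'y'))(heap)   # (p |-> x) * (q |-> y)
assert sep(maps_to(2, 'y'), maps_to(1, 'x'))(heap)   # the commuted form
assert not sep(maps_to(1, 'x'), maps_to(1, 'y'))({1: 'x'})  # no overlap
```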
Chapter 3 will give an example of an informal program verification in “pencil-and-paper” Separation Logic. Then Part V shows the VST tools applied to build a foundationally sound toolchain for a toy language, with a machine-verified separation-logic proof of a similar program. Part III demonstrates the VST tools applied to the C language, connected to the CompCert compiler, and shows machine-checked verification C programs.
**Figure 1.1** shows the client view of the VST Separation Logic for C light—that is, the specification of the axiomatic semantics. Users of the program logic will reason directly about CompCert values (integers, floats, pointers) and C-light expression evaluation. Users do not see the operational semantics of C-light commands, or CompCert memories. Instead, they use
the axiomatic semantics—the Hoare judgment and its reasoning rules—to reason indirectly about memories via assertions such as $p \mapsto x$.
The modular structure of the **client view** starts (at bottom left of Fig. 1.1) with the specification of the **C light** language, a subset of C chosen for its compatibility with program-verification methods. We have C values (such as integers, floats, and pointers); the abstract syntax of C light, and the mechanism of evaluating C light expressions. The client view treats statements such as assignment and looping *abstractly* via an axiomatic semantics (Hoare logic), so it does not expose an operational semantics.
At bottom right of Figure 1.1 we have the operators and axioms of Separation Logic and of Indirection Theory. At center are the assertions of our program logic for C light, which (as the diagram shows) make use of C-light expressions and of our logical operators. At top, the Hoare axioms for C light complete the specification of the program logic.
Readers primarily interested in using the VST tools may want to read Parts I through III, which explain the components of the client view.
The soundness proof of the VST Separation Logic is constructed by reasoning in the *model* of separation logic. Figure 1.2 shows the structure of the soundness proof. At bottom left is the specification of C-light operational semantics. We have a generic theory of safety and simulation for shared-memory programs, and we instantiate that into the “C light safety” theory.
At bottom right (Fig. 1.2) is the theory of separation algebras, which form models of separation logics. The assertions of our logic are predicates on the resource maps that, in turn, model CompCert memories. The word predicate is a technical feature of our Indirection Theory that implicitly accounts for “resource approximation,” thus allowing higher-order reasoning about circular structures of pointers and resource invariants.
We construct a semantic model of the Hoare judgment, and use this to prove sound all the judgment rules of the Separation Logic. All this is encapsulated in a Coq module called SeparationLogicSoundness.
Parts IV through VI explain the components of Figure 1.2, the semantic model and soundness proof of higher-order impredicative separation logic for CompCert C light.
The Coq development of the Verified Software Toolchain is available at vst.cs.princeton.edu and is structured in a root directory with several subdirectories:
compCert: A few files copied from the CompCert verified C compiler, that comprise the specification of the C light programming language.
sepcomp: Theory of how to specify shared-memory interactions of CompCert-compiled programs.
msl: Mechanized Software Library, the theory of separation algebras, share accounting, and generic separation logics.
veric: The program logic: a higher-order splittable-shares concurrent separation logic for C light.
floyd: A proof-automation system of lemmas and tactics for semiautomated application of the program logic to C programs.
progs: Applications of the program logic to sample programs.
veristar: A heap theorem prover using resolution and paramodulation.
A proof development, like any software, is a living thing: it is continually being evolved, edited, maintained, and extended. We will not tightly couple this book to the development; we will just explain the key mathematical and organizational principles, illustrated with snapshots from the Coq code.
Part I
Generic Separation Logic
Synopsis: Separation logic is a formal system for static reasoning about pointer-manipulating programs. Like Hoare logic, it uses assertions that serve as preconditions and postconditions of commands and functions. Unlike Hoare logic, its assertions model anti-aliasing via the disjointness of memory heaplets. Separation algebras serve as models of separation logic. We can define a calculus of different kinds of separation algebras, and operators on separation algebras. Permission shares allow reasoning about shared ownership of memory and other resources. In a first-order separation logic we can have predicates to describe the contents of memory, anti-aliasing of pointers, and simple (covariant) forms of recursive predicates. A simple case study of straight-line programs serves to illustrate the application of separation logic.
Part II
Higher Order Separation Logic
Synopsis: Instead of reasoning directly on the model (that is, separation algebras), we can treat Separation Logic as a syntactic formal system, that is, a logic. We can implement proof automation to assist in deriving separation-logic proofs.
Reasoning about recursive functions, recursive types, and recursive predicates can lead to paradox if not done carefully. Step-indexing avoids paradoxes by inducting over the number of remaining program-steps that we care about. Indirection theory is a kind of step-indexing that can serve as models of higher-order Hoare logics. Using indirection theory we can define general (not just covariant) recursive predicates.
Recursive data structures such as lists and trees are easily modeled in indirection theory, but the model is not the same one conventionally used, as it inducts over “age”—the approximation level, the amount of information left in the model—rather than list-length or tree-depth. A tiny pointer/continuation language serves as a case study for Separation Logic with first-class function-pointers, modeled in indirection theory. The proof of a little program in the case-study language illustrates the application of separation logic with function pointers.
SYNOPSIS: Verifiable C is a style of C programming suited to separation-logic verifications; it is similar to the C light intermediate language of the CompCert compiler. We show the assertion language of separation-logic predicates for specifying states of a C execution. The judgment form $\text{semax}$ of the axiomatic semantics relates a C command to its precondition and postcondition, and for each kind of command there is an inference rule for proving its $\text{semax}$ judgments. We illustrate with the proof of a C program that manipulates linked lists, and we give examples of other programs and how they can be specified in the Verifiable C program logic. Shared-memory concurrent programs with Dijkstra-Hoare synchronization can be verified using the rules of concurrent separation logic.
SYNOPSIS: Specification of the interface between CompCert and its clients such as the VST Separation Logic for C light, or clients such as proved-sound static analyses and abstract interpretations. This specification takes the form of an operational semantics with a nontrivial memory model. The need to preserve the compiler’s freedom to optimize the placement of data (in memory, out of memory) requires the ability to rename addresses and adjust block sizes. Thus the specification of shared-memory interaction between subprograms (separately compiled functions, or concurrent threads) requires particular care, to keep these renamings consistent.
Part V
Indirection Theory
Synopsis: Indirection theory gives a clean interface to higher-order step indexing. Many different semantic features of programming languages can be modeled in indirection theory. The models of indirection theory use dependent types to stratify quasirecursive predicates, thus avoiding paradoxes of self-reference. Lambda calculus with mutable references serves as a case study to illustrate the use of indirection theory models.
When defining both Indirection and Separation one must take extra care to ensure that aging commutes over separation. We demonstrate how to build an axiomatic semantics using higher-order separation logic for the pointer/continuation language introduced in the case study of Part II.
Part VI
Semantic model and soundness of Verifiable C
Synopsis: To prove soundness of the Verifiable C separation logic, we first give a model of \texttt{mpred} as \texttt{pred(rmap)}, that is, predicates on resource maps. We give a model for permission-shares using trees of booleans. We augment the C light operational semantics with juicy memories that keep track of resources as well as “dry” values. We give a semantic model of the Hoare judgment, using the continuation-passing notion of “guards.” We use this semantic model to prove all the Hoare rules. Our model and proofs have a modular structure, so that they can be ported to other programming languages (especially in the CompCert family).
Part VII
Applications
SYNOPSIS: In Part III we showed how to apply a program logic interactively to a program, using tactics. Here we will show a different use of program logics: we build automatic static analyses and decision procedures as efficient functional programs, and prove their soundness using the rules of the program logic.
Coordinating Vertical Elasticity of both Containers and Virtual Machines
Yahya Al-Dhuraibi1,2, Faiez Zalila1, Nabil Djarallah2 and Philippe Merle1
1Inria / University of Lille, France 2Scalair Company, France
Keywords: Cloud Computing, Container, Docker, Vertical Elasticity.
Abstract: Elasticity is a key feature in cloud computing as it enables the automatic and timely provisioning and deprovisioning of computing resources. To achieve elasticity, clouds rely on virtualization techniques including Virtual Machines (VMs) and containers. While many studies address the vertical elasticity of VMs and a few others handle the vertical elasticity of containers, no work manages the coordination between these two vertical elasticities. In this paper, we present the first approach to coordinate vertical elasticity of both VMs and containers. We propose an auto-scaling technique that allows containerized applications to adjust their resources at both the container and VM levels. This work has been evaluated and validated using the RUBiS benchmark application. The results show that our approach reacts quickly and improves application performance. Our coordinated elastic controller outperforms a container vertical elasticity controller by 18.34% and a VM vertical elasticity controller by 70%. It also outperforms container horizontal elasticity by 39.6%.
1 INTRODUCTION
Cloud computing is an attractive paradigm for many application domains in industry and academia, and an enormous number of applications are deployed on cloud infrastructures. The workload of cloud applications usually varies drastically over time. Maintaining sufficient resources to meet peak requirements can therefore be costly and increases the application provider's operational cost. Conversely, if providers cut costs by maintaining only minimal computing resources, there will not be sufficient resources to meet peak requirements, causing poor performance and violating Quality of Service (QoS) and Service Level Agreement (SLA) constraints. Cloud elasticity plays an important role in handling this obstacle. It is a unique feature of cloud environments that allows provisioning, deprovisioning, or reconfiguring cloud resources, i.e., Virtual Machines (VMs) and containers (Al-Dhuraibi et al., 2017b). Cloud elasticity can be accomplished by horizontal or vertical scaling. Horizontal elasticity consists in adding or removing instances of computing resources associated with an application (Coutinho et al., 2015). Vertical elasticity consists in increasing or decreasing the characteristics of computing resources, such as CPU, memory, etc. (Lakew et al., 2014).
VMs and containers are the main computing resource units in cloud computing. VMs are the traditional core virtualization construct of clouds. Containers are a newer, yet complementary, virtualization technology. In this paper, we use Docker containers. Docker\(^1\), a recent container technology, is a system-level virtualization solution that allows packaging an application with all of its dependencies into standardized units for software deployment. While VMs are ultimately the medium for provisioning PaaS and application components at the infrastructure layer, containers appear as a more suitable technology for application packaging and management in PaaS clouds (Pahl, 2015). Containers can run on VMs or on a bare OS. Running containers or different containerized applications in a VM or a cluster of VMs is an emerging architecture used by cloud providers such as AWS EC2 Container Service (ECS), Google Cloud Platform, MS Containers, Rackspace, etc. The VMs are run by hypervisors on the host, and our work manages resources for such an architecture. Therefore, this paper addresses the combination of the vertical elasticity of containers and the vertical elasticity of VMs.
Many works (Baruchi and Midorikawa, 2011), (Dawoud et al., 2012), (Farokhi et al., 2015) handle the vertical elasticity of VMs, while others (Monsalve et al., 2015), (Paraiso et al., 2016) manage the resources of containers. However, no
\(^1\)http://docker.io
attention was given to the coordination of both elasticities. In addition, these works do not take into consideration that resources added to the VM are not detected by the Docker daemon. We propose the first system powering the coordination between VM and container vertical elasticity: a controller that coordinates container vertical elasticity with the vertical elasticity of the hosting VM. This approach autonomously adjusts container resources according to workload demand and subsequently controls VM resources if the hosted containers require more resources. The Docker daemon does not automatically detect resources hot-added "on the fly" at the VM level unless it is reinstalled or its dedicated cgroups are modified. Our system enables Docker daemons to detect resources added to the hosting VM, so containers can make use of these resources; beyond the scientific contributions listed below, this is an additional technical contribution. Our approach is evaluated by conducting experiments on the benchmark application RUBiS (Cecchet et al., 2002). RUBiS is an implementation of an auction site similar to eBay that is widely used in clouds to evaluate the performance and scalability of J2EE application servers. The results show that our approach improves application performance: it outperforms a container vertical elasticity controller by 18.34% and a VM vertical elasticity controller by 70%, and it also outperforms container horizontal elasticity by 39.6%. The main contributions of this paper are:
1. An autonomous vertical elasticity system for both Docker containers and the hosting VMs. It allows adding/removing resources (i.e., CPU cores, memory) according to the workload demand.
2. We show that our combination of vertical elasticity of both VMs and containers is better than the vertical elasticity of the VM only or the vertical elasticity of the containers only, i.e., $V_{vm} \wedge V_{cont} > V_{vm} \oplus V_{cont}$, where ($\wedge$) and ($\oplus$) are the symbols for the logical AND and XOR operators, respectively. $V_{vm}$ denotes the vertical elasticity of VMs while $V_{cont}$ denotes the vertical elasticity of containers.
3. We show that our combination of vertical elasticity of both VMs and containers is better than the horizontal elasticity of containers, i.e., $V_{vm} \wedge V_{cont} > H_{cont}$, where $H_{cont}$ denotes the horizontal elasticity of containers.
The rest of this paper is organized as follows. Section 2 discusses the motivation for this work. Section 3 describes the design and operation of our coordinated vertical elasticity controller system. Section 4 presents the evaluation of our solution. Section 5 discusses related work, and Section 6 presents conclusions and future work.
2 MOTIVATION
There is a large amount of research on cloud elasticity; however, most of it is based on VMs. Some works address the elasticity of containers and are discussed in Section 5. As the application workload demand varies, the container(s) on the host continuously scale resources up and down, thanks to our Docker controller which manages container resource allocation. The problem is that once the containers have allocated all resources of the host machine, the performance of the containerized application degrades. To handle this obstacle and add more resources, one of the following mechanisms must be used: horizontal elasticity, migration, or reconfiguration of the host machine (i.e., vertical elasticity). We experimented with these mechanisms and show that vertical elasticity, when possible, is better in terms of performance and configuration.
Horizontal Elasticity: Since horizontal elasticity consists in replicating the application on different machines, some applications such as vSphere and DataCore require additional licenses for each replica, and these licenses can be very expensive. Besides, horizontal elasticity requires additional components such as load balancers, along with their reconfiguration, and initializing a new instance also takes time during the boot process before it becomes functional. These requirements are not needed for vertical elasticity. (Dawoud et al., 2012) proved mathematically and experimentally, using queuing theory (Sztrik, 2012), that vertical elasticity is better than horizontal elasticity. Although horizontal elasticity has many advantages, including redundancy, the ability to scale almost without limit, and load balancing over multiple physical machines, it requires additional components and reconfiguration. (Appuswamy et al., 2013) shows that vertical scaling outperforms horizontal scaling in terms of performance, power, cost, and server density in the world of analytics, mainly for Hadoop MapReduce. In addition, horizontal elasticity is not a good choice for stateful applications that require sticky sessions. Finally, coordinated vertical scaling is preferable when the physical servers have enough capacity; horizontal scalability may still be needed, since vertical scalability is ultimately limited by the capacity of the underlying resources.
Migration: The other way to obtain more resources is to migrate the container to another machine with more resources. To experiment with this mechanism, we implemented a live migration technique for Docker containers. CRIU (Checkpoint/Restore, 2017) is used to achieve the procedure and migrate containers live (Al-Dhuraibi et al., 2017a). CRIU is a Linux facility that allows checkpointing and restoring processes, e.g., Docker containers: it can save the state of a running process so that execution can later resume from the time of the checkpoint. We take several pre-dumps of the container while it is running, then a final dump of the memory page changes after the last pre-dump (at this point the Docker container freezes). Although an efficient mechanism is used to transfer Docker containers, there is still downtime due to the migration while the container process is frozen. Table 1 shows the migration downtime for two small applications (nginx, httpd); network traffic overhead is not considered. Container migration is also risky for stateful applications, such as RTMP media applications, which may lose sessions.
We therefore tend to adjust the cloud infrastructure vertically, using coordinated vertical elasticity between the containers and their host machine. With this mechanism, elasticity actions are coordinated and performed quickly.
3 ELASTIC CONTROLLER
3.1 General Design
Our system adheres to the control loop principles of IBM's MAPE-K reference model (IBM, 2006). The control part of MAPE-K consists of four phases: Monitor, Analyze, Plan, and Execute. The managed components in this context are the infrastructure units, i.e., KVM VMs and Docker containers, as well as the containerized applications. We design elastic controllers that automatically adjust resources to the varying workload without violating QoS, by growing or shrinking the amount of resources quickly on demand for both containers and their VMs. Figure 1 shows the general architecture of our controllers. The design includes an elastic controller for Docker containers and another for the hosting machine. The aim of the second controller is to allocate or de-allocate resources when the containers residing on a virtual host machine require more or fewer resources than that VM offers.
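The MAPE-K loop can be pictured as a simple control skeleton. The following is a minimal sketch, not the authors' code; the class and method names are hypothetical:

```python
import time

class MapeKController:
    """Skeleton of one MAPE-K control loop over a managed element
    (a container or a VM). All concrete logic is left abstract."""

    def __init__(self, period_s):
        self.period_s = period_s

    def monitor(self):
        """Collect current resource utilization (e.g., CPU, memory)."""
        raise NotImplementedError

    def analyze(self, metrics):
        """Compare averaged metrics against upper/lower thresholds."""
        raise NotImplementedError

    def plan(self, breach):
        """Decide a scaling action (e.g., +256 MB memory, +1 vCPU)."""
        raise NotImplementedError

    def execute(self, action):
        """Apply the action to the managed element."""
        raise NotImplementedError

    def run(self):
        while True:
            metrics = self.monitor()
            breach = self.analyze(metrics)
            if breach is not None:
                self.execute(self.plan(breach))
            time.sleep(self.period_s)
```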
3.2 Components of the System
3.2.1 Monitoring Component
The monitoring component of the Docker controller periodically collects the current resource utilization and resource acquisition of every container on the host. The collected data can be (i) resource utilization metrics such as current CPU or memory usage, or (ii) the acquired resources such as memory size or CPU cores. This information is collected from the Docker daemon via its RESTful API and directly from the container cgroups filesystem. Our container controller samples these cgroups every 4 seconds over an interval of 16 seconds, and then reports the average values. This monitoring data is used by the reactive model in the elastic Docker controller to make elastic decisions. Similarly, the host machine's resource utilization and amount of acquired resources are monitored periodically, as shown in Table 2. The VM elasticity controller uses this data to provision or de-provision resources on the host machine. The values shown in Table 2 (thresholds, increase/decrease steps) are chosen following (Al-Dhuraibi et al., 2017a), (Dawoud et al., 2012), (Baresi et al., 2016), which are based on real-world best practices. We noticed that the CPU and memory utilization values sometimes fluctuate rapidly, which can be due to the nature of the workload. Therefore, to avoid acting on these oscillations, we measure CPU and memory utilization periodically over an interval of 16 seconds (as shown in Table 2) and take the average values as the current utilization.
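As an illustration, the sampling-and-averaging step could look like the following minimal sketch. It assumes a cgroup v1 layout under /sys/fs/cgroup with a hard memory limit set, and is not the authors' implementation; paths and names are illustrative:

```python
import time

def avg_memory_utilization(container_id,
                           period_s=4, interval_s=16,
                           cgroup_root="/sys/fs/cgroup/memory/docker"):
    """Sample a container's memory utilization every `period_s` seconds
    over `interval_s` seconds and return the average (0.0-1.0).
    Assumes a cgroup v1 hierarchy and a configured hard limit."""
    base = f"{cgroup_root}/{container_id}"
    samples = []
    for _ in range(interval_s // period_s):
        with open(f"{base}/memory.usage_in_bytes") as f:
            usage = int(f.read())
        with open(f"{base}/memory.limit_in_bytes") as f:
            limit = int(f.read())
        samples.append(usage / limit)
        time.sleep(period_s)
    return sum(samples) / len(samples)
```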
3.2.2 Docker Controller
Docker relies on cgroups to group the processes running in a container. Cgroups allow managing a container's resources such as CPU, memory, and network; they not only track and manage groups of processes but also expose metrics about CPU, memory, etc. Cgroups are exposed through pseudo-filesystems, and through these files Docker resources can be configured with hard or soft limits. When a soft limit is configured, the container can use all resources on the host machine; other parameters can also be controlled here, such as CPU shares, which determine the relative proportional weight with which the container can access the CPU. Hard limits give the container a specified amount of resources, and the Docker vertical elasticity controller can change these hard limits dynamically according to the workload. CPU access can be scheduled either with the Completely Fair Scheduler (CFS) or with the Real-Time Scheduler (RTS) (Red Hat, Inc.). In CFS, CPU time is divided proportionately between Docker containers, whereas RTS provides a way to specify hard limits on Docker containers, also referred to as ceiling enforcement. Our Docker controller is integrated with RTS in order to make it elastic. When limits are set on Docker containers, the elasticity controller scales resources up or down according to demand. When no limits are set, it is hard to predict how much CPU time a Docker container will be allowed to use; moreover, as noted above, Docker can use all resources on the host machine by default, there is no control over how many resources will be used, and the customer may not be able to afford the cost of such resources. The elastic controller of Docker containers adjusts memory and CPU vcores according to the application workload. It directly modifies the cgroup filesystems of the container to execute the elastic action (scaling up or down). The average CPU or memory usage values are calculated over a fixed interval of time and compared against the lower/upper thresholds (70%/90%) shown in Table 2.
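For example, rewriting a container's memory hard limit comes down to rewriting its cgroup pseudo-file. A minimal sketch under the same cgroup v1 assumption as above (illustrative, not the paper's code):

```python
def set_memory_limit(container_id, new_limit_bytes,
                     cgroup_root="/sys/fs/cgroup/memory/docker"):
    """Apply a new hard memory limit to a running container by writing
    its cgroup v1 pseudo-file; requires root privileges."""
    path = f"{cgroup_root}/{container_id}/memory.limit_in_bytes"
    with open(path, "w") as f:
        f.write(str(new_limit_bytes))

# e.g., grow the limit by the 256 MB adaptation step from Table 2:
# set_memory_limit(cid, current_limit + 256 * 1024 * 1024)
```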
Figure 1: Coordinated elastic controllers between VMs and containers.
Table 2: System control parameters.
| Parameters | Docker containers | VMs |
|---|---|---|
| Upper threshold | 90% | 90% |
| Lower threshold | 70% | 70% |
| Period | 4 sec | 1 min |
| Interval | 16 sec | 1 min |
| Breath-up/down | 10/20 sec | 20/40 sec |
| CPU adaptation | 1 vCPU | 1 vCPU |
| Memory adaptation | -128/+256 MB | -256/+512 MB |
When the thresholds are hit and the logical conditions are met, the controller increases or decreases the resources by the values shown in Table 2. For example, if the average memory utilization over the last 16 seconds is greater than the upper threshold (90%), the controller increases the memory size by 256 MB and waits 10 seconds (the breath-up duration) before performing another scaling action.
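Put together, this reactive rule with breath durations might look like the following sketch, reusing the hypothetical helpers `avg_memory_utilization` and `set_memory_limit` sketched above; thresholds and steps are taken from Table 2:

```python
import time

UPPER, LOWER = 0.90, 0.70          # thresholds from Table 2
STEP_UP, STEP_DOWN = 256, 128      # MB, container memory adaptation
BREATH_UP, BREATH_DOWN = 10, 20    # seconds

def control_step(cid, current_limit_mb):
    """One reactive decision for a container's memory; returns the new limit."""
    util = avg_memory_utilization(cid)   # averaged over 16 s
    if util > UPPER:
        current_limit_mb += STEP_UP
        set_memory_limit(cid, current_limit_mb * 1024 * 1024)
        time.sleep(BREATH_UP)            # short: scale up quickly
    elif util < LOWER:
        current_limit_mb -= STEP_DOWN
        set_memory_limit(cid, current_limit_mb * 1024 * 1024)
        time.sleep(BREATH_DOWN)          # longer: avoid hasty scale-down
    return current_limit_mb
```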
In Table 2, note that the memory adaptation values (increase and decrease steps) differ. The controller decreases the memory size by a smaller amount when scaling down because applications are sensitive to memory, and an aggressive reduction could interrupt the application's functionality. In addition, after each scaling decision, the controller waits a specific period of time (the breath duration), which gives the system a chance to reach a stable state. As shown in Table 2, we set two breath durations, breath-up and breath-down. Breath-up is smaller than breath-down to allow the system to scale up rapidly and cope with bursty workloads, while the larger breath-down avoids premature scale-down actions. Our elastic Docker controller manages all the containers residing on the virtual machine, taking into consideration the available resources on that machine and the resources already allocated to the containers.
3.2.3 VM Controller
If the containers allocate all the resources of their hosting VM, they can reach an overload point of 100%. At that point, the overload can cause errors in the workload execution, since there are no free resources left to provision. Therefore, our VM controller should intervene before such a situation occurs. Like the Docker containers, the hosting VM is monitored constantly, and its capacity is increased or decreased according to the VM reconfiguration policy embedded in our VM controller. The VM controller performs vertical elasticity actions based on rules and on real-time data captured by the monitoring system. As shown in Table 2, the monitoring component samples the VM resource usage over an interval of one minute, using the psutil library to obtain the resource metrics. The controller analyzes the collected data with its reactive model and triggers scaling decisions to increase or decrease VM resources; at the same time, it allows the Docker engine to detect the new resources by updating the cgroups of the Docker daemon. The steps for increasing/decreasing memory and vCPUs are +512 MB/-256 MB and +1/-1, respectively.
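On the VM side, hot-plugging vCPUs through libvirt can be sketched as follows, assuming the Python libvirt bindings and a monitoring agent (e.g., psutil-based) that reports average utilization from inside the VM; the domain name and thresholds are illustrative, not the paper's implementation:

```python
import libvirt

def vm_scale_cpu(dom, cpu_util, upper=90.0, lower=70.0):
    """One reactive decision for a VM's vCPU count. `cpu_util` is the
    average CPU utilization (percent) reported by a monitoring agent
    running inside the VM."""
    ncpus = dom.vcpusFlags(libvirt.VIR_DOMAIN_AFFECT_LIVE)
    if cpu_util > upper:
        dom.setVcpusFlags(ncpus + 1, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    elif cpu_util < lower and ncpus > 1:
        dom.setVcpusFlags(ncpus - 1, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# e.g., from the controller host ("vm1" is a placeholder domain name):
conn = libvirt.open("qemu:///system")
vm_scale_cpu(conn.lookupByName("vm1"), cpu_util=95.0)
```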
3.3 Interactions Between Components
As shown in Figure 1, the VM controller can trigger elastic actions in two cases: (i) when the VM resource utilization reaches certain thresholds, and (ii) when it receives a demand from the Docker controller to increase or decrease resources. The VM controller can increase resources without receiving a demand from the Docker controller because other processes may run on the VM alongside the containers. When the VM controller adds more vCPUs to the VM, the Docker engine does not detect these resources, whether hard or soft limits are used. Therefore, upon each scaling decision, the VM controller compares the resources seen by the VM and by the Docker engine, identifies the ids of the newly added vCPUs, and updates the cgroups of the Docker engine; the Docker engine can then allocate these resources to containers. The coordination between the controllers is our major concern, and the following scenario illustrates it. Suppose a VM has 3 vCPUs and three containers are deployed with hard limits, each container having 1 vCPU. If the first container's usage is 100% and the other two containers are idle (1 vCPU at 100%, 2 vCPUs idle), the VM controller would try to remove vCPUs, but doing so would destroy the container whose vCPU is withdrawn. The coordination therefore prevents the VM controller from scaling down, and the Docker controller demands that the VM controller allocate more resources in order to give the first container more capacity.
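The cgroup update that exposes hot-added vCPUs to the Docker engine can be pictured as follows; the cgroup v1 cpuset path is an assumption about the layout, and the sketch is illustrative only:

```python
import os

def expose_new_vcpus(docker_cpuset="/sys/fs/cgroup/cpuset/docker/cpuset.cpus"):
    """Widen the Docker engine's cpuset to cover all vCPUs currently
    visible to the VM, so containers can be scheduled on hot-added cores."""
    visible = os.cpu_count()                 # vCPUs the VM now sees
    with open(docker_cpuset) as f:
        current = f.read().strip()           # e.g., "0-1"
    new_range = f"0-{visible - 1}"
    if current != new_range:
        with open(docker_cpuset, "w") as f:
            f.write(new_range)
```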
4 VALIDATION
4.1 Experimental Setup
We evaluated our work using RUBiS (OW2, 2008), a well-known Internet application modeled after the Internet auction website eBay. Our deployment of RUBiS uses two tiers: an application tier, a scalable pool of JBoss application servers running Java Servlets, and a MySQL database that stores users and their transactions. We performed all our experiments on the infrastructure of Scalair\(^2\), a private cloud provider. We developed the experiments using the following technologies: (a) KVM version 1.5.3-105.el7_2.7 (x86_64), libvirt version 1.2.17, and virt-manager 1.2.1; the number of VMs used and their characteristics are described in the specific experiment subsections, because we used different configurations depending on the objective of each experiment; (b) VMware vCenter version 6.0; (c) Docker engine version 17.04.0-ce; (d) Kubernetes v1.5.2 (Brewer, 2015), with a Kubernetes cluster of 3 machines; (e) ab (the Apache HTTP server benchmarking tool) version 2.3 to generate workloads. The hardware consists of 4 powerful servers: 2 HP ProLiant DL380 G7 and 2 HP ProLiant XL170r Gen9. The experiments answer the following research questions (RQ):
- **RQ#1**: how can containers automatically use the hot added resources to their hosting VM?
- **RQ#2**: what is the efficiency of performing scaling decisions made by our coordinated controller?
- **RQ#3**: is our coordinated vertical elasticity of both VMs and containers better than the vertical elasticity of the VM only or of the containers only (i.e., \( V_{vm} \wedge V_{cont} > V_{vm} \oplus V_{cont} \))?
- **RQ#4**: is our coordinated vertical elasticity of both VMs and containers better than the horizontal elasticity of containers (i.e., \( V_{vm} \wedge V_{cont} > H_{cont} \))?
\(^2\)https://www.scalair.fr
4.2 Evaluation Results
We describe each experiment and analyze the results in response to the RQs.
RQ#1. In this experiment, we configure two VMs, each with Ubuntu Server 16.04.2 LTS. Initially, VM1 has 2 vCPUs and 2 GB of RAM. We deploy the RUBiS application inside two containers on VM1. The ab benchmark is installed on VM2, from which we generate a workload against the RUBiS application (600K requests, concurrency level 200). The workload requests query the RUBiS database to retrieve lists of products, categories, items, etc. of the auction website; workloads differ in intensity and concurrency levels. In the first case, we keep the default policy for Docker containers, which allows them to use all available resources, and enable only the VM controller; the response time recorded when the workload finished was 588.846 seconds. In the second case, we run the same workload but enable our coordinating controller and set limits on the Docker containers, which the container controller reconfigures to accommodate the load; the response time when the workload finished was 487.4 seconds. Based on these results, we conclude the following:
- The response time is high in the first case because the Docker engine does not detect the resources added at the VM level. The VM controller added one vCPU to the VM (bringing VM1 to 3 vCPUs in total), but the two containers used only two CPUs; the third vCPU remained idle because containers do not automatically detect the added resources.
- In the second case, the response time is smaller thanks to our coordinated controller, which allows containers to demand more resources and subsequently updates the Docker engine with the added resources.
- The combined controller improves performance by 20.8% in this experiment. Moreover, if the workload increases, the coordinated controller will accommodate more resources, in contrast to the first case where the containers cannot use more than the initially allocated 2 vCPUs.
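As a sanity check, the 20.8% improvement follows directly from the two measured response times:

$$\frac{588.846 - 487.4}{487.4} \approx 0.208 = 20.8\%$$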
RQ#2. In this evaluation, we measure the execution time of elastic actions. An elastic action is the process of adding or removing resources (CPU or memory) to a container, a KVM VM, or a VMware VM. We repeat the experiment eleven times for each resource (CPU or memory) on each target (container, KVM VM, VMware VM), and each run consists of 15 scaling up or down actions. During the experiments, the resources experience different stress workloads. We execute the elastic actions, measure the time they take to resize the resource, and then calculate the median and variance. We take these measures to illustrate how efficiently our approach executes auto-scaling actions and to show the differences between the virtualization units and technologies. The average execution time, median time, and variance for containers, KVM VMs, and VMware VMs are, respectively: (0.010 s, 0.009 s, 0.000004), (3.29 s, 3.02 s, 2.97), and (47.58 s, 44.14 s, 45.44).
Based on these values, we conclude:
- The average execution time is close to the median time, which indicates that the execution of the elastic actions is stable.
- The elastic actions performed on containers are faster than resizing a KVM VM or a VMware VM. There is no comparison between container adaptation and hypervisors: containers adapt almost instantly to reconfiguration changes, while executing scaling actions against hypervisors takes much longer, and the VMware hypervisor managed by vCenter takes the longest. High workloads slow down the execution of elastic actions, particularly in VMware, which is why its variance is high.
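The reported statistics can be reproduced with the standard library; the timings below are placeholders, not the measured data:

```python
import statistics

# Placeholder timings (seconds) for 11 runs of an elastic action;
# the real measurements are reported in the text above.
timings = [0.009, 0.010, 0.011, 0.009, 0.010, 0.012,
           0.008, 0.010, 0.009, 0.011, 0.010]

print("mean    :", statistics.mean(timings))
print("median  :", statistics.median(timings))
print("variance:", statistics.variance(timings))  # sample variance
```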
RQ#3. This experiment compares the vertical elasticity of containers ($V_{cont}$), the vertical elasticity of VMs ($V_{vm}$), and our proposed approach coordinating the elasticity of both containers and VMs ($V_{vm} \wedge V_{cont}$) in terms of performance, i.e., the execution time of workloads and the mean response time of concurrent requests. We run three scenarios, each with its specific configuration, and five workloads drive each scenario. The experiment runs on 4 VMs: VM1, VM2, VM3, and VM4; VM4 is used to generate the ab benchmark. In the first scenario (scenario1), the RUBiS application is deployed in two containers on VM1, which has 3 vCPUs and 2 GB of RAM; initially, each Docker container has 1 vCPU and 512 MB of RAM. We enable the elastic container controller (which allows the containers to use the resources available on the hosting VM), named the ElasticDocker controller, and measure the total execution time and the mean response time of concurrent requests for each workload, as shown in Figure 2. In scenario2, we deploy the RUBiS application on one VM (VM2) and its database on another VM (VM3); each VM has 1 vCPU and 1 GB of RAM. We enable the vertical VM controller to adjust resources according to workload demand and again record the total execution time and the mean response time per workload, as shown in Figure 2. In scenario3, we use the same configuration as in scenario1, except that we enable our coordinating controller, which controls the elasticity of the containers on the VM and, if there are not enough resources, adds resources at the VM level. In Figure 2, red represents scenario1, green represents scenario2, and blue represents scenario3. Based on the analysis of this experiment, we conclude the following:
- In scenario1, the average total execution time and the mean response time across concurrent requests for the five workloads are 443.7 seconds and 0.91 ms, respectively. The average total execution times for the five workloads in scenario2 and scenario3 are 1383.4 seconds and 362.1 seconds, and the mean response times across concurrent requests are 3.1 ms and 0.76 ms, respectively.
- The combined vertical elasticity (scenario3) outperforms the container vertical elasticity (scenario1) by 18.34% and the VM vertical elasticity (scenario2) by more than 70%. However, if more workload were added in scenario1, it could not be handled, because the available resources would be exhausted and performance would degrade. This demonstrates that $V_{vm} \wedge V_{cont} > V_{vm} \oplus V_{cont}$ holds.
RQ#4. The aim of this experiment is to compare the horizontal elasticity of containers with our coordinated vertical elasticity of VMs and containers. We use Kubernetes horizontal elasticity; Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications, and we use version v1.5.2. Our deployment of RUBiS on Kubernetes uses three tiers: a load balancer (a Kubernetes service performs this role), a scalable pool of JBoss application servers, and a MySQL database. The Kubernetes platform is deployed on 3 nodes running CentOS Linux 7.2. RUBiS is deployed in two containers, in addition to the load balancer. We then configure Kubernetes Horizontal Pod Autoscaling (HPA) to scale the RUBiS containers based on rule-based thresholds, using the same thresholds as in scenario3 of the previous experiment (RQ#3). We generate two workloads against both our coordinated controller and the Kubernetes cluster. The total execution time across all concurrent requests is measured for each workload, as shown in Figure 3. According to these results, we conclude the following:
- The total execution time for the workloads is 340.66 seconds when our elastic controller is used, while it is 475.58 seconds when Kubernetes HPA is used. The execution time is longer when Kubernetes is used due to the slow Kubernetes integrated monitoring system (Heapster).
- Our combined vertical elasticity outperforms the horizontal elasticity by 39.6% according to the results of this experiment.
- This proves that $V_{vm} \wedge V_{cont} > H_{cont}$ holds.
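As with RQ#1, the 39.6% figure is the relative difference between the two measured execution times:

$$\frac{475.58 - 340.66}{340.66} \approx 0.396 = 39.6\%$$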
5 RELATED WORK
We present work related to elasticity, particularly the vertical elasticity of both VMs and containers. For VM vertical elasticity, some works focus on CPU resizing, e.g., (Lakew et al., 2014) and (Dawoud et al., 2012), others concentrate on memory resizing, e.g., (Baruchi and Midorikawa, 2011), and some combine both, such as (Farokhi et al., 2015). (Monsalve et al., 2015) proposed an approach that controls the CPU shares of a container using the CFS scheduling mode; with it, a container can use all the CPU shares if there is no contention from other containers. (Paraiso et al., 2016) proposed a tool to ensure the deployability and management of Docker containers. It allows synchronization between the designed containers and those deployed, and it allows manually decreasing and increasing the size of container resources. (Baresi et al., 2016) proposed a horizontal and vertical autoscaling technique based on a discrete-time feedback controller for VMs and containers. This novel framework allows resizing a container inside a high-capacity VM, but it does not control the VM in response to the container workload; instead, it triggers containers to scale out horizontally to cope with workload demand. In addition, the application requirements and metadata must be precisely defined for the system to work, and it adds overhead by inserting agents for each container and VM. (Al-Dhuraibi et al., 2017a) describe an approach that manages container vertical elasticity and, when there are no
more resources on the host, invokes live migration. Kubernetes and Docker Swarm are orchestration tools that permit container horizontal elasticity; they also allow setting limits on containers at creation time. The related works either trigger horizontal elasticity or migrate to other, higher-capacity machines. Our proposed approach supports automatic vertical elasticity of both containers and VMs; at the same time, the container controller invokes the VM controller to trigger scaling actions when there are no more resources on the hosting machine. Our work is the first to explore the coordination between the vertical elasticity of containers and that of VMs.
6 CONCLUSION
This paper proposes a novel coordinated vertical elasticity controller for both VMs and containers. It allows fine-grained adaptation and coordination of resources for containers and their hosting VMs. Experiments demonstrate that (i) our coordinated vertical elasticity outperforms the vertical elasticity of VMs by 70% and the vertical elasticity of containers by 18.34%, and (ii) our combined vertical elasticity of VMs and containers outperforms the horizontal elasticity of containers by 39.6%. In addition, the controller performs elastic actions efficiently. We plan to experiment with this approach on different classes of applications, such as RTMP, to verify whether the same results are obtained with the predefined thresholds. Our future work also comprises the integration of a proactive approach that anticipates future workloads and reacts in advance. Furthermore, we plan to address hybrid elasticity, or what we call diagonal elasticity: integrating both horizontal and vertical elasticity.
ACKNOWLEDGEMENTS
This work is supported by Scalair company (scalair.fr) and OCCIware (www.occiware.org) research project.
REFERENCES
Red Hat, Inc. Managing System Resources on Red Hat Enterprise Linux.
Open Archives Initiative – Object Reuse and Exchange
Report on the Technical Committee Meeting, January 11–12, 2007
Edited by: Carl Lagoze & Herbert Van de Sompel
1 Venue
Butler Library, Columbia University, New York City
2 Final Agenda
Thursday, January 11
| Time | What | Details | Who |
|---|---|---|---|
| 9:00-9:30 | Welcome | Logistics, Motivation, Agenda Overview, Goals | Herbert and Carl |
| 9:30-10:30 | Round-table introductions | Who are you? How does this relate to your experience/job/background? What constituencies do you represent? | TC members |
| 10:30-10:45 | Break | | |
| 10:45-12:30 | Problem Overview | White Paper | Herbert and Carl |
| 12:30-13:30 | Lunch | Provided | |
| 13:30-15:30 | Feedback (1) | Prepared remarks about white paper & ORE Problem space | TC members |
| 15:30-15:45 | Break | | |
| 15:45-17:00 | Feedback (2) | Prepared remarks about white paper & ORE Problem space | TC members |
Friday, January 12
| Time | What | Details | Who |
|---|---|---|---|
| 9:30-10:00 | Review of Thursday & Planning for Friday | Progress; Disagreements; How to proceed? | Herbert and Carl (with TC members) |
| 10:00-12:15 (with break) | Reaching Consensus (1) | Problem Statement; Scoping; Target Participants | All |
3 Attendees: Introductions, Affiliations, Activities and Interests
**Tim DiLauro**
- Johns Hopkins University, Library Digital Programs/Digital Knowledge Center of the Sheridan Libraries
- Active interest in: interface between data and services, big science data repositories, humanities data repositories, digital manuscripts, entity matching, digital preservation
- Ongoing IMLS project with National Virtual Observatory regarding publications that include data
**Dave Fulker**
- University Corporation for Atmospheric Research
- Co-founder of NSDL
- Founder of UNIDATA effort regarding exchangeable scientific data objects
- Active interest in: real-time data, data exchange with appropriate semantics, data visualization
**Tony Hammond**
- Nature Publishing Group (NPG), New Technology Group
- 10 years of experience in STM publishing
- Active involvement in specifying NISO OpenURL Standard, info URI
- Active involvement in efforts related to deploying the publishing infrastructure for the digital age including: identifiers (DOI, info URI), multiple resolution of DOIs, OpenURL (NISO standard and pre-standard), SRU, RSS
- Active involvement in NPG’s Connotea, and Open Text Mining (OTMI) efforts
**Pete Johnston**
- EduServ Foundation
- Proxy for Andy Powell
- Digital Library and eLearning environments
- Active involvement in efforts related to deploying the publishing infrastructure for the digital age including DCMI
- Previously worked at UKOLN
- Strong interest in semantic web, web architecture and their application to scholarly and learning repositories
**Richard Jones**
- Imperial College, Institutional Repository effort
- Previously at University of Bergen (Norway) and Edinburgh University
- Active in DSpace community as committer and member of DSpace Architectural Review Group
- Active interest in Electronic Thesis and Dissertation, moving objects across repositories, linking data, workflow
- Member of JISC Common Repository Interface Group
**Carl Lagoze**
- Cornell University, Computing and Information Science
- Active involvement in specifying OAI-PMH
- Active involvement in other interoperability efforts related to web information systems including Dienst, Dublin Core, Fedora, and the ABC metadata ontology.
- Ongoing research in new forms of scholarly communication including the use of machine learning methods to recognize compound objects on the web and analysis of hybrid social/bibliographic networks.
**Peter Murray**
- OhioLINK
- Large-scale system for statewide higher education access to A&I databases, full-text collections, catalogues
- Statewide hosted digital content repository
- Active in Fedora community
- Active interest in digital preservation, workflow, versioning
**Michael Nelson**
- Old Dominion University, Computer Science Department
- Previously responsible for NASA’s Digital Library efforts
- Active involvement in specifying OAI-PMH
- mod_oai project to bring the power of the OAI-PMH to Web servers
- Ongoing research efforts in the realm of digital preservation
**Ray Plante**
- University of Illinois at Urbana-Champaign, National Center for Supercomputer Application, National Virtual Observatory
- Supporting astronomical research over the network: publishing data, resource registries, discovery of services, embedding data products in publications
**Robert Sanderson**
- University of Liverpool, Department of Computer Science
- Active involvement in specifying SRU/W
- Active involvement in efforts related to deploying the scholarly communication infrastructure for the digital age including: SRU/W, unAPI, OAI-PMH, OpenURL
- Member of UK National Centre for Text Mining
- Close collaboration with San Diego Supercomputer Center
- Active interest in: text and data mining, XML, information retrieval, GRID computing, digital manuscripts
**Herbert van de Sompel**
- Los Alamos National Laboratory, Research Library, Digital Library Research & Prototyping Team
- Active involvement in specifying OAI-PMH, OpenURL (NISO standard and pre-standard), info URI, MPEG-21 DID and DII
- Active involvement in efforts related to deploying the scholarly communication infrastructure for the digital age including: OAI-PMH, info URI, OpenURL, SFX linking server
- Long-standing interest in helping establish the technical foundations for scholarly communication in the digital age
- Ongoing research efforts in the realm of digital preservation (NDIIP), digital scholarly communication (Pathways), alternative quality assessments of scholarly communication units (MESUR), repository and repository federation architecture (aDORe)
**Simeon Warner**
- Cornell University, Computing and Information Science
- Long-standing involvement with arXiv.org
- Strong interest in making arXiv an integrated part of the global research environment
- Active involvement in specifying OAI-PMH
- Involved in projects related to OAI-ORE: NSF Pathways (scholarly communication as a cross-repository workflow) and remote submission to arXiv (from CNRS France)
**Jeff Young**
- OCLC Research
- Active involvement in efforts related to deploying the publishing infrastructure for the digital age including: registries (OpenURL, info URI), identifiers (info URI), OAI-PMH, OpenURL, SRU, RSS
- Developer of some of the most widely used toolkits for OAI-PMH and OpenURL deployment
4 Meeting Results
4.1 Intellectual property and OAI-ORE work
The TC made two decisions regarding the results of the OAI-ORE work:
1. All public documents will be covered under a Creative Commons license
2. The group made a handshake agreement that there would be no attempt to patent results from the collective OAI-ORE effort
4.2 Definition of OAI-ORE Objectives
Develop, identify, and profile extensible standards and protocols to allow repositories, agents, and services to interoperate in the context of use and reuse of compound digital objects beyond the boundaries of the holding repositories.
4.3 Compound Digital Objects
Digital content with multiple components that may vary on multiple axes including:
- Content (semantic) types including:
  - Text, image, video, audio
  - Datasets
  - Simulations
  - Software
  - Dynamic knowledge representations
  - Machine readable chemical structures
  - Bibliographic and other types of metadata
- Media types including:
  - IANA registered MIME types
  - Other type registries such as the Global Digital Format Registry (GDFR)
- Network locations including content from:
  - Institutional repositories
  - Scientific data repositories
  - Social networking sites
  - General web
- Relationships including:
  - Lineage
  - Versions
  - Derivations
See Figure 1 for a sample (and simple) compound digital object. It depicts an imaginary object from arXiv.org from which several views are available. Informally, a view can be considered an alternate presentation of the content or meaning of the digital object. In this example, the views are the article in various formats, a splash page in HTML, and Dublin Core metadata.
Figure 2 provides a more elaborate and complete depiction of the same object, which now contains a component. Informally, a component can be considered a subpart of the main object, for example, a dataset or a chapter. In addition, the figure shows a relationship (for example, a citation) to an external object. Note that the component is logically within the boundary of the primary object, i.e., it is a part of it, whereas the cited object is outside the logical boundary.
From here onwards, we refer to both the components and the views of a compound digital object as members of the compound digital object.
Figure 1: A sample (simple) compound digital object
Figure 2: A more complete picture of a compound digital object
4.4 Scope of Use Cases and Applications
The standards and protocols endorsed, profiled or defined by OAI-ORE are intended to facilitate use and reuse of these compound digital objects and their components in the context of workflows supporting research and learning, while supporting notions of reference-ability, longevity, integrity, certification, and reproducibility that are foundations of scholarly communication. These workflows include a variety of services and applications that:
- facilitate discovery of these objects,
- reference (link to) these objects (and their members),
- obtain a variety of disseminations of these objects,
- aggregate and disaggregate these objects,
- enable processing by automated agents
4.5 Target adopters of OAI-ORE standards and protocols
**Systems that manage content including:**
- Institutional repositories
- Research-group and managed personal (ePortfolio) repositories
- Discipline-oriented repositories
- Publisher repositories
- Dataset repositories
- Cultural heritage repositories, including digitized museum and art collections
- Learning object repositories
- Digital and digitized text and manuscript collection management systems
**Systems that consume that managed content including:**
- All the aforementioned systems because, in many cases, those systems also ingest content from other systems and/or provide services over the content they manage.
- Search engines
  - Specialized/discipline-specific
  - General web applications
- Productivity tools including
  - Authoring tools
  - Citation management
- Indexing and abstracting services
- Aggregators
- Collaborative environments
- Object-based social network applications
- Data processing applications including
  - Data mining
  - Text mining
  - Scientific analysis tools
- Graph analysis applications including
  - Link checkers
  - Object-based citation checkers
- Preservation services and other data management services
- Research assessment services
  - Report generation
- Workflow tools
4.6 Proposed use cases
Over the next month the members of the technical committee will collaboratively (using a Wiki) develop a set of use cases against which the protocols and specifications will be tested. These use cases should be constructed as follows:
- One paragraph that describes a usage scenario and motivation from the perspective of an end user, without describing the technical details of how the scenario is implemented. We note that each use case will inevitably require applications, registries, and services that extend beyond the scope of the OAI-ORE work.
- Commentary on how the protocols and specifications developed by OAI-ORE will facilitate the implementation of the use case.
The draft use cases and the TC members responsible for fleshing them out are as follows:
- Find, collect, analyze, relate, and publish data-oriented scholarly objects – Dave Fulker, Ray Plante
- Find, collect, analyze, relate, and publish text-oriented scholarly objects – Rob Sanderson, Tony Hammond
- Preservation of compound digital objects – Tim DiLauro, Michael Nelson
- Remote submission of compound digital objects – Simeon Warner, Jeff Young, Richard Jones
- Citation management – Herbert Van de Sompel, Tony Hammond
- Object equivalence recognition (de-duping) to aid resource discovery – Pete Johnston, Andy Powell
- Graph-based quality assessment with eScience focus – Carl Lagoze, Peter Murray, Ray Plante
4.7 Relationship to web architecture
The TC spent considerable time analyzing the components of web architecture – URIs, resources, representations (see Figure 3) – and its implementation via HTTP. The analysis was based on a general consensus that results of OAI-ORE should align and not conflict with web architecture and should use that architecture as a foundation for the standards and protocols developed. As much as possible the OAI-ORE standards should be a specialization of existing web architecture concepts with the goal of meeting the requirements of the target adopters of OAI-ORE standards in the context of the defined use cases.
In the remainder of this report the usage of the terms resource and representation will be restricted to their definition in the web architecture (see http://www.w3.org/TR/webarch/).
Our analysis of the web architecture led to consensus around the following observations:
- The graph described by the web architecture document contains two types of nodes:
  - Resources:
    - These are first-class objects with a standalone identity (URI).
    - They can be the target of links (or references), and the links may be typed to indicate the nature of the relationships between source and target. The following considerations apply to link typing:
      - It is not widely adopted/exploited in general Web applications;
      - Controlled vocabularies to define link types are not widely adopted;
      - The manner in which to express link types is specific to document formats, such as HTML and XML, that have their own link tags.
  - Representations:
    - These are second-class objects that are identified only via the resource that they represent. A representation is the result of applying a service to an identifier of a resource. Since there is a one-to-many relationship between a resource and its representations, the representations have no unique identity. As a result:
      - There is no means to link to (or reference) a representation.
      - They are only accessible through protocol negotiation.
- Although specific document formats (e.g., HTML, XML) express a notion of composite documents, the web architecture does not itself address the question of how to describe the composition of a compound digital object that aggregates a number of resources in multiple content types. Specifically, it does not address the following:
  - What are the boundaries of a compound digital object, where that boundary contains a finite aggregation of members and a finite set of relationships among those members?
  - What are the types of relationships between the aggregated members of a compound digital object?
**Figure 3: Web architecture (see http://www.w3.org/TR/webarch/)**
4.8 Expressing compound digital objects using Web architecture concepts
As a result of this analysis we agreed on the following points that frame the requirements of the OAI-ORE work in relation to the design features of the web architecture.
A key requirement of the OAI-ORE scenarios for use and reuse of compound digital objects is the ability to unambiguously identify and reference both the compound digital object and its components (e.g., a dataset as a sub-object, as in Figure 2; chapters of a book; sections of a paper). A further requirement is the ability to unambiguously identify and reference "views" of these components (e.g., the pdf version or the ps version of a journal article). As a result, a compound digital object, its components, and these "views" must be modeled as resources (i.e., they must be identified by URIs) if they are to be available for re-use. Figure 4, contrasted with Figure 5, illustrates the need for this. In the former, the views are representations, without unique identity; in the latter, the views have been broken out to have a one-to-one correspondence with identifiable resources. Note that in both figures, the notion of the boundary of the compound digital object is not expressed.
Figure 4: Compound digital object modeled according to Web architecture; specific views not reference-able; boundary of compound digital object not expressed
It follows that the compound digital objects that are the subject of the OAI-ORE effort must be bounded aggregations of resources and their relationships. This aggregation must itself be a first-class identifiable object, because the aggregation corresponds to a logical digital object, which should be reference-able, linkable, etc. Therefore, it must be rooted by an identifying resource, which we refer to as the ORE resource (see Figure 6).
The URI of an ORE resource serves as the access point for service requests upon the aggregation. A specific service request on this URI returns a representation that describes the members of the aggregation (including the ORE resource) and their relationships. In the remainder of this document, we refer to this representation as the ORE representation (see Figure 6), and to the aggregation described by an ORE representation as an ORE aggregation.
Figure 5: Compound digital object modeled according to Web architecture; specific views reference-able; boundary of compound digital object not expressed
Figure 6: Compound digital object modeled according to Web architecture; specific views referencable; boundary of compound digital object expressed via an ORE representation.
Specifically, ORE aggregations form a sub-class of the set of possible aggregations of resources. This sub-class has the following distinguished properties:
- **An ORE aggregation has a boundary**: A key requirement of the OAI-ORE scenarios for use and reuse of compound digital objects is the ability to describe the boundaries – the finite set of resources and relationships – that correspond to compound digital objects. Note that these relationships internal to the boundaries of the ORE aggregation should have defined types that are specifically of a sub-class of all possible relationships - *intra-aggregation relationships* (Figure 7). Therefore OAI-ORE must define a standardized model that can describe this boundary and that can be instantiated for ORE resources.
- **The resources in an ORE aggregation may have relationships to resources external to the aggregation**: A key requirement of the OAI-ORE scenarios for use and reuse of compound digital objects is the ability to express relationships between the members of a compound digital object (the resources in the ORE aggregation) and resources that are external to the aggregation. These relationships external to the boundaries of the ORE aggregation should have defined types that are specifically of a sub-class of all possible relationships - *inter-aggregation relationships* (Figure 7). Therefore the model defined by the OAI-ORE must describe the typed relationships between resources in an ORE aggregation and external resources.
**Figure 7**: Compound digital object modeled according to Web architecture; specific views referencable; boundary of compound digital object expressed via an ORE representation; inter and intra object relationships.
Representations should not be explicitly included in an OAI-ORE model for compound digital objects. Of course, the end product of any web transaction applied to a resource is a representation; since ORE service requests, described later, are a sub-class of web transactions, they ultimately return representations.
4.9 A model for aggregating web resources and expressing their relationships
In order to fulfill the OAI-ORE requirements enumerated above, we tentatively define the following aspects of the OAI-ORE work:
I) Define a model (referred to from here on as the ORE Model) that describes an ORE aggregation – a finite set of resources and the relationships among the resources of that aggregation – and the relationships between that ORE aggregation and its member resources and resources that are external to the ORE aggregation. The aspects of that model and the aggregations that it describes are as follows:
A) The model formally describes a connected sub-graph with nodes that are resources, with optional typing (e.g. content-type, media-type, etc.), and edges that are intra-aggregation relationships between the resources that fall into two classes:
1) hasPart that defines a containment relationship.
(i) Examples are chapters of a book, sections of a research paper (e.g., abstract, introduction, etc.)
(ii) This relationship is recursive; i.e., the target resource of a hasPart relationship can be the source of additional relationships.
2) hasView that defines a relationship whereby the target resource is a presentation or view of the source resource.
(i) Examples are alternate formats of a scholarly paper (e.g., pdf, word, etc.) or a structured metadata (e.g., Dublin Core) presentation of that paper modeled as autonomous resources.
(ii) The target of the hasView relationship is always a leaf node; i.e., it can never be the source of further relationships.
B) The connected graph has the following additional characteristics:
1) It may be a sub-graph of a larger connected graph since multiple digital objects may be logically aggregated within another digital object via the hasPart relationship.
2) It is a rooted graph with the root being a distinguished node that is an ORE resource with the following characteristics:
(i) All other resources (nodes) in the sub-graph that is the respective ORE aggregation can be reached by following the directional relationships (hasPart and hasView) from the ORE resource.
(ii) The URI of that ORE resource is the URI of the logical unit that is the compound digital object.
(iii) The ORE resource provides access to instances (serializations) of the model through OAI-ORE services as defined in the next section.
C) The ORE Model defines one other relationship, hasRelationshipTo, which expresses relationships between resources within the ORE aggregation (confined by the hasPart and hasView relationships) and resources external to the aggregation. We expect that this relationship will be specialized, driven by application- or community-based needs, to accommodate semantics such as lineage, derivation, citation, etc.
II) Define a format that can be used to serialize instances of the ORE Model. This format can be defined using schema mechanisms such as XML schema, OWL, etc.
III) Define a mechanism to associate instances of the model with ORE resources, and thereby provide the basis for services upon the defined aggregations of resources.
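To make this definition concrete, here is a minimal sketch, in Python, of an ORE aggregation as a rooted graph with typed edges. All class, constant and URI names are our own illustrations, not part of any OAI-ORE specification.

```python
# Illustrative sketch of the ORE Model above; names and URIs are invented.
HAS_PART = "hasPart"                        # containment; may be recursive
HAS_VIEW = "hasView"                        # target is always a leaf node
HAS_RELATIONSHIP_TO = "hasRelationshipTo"   # points outside the aggregation

class Resource:
    def __init__(self, uri):
        self.uri = uri
        self.edges = []         # list of (relationship type, target Resource)

    def add(self, rel, target):
        self.edges.append((rel, target))

def aggregation_boundary(ore_resource):
    """The boundary of the ORE aggregation: every resource reachable from
    the root ORE resource via the intra-aggregation relationships."""
    boundary, stack = set(), [ore_resource]
    while stack:
        node = stack.pop()
        if node.uri in boundary:
            continue
        boundary.add(node.uri)
        for rel, target in node.edges:
            if rel in (HAS_PART, HAS_VIEW):   # hasRelationshipTo is excluded
                stack.append(target)
    return boundary

# A compound object: an article with one section and a PDF view, citing
# a resource that lies outside the aggregation.
article = Resource("http://example.org/article")        # the ORE resource
section = Resource("http://example.org/article/sec1")
pdf = Resource("http://example.org/article.pdf")
cited = Resource("http://elsewhere.org/other-article")
article.add(HAS_PART, section)
article.add(HAS_VIEW, pdf)
section.add(HAS_RELATIONSHIP_TO, cited)                 # inter-aggregation edge

assert cited.uri not in aggregation_boundary(article)
```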
Figure 8 shows the compound digital object of the previous pictures expressed according to the above preliminary ideas regarding the ORE Model. Figure 9 shows an initial rendering of the ORE Model.

**Figure 8:** Compound digital object modeled according to (preliminary) ORE Model. Specific views referencable; intra (hasPart, hasView) and inter (hasRelationshipTo) relationships.
4.10 Defining OAI-ORE services
OAI-ORE services are transactions that exchange instances of the model described above. These instances are associated with ORE resources that, as described in the previous section, are the access point for operations on an aggregation of web resources. We define three classes of transactions that form the basis of the OAI-ORE service framework:
- **Harvest**: A request for a batch of instances that correspond to the ORE Model from a set of ORE Resources. One typical application of this service is harvesting by search engine robots.
- **Obtain**: A request for an instance that corresponds to the ORE Model from a specific ORE Resource. Typically this transaction will initiate an access transaction for the compound digital object or parts thereof.
- **Register**: A request to add new nodes or relationships to an ORE aggregation. Typically this will take the form of adding a new compound digital object to a collection or repository.
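Purely as an illustration, the three service classes can be pictured as the interface below; the class name, method signatures and in-memory store are hypothetical, not part of any defined OAI-ORE protocol.

```python
# Hypothetical sketch of the three OAI-ORE service classes.
class InMemoryOREService:
    def __init__(self):
        self.store = {}  # ORE resource URI -> serialized ORE representation

    def obtain(self, ore_uri):
        """Obtain: return the ORE representation for one ORE resource."""
        return self.store[ore_uri]

    def harvest(self, ore_uris):
        """Harvest: return a batch of representations, e.g. for a search
        engine robot."""
        return [self.store[u] for u in ore_uris if u in self.store]

    def register(self, ore_uri, serialized_model):
        """Register: add a new aggregation (or new nodes/relationships)."""
        self.store[ore_uri] = serialized_model
```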
Figure 9: A preliminary rendering of the ORE Model
5 Plans and Action Items
- Set up Wiki for collaborative work – assigned to Tim DiLauro for completion by January 19th
- Set up Connotea group for sharing of resources/citations – assigned to Tony Hammond for completion by January 19th
- Everyone will set up accounts and send user name to Tony
- All cites will be tagged oaiore
- Fleshed out use cases to be completed by February 2nd
- UML for proposed ORE model by February 13th
- Conference call – either Feb 12 or 13 for two hours starting at 11:00 GMT-5
- Discuss use cases
- Model review
- Begin appropriate technologies discussion
- Two (monthly) additional conference calls leading up to next meeting
- Next meeting – two-day meeting somewhere in the span of May 28-30, 2007.
- Start implementation details
SPIR-V Shader Size Reduction Using spirv-opt
Greg Fischer, LunarG Inc.
December 13, 2017
Summary
This paper discusses how code transformations available as passes in spirv-opt can be used to reduce the size of SPIR-V shaders for the Vulkan graphics API. It provides a sample recipe of the options and describes the specific passes so that users of spirv-opt can best decide how to adjust the recipe and reduce the size of their SPIR-V shaders.
Introduction
Since the inception of SPIR-V, there has been an interest in optimizing its code, particularly with the goal of reducing its size. Understandably, the raw SPIR-V emitted from the frontend glslangValidator is verbose, particularly with regard to function scope variables and their loads and stores. Significant opportunities for size reduction come from 1) eliminating these loads and stores, 2) eliminating dead code due to branches with constant conditionals and 3) eliminating replicated accesses to uniform variables, including images and samplers.
We have introduced passes to spirv-opt that use classic code optimization techniques to address these inefficiencies in a SPIR-V module. Combined with spirv-remap, which we use to remove module-level dead types and functions, these passes can reduce raw SPIR-V size by over 60%, and bring SPIR-V sizes within 40% of DX Byte Code.
These passes have been initially designed to work with SPIR-V modules for graphics APIs such as Vulkan and OpenGL. Such shaders use logical addressing. Modules with physical addressing, such as those for the OpenCL API, will regrettably not derive much benefit at this time. The scope of these passes was restricted to speed the implementation and delivery of their benefits to the graphics community. The structured control flow of shaders and the simplified memory accesses of logical addressing simplify these passes. Optimization of kernels is left for future work.
There are several other features which these passes do not support: please refer to the Limitations section of this document for more information. Running these passes on modules with unsupported features will cause the pass to return silently without changing the module.
The -Os option
For those developers who wish to derive the benefits of SPIR-V size reduction without specifying individual passes, the -Os option is available. It invokes a recommended set of passes for size reduction. One advantage of this option is that developers will always get the “latest and greatest” passes and recipe. Another advantage is that developers will not need to continually update their build scripts as new passes and recipes become available. Those developers that wish to have finer control over which passes are invoked are invited to continue reading.
A Recipe
The following is one suggested recipe of spirv-opt passes to reduce the size of a graphics SPIR-V module:
```
--inline-entry-points-exhaustive
--convert-local-access-chains
--eliminate-local-single-block
--eliminate-local-single-store
--eliminate-insert-extract
--eliminate-dead-code-aggressive
--eliminate-dead-branches
--merge-blocks
--eliminate-local-single-block
--eliminate-local-single-store
--eliminate-local-multi-store
--eliminate-insert-extract
--eliminate-dead-code-aggressive
--eliminate-common-uniform
```
Some passes expose optimization opportunities for other passes, and the recipe orders the passes to take this into account. Performing exhaustive inlining first is an important case in point.
Note that the order of spirv-opt pass options controls the order in which the passes are applied, and repeated options cause repeated application of the pass.
This recipe may be somewhat overkill for some shaders, but is meant to show how all the passes might effectively fit together. Some shaders may require fewer passes; some may require more.
* Execution of spirv-opt with this recipe should ideally be followed with execution of “spirv-remap --strip all --dce all” to remove debug instructions and module-level dead types and functions.
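For convenience, the recipe and the remap step can be scripted. The following is a minimal sketch in Python that shells out to spirv-opt and spirv-remap with the options listed above; it assumes both tools are on the PATH, and the output-path handling of spirv-remap (here an output directory) may differ between versions.

```python
# Sketch: apply the size-reduction recipe above with spirv-opt, then run
# spirv-remap per the note above. Tool availability and the spirv-remap
# output-directory convention are assumptions.
import subprocess

RECIPE = [
    "--inline-entry-points-exhaustive",
    "--convert-local-access-chains",
    "--eliminate-local-single-block",
    "--eliminate-local-single-store",
    "--eliminate-insert-extract",
    "--eliminate-dead-code-aggressive",
    "--eliminate-dead-branches",
    "--merge-blocks",
    "--eliminate-local-single-block",
    "--eliminate-local-single-store",
    "--eliminate-local-multi-store",
    "--eliminate-insert-extract",
    "--eliminate-dead-code-aggressive",
    "--eliminate-common-uniform",
]

def shrink(shader_in: str, shader_out: str, remap_dir: str) -> None:
    # Option order matters: passes run in the order given, and repeated
    # options repeat the pass.
    subprocess.run(["spirv-opt", *RECIPE, shader_in, "-o", shader_out],
                   check=True)
    # Strip debug instructions and module-level dead types/functions.
    subprocess.run(["spirv-remap", "--strip", "all", "--dce", "all",
                    "--input", shader_out, "--output", remap_dir],
                   check=True)
```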
Pass Descriptions
We will now discuss each pass individually to help users make adjustments to the recipe above.
Note that each pass is applied to all entry point functions, and then recursively to the functions they call, until all functions in all of the entry point call trees have been processed.
Exhaustive Inlining (--inline-entry-points-exhaustive)
Shaders often are broken into separate subroutines to increase modularity. But these subroutine calls create a barrier to data flow analysis and subsequent optimizations. One method for eliminating these barriers is through exhaustive inlining of function calls in entry point functions. While exhaustive inlining can cause code size increase, in practice the benefits of increased analysis and optimizations have outweighed these costs.
The inlining of a function in SPIR-V is generally straightforward. One exception is functions with early return. The structured control flow of SPIR-V shaders does not allow for the branch out of a conditional that would be required. For the moment, this is implemented by creating a one-trip loop around the called function, replacing the early return with a branch to the outer loop’s merge block, which is permissible in SPIR-V. The only time this doesn’t work is when early returns are already inside of a loop. For the moment, the inliner will not inline such functions. The resulting code will be correct, but its optimization will be curtailed by the remaining function call. Improving this situation is left to future work.
Local Variable Access Chain Conversion (--convert-local-access-chains)
Shaders often have code that packs data into and unpacks data from structures, typically across function calls. After inlining, this copying code is a significant opportunity for optimization.
This pass converts all local variable access chain loads and stores with constant indices into their equivalent load or store combined with an insert or extract. For example, the access chain load:
```
%20 = OpAccessChain %_ptr_Function_v4float %s0 %int_1
%21 = OpLoad %v4float %20
```
is converted to:
```
%24 = OpLoad %S_t %s0
%25 = OpCompositeExtract %v4float %24 1
```
and the access chain store:
```
%19 = OpAccessChain %_ptr_Function_v4float %s0 %int_1
OpStore %19 %18
```
is converted to:
```
%22 = OpLoad %S_t %s0
%23 = OpCompositeInsert %S_t %18 %22 1
OpStore %s0 %23
```
Since stores are converted to a longer instruction sequence, this conversion is done for local variables that are only accessed through such loads and stores and are thus guaranteed to ultimately be optimized away. Many of the passes in this discussion restrict their optimizations to such variables.
Among the reasons for this conversion is to allow dataflow analysis to concentrate on one form of composite reference, inserts and extracts, and allow it to ignore interaction with access chains. Another reason is that extracts have the desirable property that a group of loads from a single composite object can share a single load operation. Likewise, a sequence of inserts to the same composite object ultimately can share a single load and a single store. This allows for many loads and stores to be easily eliminated during single block local variable elimination, discussed next.
Local Store/Load Elimination - Single Block (--eliminate-local-single-block)
Elimination of local store and load instructions is a significant opportunity for SPIR-V size reduction, but elimination of loads and stores is also a de facto method for value propagation, so elimination of stores and loads aids analysis and optimizations that are dependent on values, such as dead branch elimination.
General elimination of local variables and their loads and stores across an entire function requires a complex and expensive algorithm. It is therefore often beneficial in compile time to eliminate some load and stores (and possibly their variable) with simpler and cheaper algorithms first. Eliminating stores and loads within a single block allows control flow analysis to be ignored.
This pass eliminates store/load and load/load pairs to the same local variable in the same block. It optimizes only direct loads and stores of variables. If a store is not live at the end of the block, it will be deleted.
For example, the store-load sequence in:
```
OpStore %v %14
%15 = OpLoad %v4float %v
OpStore %gl_FragColor %15
```
could be optimized to:
```
OpStore %gl_FragColor %14
```
And the load-load sequence in:
```
%31 = OpLoad %v4float %v
OpStore %32 %31
%33 = OpLoad %v4float %v
OpStore %34 %33
```
would be optimized to:
```
%31 = OpLoad %v4float %v
OpStore %32 %31
OpStore %34 %31
```
Access chain loads and stores are not optimized and may actually inhibit optimization. This optimization is therefore heavily dependent on the access chain conversion described above. Function calls can also inhibit dataflow analysis and optimization, so inlining beforehand is also highly recommended.
Local Store/Load Elimination - Single Store (--eliminate-local-single-store)
Like the single block load/store elimination above, this pass also optimizes a simple, specific case: a local variable stored to only once. All loads in the same function that the store dominates can simply be replaced with the store’s value.
Access chain loads and stores are not optimized and may inhibit optimization, so access chain conversion is recommended beforehand. Function calls also may inhibit dataflow analysis and optimization, so inlining is also recommended beforehand.
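As an illustration of the idea (not spirv-opt’s actual implementation), here is a toy Python sketch over an invented tuple-based instruction encoding, with the dominance test supplied by the caller:

```python
# Toy sketch of single-store load elimination. The instruction encoding
# ("store", var, value) / ("load", result, var) and the dominance oracle
# are invented for illustration; spirv-opt's real IR differs.
def eliminate_single_store(instrs, dominates):
    # Collect the store sites for each local variable.
    stores = {}
    for i, ins in enumerate(instrs):
        if ins[0] == "store":
            stores.setdefault(ins[1], []).append(i)
    out, replaced = [], {}
    for j, ins in enumerate(instrs):
        if ins[0] == "load":
            var = ins[2]
            sites = stores.get(var, [])
            # A variable stored exactly once: loads dominated by the
            # store are replaced with the stored value.
            if len(sites) == 1 and dominates(sites[0], j):
                replaced[ins[1]] = instrs[sites[0]][2]
                continue  # drop the load
        out.append(ins)
    # Uses of the dropped load results would be rewritten via 'replaced';
    # the now-dead store is left for a later dead-code pass.
    return out, replaced
```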
Insert/Extract Elimination (--eliminate-insert-extract)
After access chain conversion and store/load removal, sequences similar to the following may appear:
```
%20 = OpCompositeInsert %S_t %18 %19 0
...
%22 = OpCompositeInsert %S_t %21 %20 1
...
%24 = OpCompositeInsert %S_t %23 %22 2
...
%26 = OpCompositeExtract %v4float %24 1
%29 = OpFMul %28 %26
```
These sequences are typical in shaders which pack and then unpack a composite object across a function call that has been inlined.
Extracts such as the one above may be simply replaced with the corresponding inserted value in the insertion sequence. For example, the Extract and FMul above can be replaced with:
```
%29 = OpFMul %28 %21
```
Similar to store/load elimination, insert/extract elimination both reduces code size and aids analysis through propagation of values.
Dead Branch Elimination (--eliminate-dead-branches)
It is possible for a shader to contain significant sections of code that are never executed because they are control dependent on a conditional branch whose value is always false. After inlining, store/load and insert/extract elimination, some such opportunities may be exposed as constant boolean values are propagated through the shader into conditional branch instructions.
This pass finds conditional branches on constant boolean values, converts the conditional branch into the correct unconditional branch and eliminates all possible resulting dead code. For example:
```
...
OpBranchConditional %true %21 %22
%21 = OpLabel
OpStore %v %14
OpBranch %20
%22 = OpLabel
OpStore %v %16
OpBranch %20
%20 = OpLabel
%23 = OpLoad %v4float %v
...
```
would be replaced with:
```
...
OpBranch %21
%21 = OpLabel
OpStore %v %14
OpBranch %20
%20 = OpLabel
%23 = OpLoad %v4float %v
...
```
Such dead code elimination, besides reducing the number of instructions, simplifies control flow and thus creates additional opportunities for analysis and optimization. For example, a local variable assigned twice before dead branch elimination might only be assigned once after dead branch elimination, creating an additional opportunity for single store load/store elimination. So repetition of passes may be beneficial, depending on the shaders.
Block Merge (--merge-blocks)
After dead branch elimination, sequences of single blocks are often left, such as the “after” sequence in the dead branch elimination section above. The Block Merge pass cleans up such sequences, creating a single block from them. Specifically, this pass searches for a first block with a branch to a second block that has no other predecessors. When this is found, the first and second blocks can be combined into a single block. For example, the final sequence in the dead branch section above becomes:
```
...
OpStore %v %14
%23 = OpLoad %v4float %v
...
```
Besides eliminating instructions, this has the benefit of creating new opportunities for single block store/load elimination as it moves instructions previously in different blocks into the same block. For example, in the case above, executing single block store/load elimination would delete the load, replace %23 everywhere with %14, and the store could potentially be removed as well.
Local Store/Load Elimination - Multiple Store (--eliminate-local-multi-store)
This pass is used to eliminate all remaining local variables that are only accessed directly with loads and stores. Variables with access chain references are not optimized. Thus, this phase is most effective when it follows exhaustive inlining and access chain conversion.
The algorithm tracks each variable and its stored value through the program. If at any point multiple values for a single variable reach a block, a phi operation is generated which merges the multiple values into one value; that value is used for the variable from that point until it is assigned again or another merge point is reached. If the variable is loaded, the load is deleted and the loaded value is replaced with the stored value. All stores of candidate variables are finally deleted.
For example, the following sequence:
```
...
OpBranchConditional %22 %24 %25
%24 = OpLabel
%27 = OpVectorTimesScalar %v4float %26 %float_0_5
OpStore %v %27
OpBranch %23
%25 = OpLabel
%29 = OpFAdd %v4float %28 %18
OpStore %v %29
OpBranch %23
%23 = OpLabel
%30 = OpLoad %v4float %v
OpStore %gl_FragColor %30
```
would be changed to:
```
...
OpBranchConditional %22 %24 %25
%24 = OpLabel
%27 = OpVectorTimesScalar %v4float %26 %float_0_5
OpBranch %23
%25 = OpLabel
%29 = OpFAdd %v4float %28 %18
OpBranch %23
%23 = OpLabel
%31 = OpPhi %v4float %27 %24 %29 %25
OpStore %gl_FragColor %31
```
Note all stores and loads of %v have been removed and the phi value is stored to gl_FragColor.
As stated earlier, this pass will run more efficiently if local loads and stores that can be eliminated with simpler passes are eliminated. It will also run more efficiently if any dead control flow is eliminated.
Currently this pass will generate phi functions which are not used. It is therefore beneficial to run a dead code elimination pass after this pass. Avoiding these unused phi functions is left for future work.
Dead Code Elimination - Aggressive (--eliminate-dead-code-aggressive)
This pass, also known as ADCE, detects and deletes instructions in a function that are not used in computing any output value from that function. It does this by marking as live all the function’s output instructions, that is, all instructions that directly make changes outside the scope of the function. It then iteratively marks as live all instructions that these instructions use until no more instructions are marked live. All remaining instructions are dead and can be deleted. For example, consider the following sequence with dead variable %dv:
```
...
%17 = OpLoad %v4float %Dead
%18 = OpExtInst %v4float %1 Sqrt %17
OpStore %dv %18
%19 = OpLoad %v4float %v
OpStore %gl_FragColor %19
OpReturn
```
Assuming that the store to gl_FragColor is the only output instruction, there is no live load of %dv, thus its store instruction (and the instructions that it uses) are not marked live. So they are considered dead and deleted:
```
...
%19 = OpLoad %v4float %v
OpStore %gl_FragColor %19
OpReturn
```
This version of dead code elimination is particularly good at removing dead def-use cycles. One version of these cycles is generated by the local access chain conversion pass. An example of such a def-use cycle is:
```
%19 = OpLoad %s_t %s
%20 = OpCompositeInsert %s_t %18 %19 0
%22 = OpCompositeInsert %s_t %21 %20 1
%24 = OpCompositeInsert %s_t %23 %22 2
OpStore %s %24
```
After insert/extract elimination has removed all extracts, these insert sequences remain. Neither the single-block nor the single-store store/load elimination pass will eliminate the store, and thus the insert sequence remains. However, the ADCE algorithm will detect that this cycle is not used as part of any output computation and will delete it. For this reason, ADCE should be run after insert/extract elimination to remove these dead cycles.
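The mark-live worklist at the heart of this pass can be sketched as follows; the dictionary-based instruction representation is invented for illustration.

```python
# Minimal ADCE sketch: mark output instructions live, then iteratively
# mark everything they use; the rest is dead. 'instrs' maps an id to
# (opcode, [operand ids]); is_output flags instructions with effects
# outside the function.
def adce(instrs, is_output):
    live = set()
    worklist = [i for i in instrs if is_output(i)]
    while worklist:
        i = worklist.pop()
        if i in live:
            continue
        live.add(i)
        for operand in instrs[i][1]:
            if operand in instrs:    # operands defined in this function
                worklist.append(operand)
    return live  # anything not in 'live' can be deleted

# A dead def-use cycle (load/insert/.../store of the same variable) never
# reaches the worklist, since no output instruction uses it, so every
# instruction in the cycle stays unmarked and is deleted.
```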
Common Uniform Elimination (--eliminate-common-uniform)
Loads of uniform values are a potential source of redundant code and thus create an opportunity for additional size reduction.
Often uniform values are packed into composite objects and they are loaded using access chains. This pass first converts uniform access chain loads into loads and extracts. Just as was true for local variables, this form allows loads from the same composite object to be shared and repeated loads to be eliminated, which is done next. If the first load does not dominate all remaining loads, it is hoisted to the nearest dominating block. Finally, common extracts are shared and the repeated extracts are eliminated.
For example, the following uniform loads:
```
%34 = OpAccessChain %_ptr_Uniform_float %u %int_0
%35 = OpLoad %float %34
OpStore %o0 %35
%36 = OpAccessChain %_ptr_Uniform_float %u %int_1
%37 = OpLoad %float %36
OpStore %o1 %37
...
%40 = OpAccessChain %_ptr_Uniform_float %u %int_1
%41 = OpLoad %float %40
OpStore %o2 %41
```
will be converted to:
```
%50 = OpLoad %U_t %u
%51 = OpCompositeExtract %float %50 0
OpStore %o0 %51
%52 = OpCompositeExtract %float %50 1
OpStore %o1 %52
...
OpStore %o2 %52
```
Two common forms of uniform loads are the images and samplers used in texture references. However, these loads cannot be moved out of the blocks that contain the texture reference, so this pass does a special traversal just to eliminate such common uniform loads within a single block.
This pass is fairly independent from the other passes. It does not depend on any of them and they do not depend on it.
Limitations
Besides kernels and physical addressing, there are a few other features that are not currently supported and will cause these passes to return silently without making changes.
Most passes currently do not support the extension KHR_variable_pointers. While less of an effort to support, OpGroupDecorate is also currently unsupported by most passes, and --convert-local-access-chains and --eliminate-common-uniform do not support modules that contain non-32-bit integers.
These limitations exist because of cost/benefit calculation and the desire to make these passes available as soon as possible for the restricted functionality. Support may be added in the future as priorities change.
Testing and Issues
While reasonable effort has been made to test this code, bugs are possible, and the quality of this code depends on users trying it and reporting issues. Issues can be reported through the project’s issue tracker.
Future Work
A version of inlining has been requested which only inlines very small functions and functions that are called only once. This assures that the final size of the function is no larger than the original size.
Several other optimizations could be beneficial in reducing SPIR-V size. Constant folding could allow more constant branches to be detected. Common Subexpression Elimination could be beneficial for some shaders. These are left to future work.
At some point we should subsume spirv-remap’s module-level dead type and dead function elimination into spirv-opt.
Acknowledgements
Thanks to Dan Ginsburg and Valve for their support and assistance. Thanks also to David Neto and his team at Google for the spirv-opt infrastructure, which gave this effort a good head start, and for their thorough and detailed reviews of this code. Thanks also to Dan Baker at Oxide Games for assistance with sample shaders.
AN OVERVIEW OF FRAGMENTATION DESIGN FOR DISTRIBUTED XML DATABASES
Kok-Leong Koong, Su-Cheng Haw and Lay-Ki Soon
Faculty of Computing and Informatics,
Multimedia University, 63100 Cyberjaya, Malaysia
ABSTRACT
XML is a standard for data exchange between web applications such as e-commerce, e-learning and other web portals. The volume of data on the web has grown substantially, and to store and retrieve these data effectively it is recommended that they be physically or virtually fragmented and distributed across different nodes. Fragmentation design consists of two parts: the fragmentation operation and the fragmentation method. There are three kinds of fragmentation operation, Horizontal, Vertical and Hybrid, which determine how the XML should be fragmented. The aim of this paper is to give an overview of fragmentation design considerations.
KEYWORDS
XML Database, Distributed Design, Fragmentation Distributed XML
1. INTRODUCTION
XML is a semi-structured, self-describing and human-readable document format. A native XML document is stored as plain text and can thus be easily processed by any application or system. XML and HTML are both subsets of the Standard Generalized Markup Language (SGML) [1], and HTML is commonly used in the web environment, which makes XML a good option for data exchange on the web. XML has therefore become a standard for data exchange between applications and systems, and it is used extensively on the web and for data exchange between web applications. Owing to its nature, XML is also commonly used in standalone applications to store metadata or application data.
The emergence of the smartphone and tablet market has generated a large volume of data that grows exponentially every minute; this gigantic volume of data has been named Big Data. The cohesiveness of these data is low, as items may or may not be related to each other, which makes XML a good choice for handling them. However, such large volumes of data can only be stored and retrieved effectively in a distributed model, which can make use of parallel processing.
There are four main advantages to distributing a large database. First, a distributed system may require multiple ordinary computer systems rather than one very high-specification system, lowering cost while sustaining high performance. Secondly, it increases scalability: there is always a limit to how far a database can expand within a single computer system, but a database designed to be distributed can expand beyond one machine. Thirdly, it increases availability: a distributed database is normally replicated, which makes it more resistant to the failure of a single computer system [2]. Finally, it increases performance, since parallel processing is used to store and retrieve data [3]. Distributed database design normally includes three basic steps: fragmentation, allocation and replication [4]. The focus of this paper is on fragmentation, the process of dividing a database into smaller fragments. Fragmentation involves two steps: determining the fragmentation model to be used and selecting a method or algorithm for the fragmentation. The first step determines the structure or model of fragmentation, which can be horizontal, vertical or mixed. The second step determines how the data should be split into fragments; this is sometimes referred to as the fragmentation method or technique.
The rest of the paper is organized as follows. Section 2 outlines the factors driving database distribution. Section 3 gives an overview of fragmentation models, followed by Section 4, which discusses fragmentation methods. Section 5 presents our discussion. Finally, Section 6 concludes the paper.
2. FACTORS DRIVEN TO DISTRIBUTION
Main driving forces for having a distributed database include:
- Lower costs: distributed architectures made of hundreds of PC computers prove to be much cheaper, and even more powerful, than mainframe systems serving hundreds of terminals.
- Increased scalability: adding a new network node is the easiest way to respond to the extensibility needs of the company.
- Increased availability: by replicating data over several network nodes, data becomes closer to the end user and more resistant to system failures.
3. FRAGMENTATION MODEL
3.1. Fragmentation Model for Traditional Databases
There are three basic types of fragmentation models in traditional databases such as relational and object-oriented databases: horizontal, vertical and mixed [5].
In a relational database, horizontal fragmentation refers to fragmenting the database at the record, row or tuple level [3, 6]. To illustrate, assume a simple relational database whose records contain the following fields: name, gender, address, phone, income and tax_id, with 56,000 records stored in a single table describing 56,000 persons (Table 1). A simple horizontal fragmentation might result in the first node storing the first 28,000 records and the second node storing the remaining 28,000 records. The structure of the fragmentation will look similar to Figure 1.
Table 1. Sample data of Person Table
| name | gender | address | phone | income | tax_id |
|------|--------|---------|-------|--------|--------|
| xxx  | x      | xxxx    | xxxx  | xxxx   | xxxx   |
| yyy  | y      | yyyy    | yyyy  | yyyy   | yyyy   |
| ...  | ...    | ...     | ...   | ...    | ...    |
| ooo  | o      | oooo    | oooo  | oooo   | oooo   |
On the other hand, vertical fragmentation refers to fragmenting the database by grouping fields or attributes of records. Using the relational database shown in Table 1, a vertical fragmentation might group the name, gender, address and phone fields on the first node, while storing income and tax_id on the second node. The fragmentation structure will look similar to Figure 2.
Mixed or hybrid fragmentation is a combination of horizontal and vertical fragmentation. The database can be split horizontally then vertically, or vice versa. Using the same relational example, a mixed fragmentation could first split the records horizontally by income level, and then split each group vertically, storing name, gender, address and phone on one node and the remaining fields, income and tax_id, on another node.
An object-oriented database is quite different from a relational database: data is stored in object form and can be illustrated in a hierarchical or tree format. Fragmentation in object-oriented databases has increased complexity due to the hierarchical structure and the methods and properties within an object [7]. In terms of structure, XML is quite similar to an object-oriented database. Object-oriented databases share the same fragmentation models as relational databases, aside from the added complications, and can be fragmented horizontally, vertically or in a mixed fashion. Figure 3 and Figure 4 show how an object-oriented database can be fragmented into the horizontal and vertical models respectively.
3.2. Fragmentation Model for XML Databases
In general, there are only three types of fragmentation models in XML distributed design: horizontal, vertical and mixed [6]. Since relational and object-oriented databases developed distributed designs earlier than XML, the fundamental concepts of XML fragmentation also draw on these two database types. XML was initially introduced to run on a single machine; however, as data sizes have grown substantially, XML needs to be in distributed form to achieve better performance in retrieving and storing the data.
Generally, the fragmentation models can be broadly classified into Horizontal, Vertical and Hybrid. The following subsections briefly explain each model.
3.2.1. Horizontal Fragmentation
In XML, horizontal fragmentation can be achieved by selection, based on predefined conditions for splitting the fragments. A horizontal fragment \( f_i \) is determined by the selection operator \( \sigma \) with predicate \( p \) over a collection of elements \( E \) in a homogeneous XML document, written \( f_i = E(\sigma_p) \). Assume we have an XML document constructed according to the relational database in the previous section. If the simple selection predicate \( p_1 \) over /person/employee/income selects income less than or equal to 5000, and \( p_2 \) selects income greater than 5000, the fragments are written \( f_1 = E(\sigma_{p_1}) \) and \( f_2 = E(\sigma_{p_2}) \).
From Figure 5, the employee elements with the names Wong Wei Wei and Lee Jia Fong will then be split off and stored as a new XML document in node 1 as the first fragment, and the remaining employee elements will be stored in node 2 as a new XML document.
After the operation, node 1 and node 2 may have a DTD like <!DOCTYPE person (employee*)> and <!ELEMENT employee (name, gender, contacts, income, tax_id)>.
Horizontal fragmentation is recommended when the query criteria are based on a particular attribute used as the selection predicate for fragmenting the XML database. In this scenario, horizontal fragmentation may reduce transportation cost and processing time, since the data’s location is determined by a specific distributed node. Moreover, horizontal fragmentation can easily transport data between sites to improve system performance [8].
Using the same XML database as an example, we use /person/employee/income as the selection predicate attribute to fragment the database horizontally. Assume the database has been fragmented into 5 nodes with income level as the selection predicate, using the categories 0-999, 1000-1999, 2000-2999, 3000-3999, and 4000 and above. A query searching for the details of a person with income level 3000-3999 can be answered by querying only the fourth distributed node, so the query locates the data in minimum time and retrieves it with the least processing time.
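As a minimal sketch of this operation, the following Python fragment uses the standard xml.etree.ElementTree module; the element names follow the running example, and the bracket boundaries are the five categories above.

```python
# Sketch of predicate-based horizontal fragmentation with the standard
# library; brackets are the five income categories above.
import xml.etree.ElementTree as ET

def fragment_horizontally(doc, brackets):
    """Split <person> into one fragment per (lo, hi) income bracket."""
    fragments = []
    for lo, hi in brackets:
        frag = ET.Element("person")
        for emp in doc.findall("employee"):
            income = float(emp.findtext("income"))
            if lo <= income <= hi:   # the selection predicate p_i
                frag.append(emp)     # (a real split would also remove
        fragments.append(frag)       #  the element from the source doc)
    return fragments

doc = ET.fromstring(
    "<person>"
    "<employee><name>Wong Wei Wei</name><income>3500</income></employee>"
    "<employee><name>Lee Jia Fong</name><income>800</income></employee>"
    "</person>")
brackets = [(0, 999), (1000, 1999), (2000, 2999), (3000, 3999),
            (4000, float("inf"))]
nodes = fragment_horizontally(doc, brackets)
# A query for income level 3000-3999 now touches only nodes[3].
```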
3.2.2. Vertical Fragmentation
Vertical fragmentation can be achieved by projection, which splits the data structure into smaller parts: selected child elements are split off and stored as a fragment on another node. A vertical fragment \( f_i \) is determined by the projection operator \( \pi \) with path selection \( \rho \) over a collection of elements \( E \) in a homogeneous XML document, written \( f_i = E(\pi_{\rho}) \). If the path selection \( \rho \) is /employee/contact, all child elements under this tree path are split off and stored on another node. In this case, the fragment \( f_1 = E(\pi_{\rho}) \) represents all contact elements in the XML document, which are split off and stored in node 2, while the remaining elements are stored in node 1.
After the operation, node 1 may have a DTD like `<!DOCTYPE person (employee*)>` and `<!ELEMENT employee (name, gender, income, tax_id)>`, and node 2 may have a DTD like `<!DOCTYPE contacts (contact*)>` and `<!ELEMENT contact (address, phone)>`. To create a reference link between the two nodes, at least one reference attribute is required so that an element can refer back to elements residing on the other node or site [9].
Vertical fragmentation is a kind of affinity-based fragmentation. As opposed to horizontal fragmentation, it does not encourage transportation of data from node to node, trading flexibility for affinity [8].
Assume a particular employee’s data is needed, with a phone contact provided as the search criterion. First, the contact elements in node 2 that match the criterion are searched. If an entry is found, its reference attribute is used to access the employee data in node 1.
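A corresponding sketch of the projection, where the contact subtrees move to a second document and a reference attribute (here named ref, an invented name) links the two nodes:

```python
# Sketch of projection-based vertical fragmentation: contact subtrees are
# moved to node 2 and linked back via an invented 'ref' attribute [9].
import xml.etree.ElementTree as ET

def fragment_vertically(doc):
    node1 = ET.Element("person")    # employees minus their contact subtrees
    node2 = ET.Element("contacts")  # the projected /employee/contact subtrees
    for i, emp in enumerate(doc.findall("employee")):
        emp.set("ref", str(i))
        contact = emp.find("contact")
        if contact is not None:
            emp.remove(contact)
            contact.set("ref", str(i))  # reference link back to the employee
            node2.append(contact)
        node1.append(emp)
    return node1, node2

# A phone-number query searches node2 first, then follows the matching
# contact's ref attribute to the employee element in node1.
```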
3.2.3. Hybrid Fragmentation
Hybrid fragmentation, sometimes also referred to as mixed fragmentation, uses both horizontal and vertical fragmentation, taking advantage of both models. It operates by first splitting the document into horizontal fragments and then fragmenting those further using vertical fragmentation.
A hybrid fragment $f_i$ is determined by the horizontal and vertical fragmentations applied, and depends on how the hybrid model is implemented on the XML document: it can be split horizontally then vertically, or vice versa. Assume the split is horizontal first: fragment the document horizontally into fragments $f_x = E(\sigma_{p_i})$, and then fragment each $f_x$ vertically, so that the hybrid fragment is $f_i = f_x(\pi_{\rho_i})$.
Assume we use income level as the selection condition for the horizontal fragmentation, and fragment further vertically with the path /employee/contact as in the previous examples. There will be 4 hybrid fragments generated for 4 nodes.
After the operation, node 1 and node 2 may have a DTD like `<!DOCTYPE person (employee*)>` and `<!ELEMENT employee (name, gender, income, tax_id)>`, while node 3 and node 4 may have a DTD like `<!DOCTYPE contacts (contact*)>` and `<!ELEMENT contact (address, phone)>`. The result looks exactly like vertical fragmentation, since the final operation is vertical; however, each node contains only two records instead of the four records produced by vertical fragmentation alone.
Hybrid fragmentation combines horizontal and vertical fragmentation and gains the advantages of both. In the above scenario, a search can be limited to a particular income level; at the same time, data can be obtained from the vertical fragments via the contact element and then, through the reference link, the particular employee. A minimal composition of the two sketches above is shown below.
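```python
# Hybrid fragmentation as the composition of the two sketches above:
# horizontal first (by income bracket), then vertical on each fragment.
def fragment_hybrid(doc, brackets):
    return [fragment_vertically(frag)
            for frag in fragment_horizontally(doc, brackets)]
```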
4. FRAGMENTATION METHODS
A fragmentation model only defines the fragmentation structure in a distributed design; a fragmentation method is required to determine how the data should be fragmented (horizontally, vertically or hybrid). Arbitrarily cutting a document into horizontal, vertical or hybrid fragments will not necessarily improve query performance, so several fragmentation methods have been introduced. Each proposed method has its own advantages and disadvantages in different scenarios. These methods can be grouped into four categories: structure and size, query and cost, predicates, and holes and fillers (for streamed data).
4.1. Structure and Size
XML documents can be fragmented based on the structure and size of the document. Structural information about the document can be obtained from the document schemata (DTD or XML Schema), or by traversing the XML document. An advantage of this fragmentation method is that it balances the load on the sites’ or nodes’ processing power, which leads to more effective use of resources and improved query performance.
The skewed query processing problem is a well-known problem in distributed design: it indicates an imbalance in the load on a particular distributed node relative to other nodes. This fragmentation method can mitigate the skewed query processing problem, since fragments are distributed evenly according to the structure and size of the document.
To fragment documents using the structure and size method, the document must first be parsed, i.e. mapped into a tree structure. The parser is either tree-based or event-based. A tree-based parser may consume considerable memory, as it traverses the whole document and keeps all nodes and their relationships in memory; DOM is a tree-based parser. An event-based parser, on the other hand, consumes less memory: it does not construct a large tree in memory, as it only scans particular element, attribute and content sequences in the XML document [10].
In the structure and size method, an event-based parser is used to construct the vertex/node list and structural information. After obtaining this information, the document is fragmented accordingly.
A typical example of using this method for horizontal fragmentation is to set a threshold size for each fragment and then traverse the XML document, measuring the size of each single-level child node horizontally. If the size of a child node, including its descendants, is smaller than the threshold, the traversal continues with the next sibling, and so on, until the threshold size is reached. These child nodes are then created as a fragment and stored on a distributed node or site, as illustrated in Figure 6. This scenario is vulnerable to the skewed query processing problem if the load on a particular fragment is much higher than on the others. A sketch of this traversal follows.
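```python
# Sketch of size-threshold horizontal fragmentation over a parsed tree;
# subtree "size" is approximated here by a node count (an assumption;
# byte counts would work equally well).
import xml.etree.ElementTree as ET

def subtree_size(elem):
    return 1 + sum(subtree_size(child) for child in elem)

def fragment_by_size(root, threshold):
    fragments, current, size = [], ET.Element(root.tag), 0
    for child in list(root):
        s = subtree_size(child)
        if size and size + s > threshold:  # close the current fragment
            fragments.append(current)
            current, size = ET.Element(root.tag), 0
        current.append(child)
        size += s
    if len(current):                       # flush the last fragment
        fragments.append(current)
    return fragments
```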
Angela et al. proposed a simple top-down heuristic fragmentation method called SimpleX [11]. To fragment a document using this method, three criteria must be determined beforehand: a tree-width constraint, a tree-depth constraint and a tree-size constraint, which restrict the size of each fragment. Fragments are determined while traversing from the root element down to the leaf elements (top-down); a fragment is decided upon when a subtree fulfils the tree-size, tree-width and tree-depth constraints. A structure histogram is then constructed to evaluate how efficient the generated fragmentation is.
4.2. Query and Cost
XML documents can also be fragmented based on XML queries. The most common criteria for determining fragments with this method are query frequency and query cost.
Leykun et al. proposed a vertical fragmentation model based on queries. In their approach, several components must be set up for the fragmentation: the most frequently used queries with their frequencies, an Element Usage Matrix (EUM) and an Element Affinity Matrix (EAM) [12].
This method analyzes the total data accesses in the distributed system to determine the most frequently used queries and their frequencies. A matrix (the EUM) is then constructed from the elements accessed by those queries. After the EUM, the EAM is constructed, illustrating the relationships between elements across the requested queries. Finally, a Grouping Heuristic Module is used to group elements, and a Splitting Heuristic Module determines the fragmentation points.
Ma et al., however, proposed a heuristic method to effectively fragment the XML document under the horizontal fragmentation model. The method has four steps. First, a horizontal fragmentation is constructed based on simple selection predicates. A query tree or query plan is then built on this distribution design. From the query tree, the total query execution cost is determined; the query cost is the sum of storage costs and transportation costs, where storage cost measures the time to retrieve data from secondary storage and transportation cost measures the traversal time over XML documents at different sites. Finally, the minimum total query cost determines how the document should be fragmented [13, 14]. Sven et al. proposed a simplified cost model that works similarly: the query processing cost model is based on size estimation of query results and query processing costs to determine the fragmentation of the XML document [8].
4.3. Predicates
Predicates are commonly used in the horizontal fragmentation model. There are two types: simple selection predicates and normal selection predicates.
The simple selection predicate takes the form $path \, \theta \, v$, where $\theta$ is a comparison operator from the set $\{<, >, =, \neq, \leq, \geq, \dots\}$, $path$ is a path expression in XML and $v$ is the value [15].
Predicates in relational databases differ from those in XML: in a relational database, a predicate refers to the value of a field, whereas in XML it is indicated by a path expression. In the earlier example, the relational predicate income >= 5000 is expressed in XML as /person/employee/income >= 5000.
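A tiny evaluator for such a predicate might look as follows; treating the path as a relative ElementTree path is an implementation convenience, not part of the definition.

```python
# Tiny evaluator for a simple selection predicate "path theta v".
import operator

OPS = {"<": operator.lt, ">": operator.gt, "=": operator.eq,
       "!=": operator.ne, "<=": operator.le, ">=": operator.ge}

def satisfies(element, path, theta, v):
    text = element.findtext(path)   # element is an ElementTree Element
    return text is not None and OPS[theta](float(text), v)

# satisfies(employee, "income", ">=", 5000) mirrors the predicate
# /person/employee/income >= 5000 from the example above.
```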
4.4. Holes and Fillers
Holes and fillers is a fragmentation method used in ad-hoc fragmentation, a fragmentation model for streamed data. It does not require a document schema for fragmentation. Fragments in this model are marked with special identifiers for later reconstruction [4].
XFrag is the framework used in the holes and fillers method. In this method, the original document is broken into smaller parts (fillers), and each filler may contain one or more holes, special tags containing the ID of the corresponding filler.
5. DISCUSSION
The structure and size fragmentation method fragments a document according to the defined structure and size of the XML document. Its advantage is that it distributes content evenly across the distributed platform; however, this does not guarantee effective query processing response times. The skewed query processing problem is common with this method when query processing concentrates on a particular site or distributed node.
The query and cost method is the most efficient, but its fragmentation cost is higher than that of the other two methods.
The simple selection predicate is the most fundamental fragmentation method. It works well for fragmenting a large XML document into smaller pieces, reducing search time and processing power on the large document; however, it is not as efficient as the query-based methods.
6. CONCLUSIONS
Different fragmentation models and fragmentation methods have their pros and cons. Heuristics, however, can improve query performance by studying the usage of the distributed XML database: a horizontal fragmentation based on the simple selection predicate method can be improved by studying query costs, creating a better fragmentation that greatly optimizes query performance [14]. Another example of optimizing performance with a top-down heuristic is SimpleX, in the structure and size category [11].
With XML becoming the dominant standard for data exchange between various systems and databases on the Web, distributed XML is becoming crucial. In this paper we have reviewed the types of fragmentation operations and fragmentation methods, and we have suggested a grouping for the fragmentation methods.
ACKNOWLEDGEMENTS
This work is supported by funding from the Fundamental Research Grant Scheme of the Ministry of Higher Education (MOHE).
REFERENCES
AUTHORS
Kok-Leong Koong received his Bachelor in Computer Science and Master in Business Administration from the University of Central Oklahoma, U.S.A. in 1995. He is currently a lecturer in the Department of Information Sciences and Computing Studies at New Era University College. His major research areas are XML databases, e-commerce, web applications and computer networks.
Associate Professor Dr. Su-Cheng Haw’s research interests are in XML Databases and instance storage, Query processing and optimization, Data Modeling and Design, Data Management, Data Semantic, Constraints & Dependencies, Data Warehouse, E-Commerce and Web services.
Dr. Lay-Ki Soon received her Ph.D in Engineering (Web Engineering) from Soongsil University Korea in 2009. She is currently a Senior Lecturer in Faculty of Computing and Informatics, Multimedia University. Her research interests relate to Web science, which include Web crawling, Web data mining and social network analysis. She is involved in numerous research projects funded by Malaysian government and also Japan International Cooperation Agency (JICA).
Bottom-up Parsing
Basic Idea:
- Scan the input string from left to right.
- Try to construct a parse tree starting at the bottom (i.e., the leaves) and working towards the root.
Shift-reduce parsing:
Basic Idea: Apply a sequence of “reductions” to transform the input string to the start symbol of the grammar.
reduction: replace a substring matching the RHS of a production by the LHS.
Example: Consider the grammar
\[
S \rightarrow aABe \\
A \rightarrow Abc \\
A \rightarrow b \\
B \rightarrow d
\]
Input:
\[
abbcde \;\leadsto\; aAbcde \;\leadsto\; aAde \;\leadsto\; aABe \;\leadsto\; S
\]
\]
Handles
Intuition: A handle of a string \( s \) is a substring \( \alpha \) s.t.:
1. \( \alpha \) matches the RHS of a production \( A \rightarrow \alpha \); and
2. replacing \( \alpha \) by the LHS \( A \) represents a step in the reverse of a rightmost derivation of \( s \).
Example: Consider the grammar
\[ S \rightarrow aABe \\
A \rightarrow Abc \mid b \\
B \rightarrow d \]
The rightmost derivation for the input `abbcde` is:
\[ S \Rightarrow aABe \Rightarrow aAde \Rightarrow aAbcde \Rightarrow abbcde. \]
The right-sentential form `aAbcde` can be reduced in two ways:
1. `aAbcde` \(\leadsto\) `aAde`; and
2. `aAbcde` \(\leadsto\) `aAbcBe`.
But (2) does not reverse a step of a rightmost derivation, so `Abc` (at that position) is the only handle.
Handles: cont’d
Definition: A handle of a right-sentential form \(\gamma\) is
1. a production \(A \rightarrow \beta\), and
2. a position in \(\gamma\) where \(\beta\) may be found and replaced by \(A\) to produce the previous sentential form in a rightmost derivation of \(\gamma\).

The handle \(A \rightarrow \beta\) in \(\alpha\beta\omega\)
Stack Implementation of Shift-Reduce Parsing:
Data Structures:
- the stack, its bottom marked by \$, initially empty.
- the input string, its right end marked by \$, initially $w$.
Action:
repeat
1. Shift zero or more input symbols onto the stack, until a handle $\beta$ is on the top of the stack.
2. Reduce $\beta$ to the LHS of the appropriate production.
until ready to accept.
Acceptance: When the stack contains the start symbol and the input is empty.
Example:
Grammar: $S \rightarrow aABe$
$A \rightarrow Abc | b$
$B \rightarrow d$
Input string: abbcde
| Stack | Input | Action |
|-------|-------|--------|
| \$ | abbcde\$ | shift |
| \$a | bbcde\$ | shift |
| \$ab | bcde\$ | reduce by $A \rightarrow b$ |
| \$aA | bcde\$ | shift |
| \$aAb | cde\$ | shift |
| \$aAbc | de\$ | reduce by $A \rightarrow Abc$ |
| \$aA | de\$ | shift |
| \$aAd | e\$ | reduce by $B \rightarrow d$ |
| \$aAB | e\$ | shift |
| \$aABe | \$ | reduce by $S \rightarrow aABe$ |
| \$S | \$ | accept |
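The trace above can be reproduced mechanically. The sketch below is a naive Python shift-reduce parser for this grammar; the greedy handle choice (including the guard that stops `b` from being reduced inside an `Abc` handle) is hand-tuned to this grammar and is not a general algorithm. LR parsing, developed below, is the general method.

```python
# Naive shift-reduce parser for: S -> aABe, A -> Abc | b, B -> d.
# Greedily reduces a matching RHS on top of the stack, else shifts.
PRODUCTIONS = [("S", "aABe"), ("A", "Abc"), ("A", "b"), ("B", "d")]

def parse(tokens):
    stack, rest = [], list(tokens)
    while True:
        for lhs, rhs in PRODUCTIONS:
            if "".join(stack[-len(rhs):]) == rhs:
                # Don't reduce b -> A when this b continues an Abc handle.
                if rhs == "b" and stack[-2:-1] == ["A"]:
                    continue
                print(f"reduce {lhs} -> {rhs}:  {''.join(stack)} | {''.join(rest)}")
                del stack[-len(rhs):]
                stack.append(lhs)
                break
        else:
            if not rest:
                return stack == ["S"]      # accept iff only S remains
            stack.append(rest.pop(0))
            print(f"shift:             {''.join(stack)} | {''.join(rest)}")

print(parse("abbcde"))   # True, printing the same trace as the table
```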
Conflicts during Shift-Reduce Parsing:
1. Can’t decide whether to shift or to reduce ("shift-reduce conflict").
Example: “dangling else”:
\[
\text{Stmt} \rightarrow \text{if } \text{Expr} \text{ then } \text{Stmt} \mid \\
\text{if } \text{Expr} \text{ then } \text{Stmt else} \text{Stmt} \mid \ldots
\]
2. Can’t decide which of several possible reductions to make ("reduce-reduce conflict").
Example:
\[
\text{Stmt} \rightarrow \text{id (params)} \mid \text{Expr := Expr} \mid \ldots
\]
\[
\text{Expr} \rightarrow \text{id (params)}
\]
Given the input \(A(I, J)\) the parser doesn’t know whether it’s a procedure call or an array reference.
---
LR Parsing
- Bottom-up.
- LR(k) parser:
- Scans the input L-to-R.
- Produces a Rightmost derivation.
- Uses \(k\)-symbol lookahead.
**Advantages**:
- Very general and flexible.
- Efficiently implemented.
- Parses a large class of grammars.
- Detects errors as soon as possible.
**Disadvantages**:
- Difficult to implement by hand for typical programming language grammars.
(Use tools such as yacc or bison.)
---
**Schematic of an LR Parser**:

- The driver program is the same for all LR parsers (SLR(1), LALR(1), LR(1), ...): only the parsing table changes.
The stack holds strings of the form
\[ s_0 X_1 s_1 X_2 s_2 \cdots X_m s_m \]
where \( s_m \) is on top, the \( s_i \) are “states”, and \( X_i \) are grammar symbols.
The configuration of an LR parser is given by a pair (stack contents, unexpanded input).
A configuration \( (s_0 X_1 s_1 \cdots X_m s_m, \ a_i a_{i+1} \cdots a_n) \) represents the right-sentential form
\[ X_1 \cdots X_m a_i a_{i+1} \cdots a_n \]
The sequence of symbols \( X_1 \cdots X_m \) on the parser stack is called a viable prefix of the right-sentential form.
---
**LR Parse Tables**
- The parsing table consists of two parts: a parsing action function, and a goto function.
- For a given configuration of the parser, the next move is determined by the parse table entry
\[ \text{action}(s_m, a_i) \]
where \( s_m \) is the topmost state on the stack, and \( a_i \) is the next input symbol.
- An action table entry can be of four types:
1. shift \( s \), where \( s \) is a state.
2. reduce by a grammar production \( A \rightarrow \beta \).
3. accept
4. error
LR Parsing: cont’d
Suppose the parser configuration is
\[
(s_0 X_1 s_1 \cdots X_m s_m,\ a_i a_{i+1} \cdots a_n \$).
\]
- if \( \text{action}[s_m, a_i] = \text{shift } s \), the parser executes a shift move, pushing \(a_i\) and the state \(s\). The new configuration is
\[
(s_0 X_1 s_1 \cdots X_m s_m\, a_i\, s,\ a_{i+1} \cdots a_n \$).
\]
- if \( \text{action}[s_m, a_i] = \text{reduce } A \rightarrow \beta \), the parser executes a reduce move. The new configuration is
\[
(s_0 X_1 s_1 \cdots X_{m-r} s_{m-r}\, A\, s,\ a_i \cdots a_n \$)
\]
where
  - \( r \) = length of \( \beta \); and
  - \( s = \text{goto}[s_{m-r}, A] \).
- if \( \text{action}[s_m, a_i] = \text{accept} \), parsing is done.
- if \( \text{action}[s_m, a_i] = \text{error} \), the parser calls an error recovery routine.
5.2. Finite Automata to recognize Viable Prefixes
**Definition**: An LR(0) *item* of a grammar $G$ is a production of $G$ with a dot ‘$\cdot$’ added at some position in the RHS.
**Example**: The production $A \rightarrow aAb$ gives the items
- $A \rightarrow \cdot aAb$
- $A \rightarrow a\cdot Ab$
- $A \rightarrow aA\cdot b$
- $A \rightarrow aAb\cdot$
**Intuition**: An item $A \rightarrow \alpha \cdot \beta$ denotes:
- we have seen a string derivable from $\alpha$; and
- we hope to see a string derivable from $\beta$.
---
**Overall Goal**: Given a grammar with start symbol $S$,
- Construct an *augmented grammar* by adding a new start symbol $S'$ and production $S' \rightarrow S$;
- Starting with the item $S' \rightarrow \cdot S$, recognize viable prefixes, accepting on the item $S' \rightarrow S\cdot$.
Viable Prefix DFA
1. closure:
Definition: If I is a set of items for a grammar G, then closure(I) is the set of items constructed as follows:
repeat
1. add every item in I to closure(I);
2. if A → α·Bβ is in closure(I) and B → γ is a production of G, then add B → ·γ to closure(I).
until no new item can be added to closure(I).
Intuition: If A → α·Bβ is in closure(I) then we hope to see a string derivable from B in the input. So if B → γ is a production of G, then we should hope to see a string derivable from γ in the input. Hence, B → ·γ is in closure(I).
Viable Prefix DFA – cont’d:
2. goto:
Definition: If I is a set of items for a grammar G and X a grammar symbol, then goto(I, X) is the set of items
\[
\text{closure}\left(\{A \rightarrow \alpha X \cdot \beta \mid A \rightarrow \alpha \cdot X \beta \in I\}\right).
\]
Intuition:
- A set of items I corresponds to a state.
- If A → α.Xβ ∈ I then
- we’ve seen a string derivable from α; and
- we hope to see a string derivable from Xβ;
• now suppose we see a string derivable from $X$; the resulting state should be one in which:
– we’ve seen a string derivable from $\alpha X$; and
– we hope to see a string derivable from $\beta$;
• The item corresponding to this is $A \rightarrow \alpha X \cdot \beta$.
---
**Constructing the Viable Prefix DFA for LR(0) Items**
- Given a grammar $G$ with start symbol $S$, construct the augmented grammar by adding a special production
$S' \rightarrow S$
where $S'$ does not appear in $G$.
- Algorithm for constructing the canonical collection of LR(0) items for an augmented grammar $G'$:
```
begin
  C := { closure({S' → ·S}) };
  repeat
    for each set of items I ∈ C do
      for each grammar symbol X do
        if goto(I, X) ≠ ∅ then
          add goto(I, X) to C;
        fi
      od
    od
  until no new set of items can be added to C;
  return C;
end
```
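These definitions translate almost directly into code. Below is a compact Python sketch for the running grammar; encoding an item as a (lhs, rhs, dot) triple, using single-character symbols, and writing the augmented start symbol as "Z" are simplifications of the sketch, not part of the notes.

```python
# LR(0) items encoded as (lhs, rhs, dot); "Z" plays the role of S'.
GRAMMAR = {"Z": ["S"], "S": ["aABe"], "A": ["Abc", "b"], "B": ["d"]}
NONTERMINALS = set(GRAMMAR)

def closure(items):
    """Add B -> .gamma for every item A -> alpha . B beta in the set."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(items):
            if dot < len(rhs) and rhs[dot] in NONTERMINALS:
                for gamma in GRAMMAR[rhs[dot]]:
                    if (rhs[dot], gamma, 0) not in items:
                        items.add((rhs[dot], gamma, 0))
                        changed = True
    return frozenset(items)

def goto(items, X):
    """Advance the dot over X, then take the closure."""
    moved = {(lhs, rhs, dot + 1) for (lhs, rhs, dot) in items
             if dot < len(rhs) and rhs[dot] == X}
    return closure(moved) if moved else frozenset()

def canonical_collection():
    """The sets-of-items construction from the algorithm above."""
    symbols = {s for rhss in GRAMMAR.values() for rhs in rhss
               for s in rhs} | NONTERMINALS
    C = {closure({("Z", "S", 0)})}
    work = list(C)
    while work:
        I = work.pop()
        for X in symbols:
            J = goto(I, X)
            if J and J not in C:
                C.add(J)
                work.append(J)
    return C

print(len(canonical_collection()), "LR(0) item sets")
```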
5.3. Constructing an SLR(1) Parse Table
1. Given a grammar \( G \), construct the augmented grammar \( G' \) by adding the production \( S' \rightarrow S \).
2. Construct \( C = \{I_0, \ldots, I_n\} \), the set of states of the viable prefix DFA for \( G' \).
3. State \( i \) is constructed from \( I_i \), with parsing action determined as follows:
(a) \( A \rightarrow \alpha \cdot a \beta \in I_i \), \( a \) a terminal, \( goto(I_i, a) = I_j \): set \( action[i, a] = \text{shift } j \).
(b) \( A \rightarrow \alpha \cdot \in I_i, A \neq S' \): for each \( a \in \text{FOLLOW}(A) \), set \( action[i, a] = \text{reduce } A \rightarrow \alpha \).
(c) \( S' \rightarrow S \cdot \in I_i \): set \( action[i, \$] = \text{accept} \).
4. goto transitions are constructed as follows: for each nonterminal \( A \), if \( \text{goto}(I_i, A) = I_j \), then
\( \text{goto}[i, A] = j \).
5. All entries not defined by the above steps are made error.
If there are any multiply defined entries, then \( G \) is not SLR.
6. Initial state of the parser: that constructed from \( I_0 \), the set containing the item \( S' \rightarrow \cdot S \).
The ACTION part of the table is indexed by the terminals id, (, ), + and \$; the GOTO part by the nonterminals $E$ and $T$.

| State | id | ( | ) | + | \$ | $E$ | $T$ |
|-------|----|---|---|---|----|-----|-----|
| S0 | S3 | S4 |  |  |  | 1 | 2 |
| S1 |  |  |  | S7 | acc |  |  |
| S2 |  |  | #3 | #3 | #3 |  |  |
| S3 |  |  | #4 | #4 | #4 |  |  |
| S4 | S3 | S4 |  |  |  | 5 | 2 |
| S5 |  |  | S6 | S7 |  |  |  |
| S6 |  |  | #5 | #5 | #5 |  |  |
| S7 | S3 | S4 |  |  |  |  | 8 |
| S8 |  |  | #2 | #2 | #2 |  |  |

Notation: S$n$ = shift and go to state $n$; #$n$ = reduce by production #$n$; acc = accept; blank entries are errors.
Follow(\( S' \)) = \( \{\$\} \)
Follow(\( E \)) = \( \{+,\ ),\ \$\} \)
Follow(\( T \)) = \( \{+,\ ),\ \$\} \)
\( #1 \ S' \rightarrow E \)
\( #2 \ E \rightarrow E + T \)
\( #3 \ E \rightarrow T \)
\( #4 \ T \rightarrow id \)
\( #5 \ T \rightarrow ( E ) \)
The LR Parsing Algorithm
begin
set ip to point to the first symbol of the input w$;
while TRUE do
let s be the state on top of the stack,
a the symbol pointed at by ip;
if action[s, a] = shift $s'$ then
push a then $s'$ on top of the stack;
advance ip to the next input symbol;
else if action[s, a] = reduce $A \rightarrow \beta$ then
pop $2|\beta|$ symbols off the stack;
let $s'$ be the state now on top of the stack;
push A then goto[$s'$, A] on top of the stack;
else if action[s, a] = accept then return;
else error();
fi
end
| Stack | Input | Action |
|-------|-------|--------|
| $S_0$ | id + id \$ | shift $S_3$ |
| $S_0$ id $S_3$ | + id \$ | reduce by $T \rightarrow \text{id}$ |
| $S_0$ $T$ $S_2$ | + id \$ | reduce by $E \rightarrow T$ |
| $S_0$ $E$ $S_1$ | + id \$ | shift $S_7$ |
| $S_0$ $E$ $S_1$ + $S_7$ | id \$ | shift $S_3$ |
| $S_0$ $E$ $S_1$ + $S_7$ id $S_3$ | \$ | reduce by $T \rightarrow \text{id}$ |
| $S_0$ $E$ $S_1$ + $S_7$ $T$ $S_8$ | \$ | reduce by $E \rightarrow E + T$ |
| $S_0$ $E$ $S_1$ | \$ | accept |
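The driver loop can be run directly against the ACTION/GOTO table constructed above. The Python sketch below hard-codes that table (same state numbering); writing id as the single character "i" is a simplification of the sketch.

```python
# Table-driven LR driver for: S'->E, E->E+T|T, T->id|(E).
# ACTION/GOTO transcribed from the SLR(1) table above; "i" = id.
PRODS = {2: ("E", 3), 3: ("E", 1), 4: ("T", 1), 5: ("T", 3)}  # lhs, |rhs|
ACTION = {
    (0, "i"): ("s", 3), (0, "("): ("s", 4),
    (1, "+"): ("s", 7), (1, "$"): ("acc", 0),
    (2, "+"): ("r", 3), (2, ")"): ("r", 3), (2, "$"): ("r", 3),
    (3, "+"): ("r", 4), (3, ")"): ("r", 4), (3, "$"): ("r", 4),
    (4, "i"): ("s", 3), (4, "("): ("s", 4),
    (5, "+"): ("s", 7), (5, ")"): ("s", 6),
    (6, "+"): ("r", 5), (6, ")"): ("r", 5), (6, "$"): ("r", 5),
    (7, "i"): ("s", 3), (7, "("): ("s", 4),
    (8, "+"): ("r", 2), (8, ")"): ("r", 2), (8, "$"): ("r", 2),
}
GOTO = {(0, "E"): 1, (0, "T"): 2, (4, "E"): 5, (4, "T"): 2, (7, "T"): 8}

def lr_parse(tokens):
    stack, i = [0], 0                      # stack of states only
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            return False                   # blank entry: error
        kind, arg = act
        if kind == "s":                    # shift: push state, advance
            stack.append(arg); i += 1
        elif kind == "r":                  # reduce A -> beta
            lhs, n = PRODS[arg]
            del stack[-n:]
            stack.append(GOTO[(stack[-1], lhs)])
        else:
            return True                    # accept

print(lr_parse("i+i$"))    # True  (reproduces the trace above)
print(lr_parse("i+)$"))    # False
```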
Limitations of SLR Parsing
Cannot handle many “reasonable” grammars, e.g.:
\[
S \rightarrow L = R \mid R \\
L \rightarrow \ast R \mid \text{id} \\
R \rightarrow L
\]
The SLR parse table contains a state
\[
I = \{ S \rightarrow L \cdot = R,\ R \rightarrow L \cdot \}
\]
which causes a shift/reduce conflict on ‘=’, since ‘=’ is in \(\text{FOLLOW}(R)\).
**Problem**: For an input
\[ \ast \text{id} = \text{id} \]
we want to remember enough “left context” after seeing \ast to make the right shift/reduce decision. SLR cannot do this adequately.
5.4. LR(1) Parsing
**Idea**: Extend SLR parsing to incorporate lookahead.
**LR(1) Item**:
- Of the form \([A \rightarrow \alpha \cdot \beta,\ a]\), where \(a\) is a terminal or the endmarker \(\$\).
- The lookahead has no effect on items of the form \([A \rightarrow \alpha \cdot \beta,\ a]\) where \(\beta \neq \varepsilon\).
- For items of the form \([A \rightarrow \alpha \cdot,\ a]\), reduce only if the next input symbol is \(a\).

**Note**: For an item of the form \([A \rightarrow \alpha \cdot,\ a]\), \(a \in \text{FOLLOW}(A)\). But there may be \(b \in \text{FOLLOW}(A)\) for which there is no item \([A \rightarrow \alpha \cdot,\ b]\).
LR(1) Parsing: closure and goto Functions
1. closure(I):
begin
S := I;
repeat
for each item [A → α·Bβ, a] ∈ S,
each production B → γ,
each terminal b ∈ FIRST(βa) do
add [B → ·γ, b] to S;
until no new item can be added to S;
return S;
end
2. goto(I,X):
begin
J := { [A → αX·β, a] | [A → α·Xβ, a] ∈ I };
return closure(J);
end
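The LR(1) closure can be sketched in the same style as the LR(0) sketch earlier, now carrying lookaheads. Items are (lhs, rhs, dot, lookahead) tuples; the grammar is the pointer grammar from the SLR limitations example (with "i" for id and "Z" for $S'$), and the FIRST sets are written out by hand since no symbol here derives $\varepsilon$.

```python
# LR(1) closure sketch; items are (lhs, rhs, dot, lookahead).
GRAMMAR = {"Z": ["S"], "S": ["L=R", "R"], "L": ["*R", "i"], "R": ["L"]}
NT = set(GRAMMAR)
FIRST = {"S": {"*", "i"}, "L": {"*", "i"}, "R": {"*", "i"}}

def first_of(seq, lookahead):
    """FIRST(beta a). No symbol here derives the empty string, so only
    the first symbol (or, for empty beta, the lookahead) matters."""
    if not seq:
        return {lookahead}
    return FIRST[seq[0]] if seq[0] in NT else {seq[0]}

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot, a) in list(items):
            if dot < len(rhs) and rhs[dot] in NT:
                for gamma in GRAMMAR[rhs[dot]]:
                    for b in first_of(rhs[dot + 1:], a):
                        item = (rhs[dot], gamma, 0, b)
                        if item not in items:
                            items.add(item)
                            changed = True
    return frozenset(items)

# The initial state I0 = closure({[Z -> .S, $]}):
for lhs, rhs, dot, a in sorted(closure({("Z", "S", 0, "$")})):
    print(f"[{lhs} -> {rhs[:dot]}.{rhs[dot:]}, {a}]")
```

Note how the L-items appear twice, with lookaheads '=' and '$': exactly the extra context that SLR was missing.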
Constructing the Viable Prefix DFA for LR(1) Items
- Given: An augmented grammar G'.
- Algorithm:
begin
C := { closure({[S' → ·S, $]}) };
repeat
for each set of items I ∈ C do
for each grammar symbol X do
if goto(I,X) ≠ ∅ then
add goto(I,X) to C;
until no new set of items can be added to C;
return C;
- Note: The set of items construction is essentially the same as for the SLR(1) case.
Constructing an LR(1) Parse Table
1. Given a grammar $G$, construct the augmented grammar $G'$ by adding the production $S' \rightarrow S$.
2. Construct $C = \{I_0, \ldots, I_n\}$, the viable prefix DFA for $G'$.
3. State $i$ is constructed from $I_i$, with parsing action determined as follows:
(a) $[A \rightarrow \alpha \cdot a \beta,\ b] \in I_i$, $a$ a terminal, $goto(I_i, a) = I_j$: set $action[i, a] = \text{shift } j$.
(b) $[A \rightarrow \alpha \cdot,\ a] \in I_i$, $A \neq S'$: set $action[i, a] = \text{reduce } A \rightarrow \alpha$.
(c) $[S' \rightarrow S \cdot,\ \$] \in I_i$: set $action[i, \$] = \text{accept}$.
4. goto transitions are constructed as follows: for each nonterminal $A$, if $\text{goto}(I_i, A) = I_j$ then $\text{goto}[i, A] = j$.
5. All entries not defined by the above steps are made \textit{error}.
If there are any multiply defined entries, then $G$ is not LR(1).
6. Initial state of the parser: that constructed from $I_0$, the set containing the item $[S' \rightarrow \cdot S,\ \$]$.
---
**LR(1) vs. SLR(1):**
- LR(1) more powerful, can handle a strictly larger class of grammars than SLR(1).
- The parse tables for LR(1) become very large — may be impractical for realistic grammars.
- A compromise between parsing power and table size that is commonly used is seen in LALR parsers.
An LALR parser can be thought of as an LR(1) parser, some of whose states have been merged into a single state. This can be done in many (but not all) cases without causing problems.
The parsers generated by tools such as yacc and bison are LALR.
5.4.3. LALR(1) Parsing
Observation: Every SLR grammar is an LR(1) grammar, but the LR(1) parser usually has many more states than the SLR parser. Many of these states differ only on the lookahead token. But the lookahead token does not play any role except on reductions.
Definition: The core of a set of LR(1) items $I$ is
$$\text{core}(I) = \{J \mid [J,a] \in I \text{ for some } a\}$$
I.e., core($I$) is the set of first components of $I$.
Example: Suppose
$$I = \{[A \rightarrow e, a], [A \rightarrow e, b], [B \rightarrow e, c]\}$$
Then,
$$\text{core}(I) = \{A \rightarrow e, B \rightarrow e\}$$
Merging sets of LR(1) Items
- If sets of items with the same core are merged, the parser behaves essentially as before. However, some redundant reductions might be done before an error is detected.
- core($\text{goto}(I,X)$) depends only on core($I$), so goto's of merged sets may themselves be merged.
- Suppose we take a set $C_0$ of sets of LR(1) items for a given grammar, and merge those sets of items that have the same core to get a set $C_1$ of sets of LR(1) items.
LR(1) parse table construction using $C_1$ will not introduce any new shift/reduce conflicts compared to $C_0$.
However, this can introduce new reduce/reduce conflicts.
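Continuing the (lhs, rhs, dot, lookahead) encoding from the LR(1) sketch above, the core computation and the merge are a few lines; the example item sets below are invented just to show two states collapsing.

```python
# Merging LR(1) states that share a core (the LALR construction).
def core(items):
    """First components of the LR(1) items: drop the lookahead."""
    return frozenset((lhs, rhs, dot) for (lhs, rhs, dot, _) in items)

def merge_by_core(states):
    """Union together all LR(1) item sets having the same core."""
    merged = {}
    for I in states:
        key = core(I)
        merged[key] = merged.get(key, frozenset()) | I
    return list(merged.values())

# Two states that differ only in lookaheads collapse into one:
I1 = frozenset({("A", "c", 1, "c"), ("A", "c", 1, "d")})
I2 = frozenset({("A", "c", 1, "$")})
print(len(merge_by_core([I1, I2])))   # 1
```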
Example of reduce/reduce conflicts due to merging:
Consider the grammar given by
\[
\begin{align*}
S' & \rightarrow S \\
S & \rightarrow aAd \mid bBd \mid aBe \mid bAe \\
A & \rightarrow c \\
B & \rightarrow c
\end{align*}
\]
Merging the two item sets whose common core is $\{A \rightarrow c\cdot,\ B \rightarrow c\cdot\}$ (reached after $ac$ and after $bc$, with different lookaheads) produces a state containing a reduce/reduce conflict, so the grammar is LR(1) but not LALR(1).
|
The EASE Actor Development Environment
Paul Scerri and Nancy E. Reed
Real-time Systems Laboratory
Department of Computer and Information Science
Linköping University, S-581 83 Linköping, Sweden
pausc, nanre@ida.liu.se
October 1999
Abstract
In interactive simulations it is often desirable to have intelligent actors playing the roles of humans. Drawing on a wide range of previous work this paper presents a system that is intended to reduce some of the difficulties involved in the development of actors. We present a system called EASE (End-user Actor Specification Environment) that provides tools and methods to support end user development of intelligent actors. The tools support the whole development process from design to testing. The EASE actor architecture is a multi-agent system where a process of contract making and negotiation between agents determines the actions of the actor.
1 Introduction
In modern, complex, interactive simulations it is often highly desirable to have intelligent actors playing the roles of humans. The actors’ task is difficult - sensing the (simulated) environment, choosing a course of action that flexibly and intelligently follows designer intentions and sending appropriate commands back to the environment, all in real-time. The actors’ reasoning may need to be very complex taking into account a variety of factors including the current situation, a variety of concurrent, potentially conflicting goals, team members, opponents, previous actions, resource constraints and so on [Tambe et al., 1998b].
In order for the simulation in which the actors are embedded to be useful the actors must usually act in a human-like manner. Often knowledge of precisely how an actor should act will be expert knowledge hence it is desirable to have domain experts, as opposed to actor experts, specifying actor behavior.
A plethora of recent work has resulted in architectures for actors with a wide range of abilities. However, in many cases the actor architectures are difficult to use, support only an ad hoc development process and provide little support for
reuse. Virtually no architecture allows domain experts to directly specify actor behavior. This in turn leads to an expensive, frustrating development process [Jennings et al., 1998, Nwana, 1999, Wooldridge and Jennings, 1998]. It is clearly desirable to improve the process of creating intelligent actors.
Drawing on a wide range of previous work this paper presents a system that is intended to reduce some of the difficulties involved in the development of useful, complex actors. The system, called EASE (End-user Actor Specification Environment), is a group of tools and an associated methodology for the development of complex, intelligent actors. The system provides support for all stages of development from design through testing to reuse. In particular the system is intended to represent the first step toward putting actor development capabilities into the hands of the domain experts. By providing structure and support for a simple, rapid development process along with an accessible actor architecture, EASE provides the basis for end-user development.
1.1 Overview of EASE
Within EASE an actor specification consists of a hierarchy of agents where each agent is responsible for some aspect of the overall actor behavior (see Figure 1). Each agent takes into account only its specific task and is hence fairly simple. Below an agent in the hierarchy are other agents that perform parts of its behavior.
At runtime an actor's specification is turned into a multi-agent system where overall actor behavior is determined by a continuous process of contract making and negotiation between agents. Agents form a hierarchy of contracts then agents at the bottom of the hierarchy negotiate amongst themselves over the actual output of the actor.
On top of the multi-agent actor architecture EASE enforces a methodology for actor development that covers all stages of development, from design through to reuse. EASE also provides tool support for the implementation task which allows completely graphical development, achievable by non-programmers. The tools have been designed to make reuse as simple as possible, primarily by enforcing strict modularity. To make the process of testing as quick and painless as possible integrated tool support exists for quickly inspecting and debugging actors at runtime. The development aids with the EASE system combined with an underlying powerful agent runtime engine allow relatively inexperienced users to create useful actors for complex simulation environments.
1.2 Target Domain
The initial target domain for this system is simulated aircraft pilots. The TACSI air-combat simulator [Aircraft, 1995], developed at Saab AB, is used for both training of human pilots and testing of new systems. In this domain new actor behavior is often required, either to test new systems or to train pilots. It is desirable that the engineers and pilot trainers that actually use the simulator can define the behavior of the pilot actors. These people are well educated and
accustomed to using computers but not necessarily AI/actor experts. Many of the scenarios involved are very similar increasing the desirability of easily reusing parts of existing specifications. The pilot actors need to appear to be intelligent and act realistically in a very complex environment. The actuators for the actor, i.e. the aircraft controls, are extremely complex and allow many degrees of freedom [Tambe et al., 1995a].
1.3 Related Work
Recently there has been a lot of interest in development methodologies for agents. Examples include methodologies for Belief Desire Intention (BDI) agents [Kinny and Georgeff, 1996], for behavior based agents [Bryson, 1998], for distributed multi-agent systems [Bussman, 1998], for safety critical multi-agent systems [Bouchebra et al., 1998] and a more general method for a range of agent oriented systems [Wooldridge et al., 1999].
Development environments, often including substantial graphical support, exist for building actors or agents for a variety of domains. For example the Bond system for collaborative network agents [Böhm and Marinescu, 1999], Jackal for agent based communication infrastructure [Cost et al., 1999], Zeus for distributed agents [Nwana et al., 1999], the Icon Modelling Tool (IMT) for mobile agents [Falchuk and Karmouch, 1998] and MissionLab[MacKenzie, 1996] for robotics. None of these tools are suited for creating complex actors for simulation environments.
Systems do exist for developing actors for interactive simulations. The AgentSheets system, for example, is designed to allow users with very limited computing experience, often children, to develop fairly complex actors [Repenning, 1993]. AgentSheets agents, however, are greatly restricted in the type of sensing and acting they can do. KidSim is also designed to allow very inexperienced users to develop intelligent actors [Smith et al., 1997]. Like AgentSheets, KidSim is very restrictive in the possible behavior and types of environment.
EASE fits into a niche between the systems mentioned above. EASE provides a structured development methodology to improve the process of building actors. However, somewhat in contrast to the methodologies above, the emphasis of this process is to build the systems quickly and easily. Like AgentSheets, EASE is not intended to be used only by actor or programming experts. However the increased abilities of EASE actors, as compared to those of AgentSheets, means that EASE users will need to be more experienced than AgentSheets users. In many ways EASE attempts to meet the same goals for simulation actor development as MissionLab does for robot programming.
2 Multi-agent Decision Making
In this section the functioning of the multi-agent system that controls an actor is explained. In the following section the development process for such a multi-agent system is described.
There are two main types of agent, referred to as managers and engineers, in the multi-agent system of an actor. The agents are arranged hierarchically into a forest of trees. Engineers will be the leaf nodes of the trees, i.e. at the bottom of the hierarchies, while all other nodes are managers. Connections between the nodes are in the form of contracts.
Manager-type agents are charged with more abstract tasks. To achieve their tasks, manager agents contract other agents, either engineers or other managers, to fulfill specific parts of the task. A manager may have contracts with zero or more agents at any time and may break existing contracts or make new ones over time. For example, one manager agent may be responsible for a particular patrol mission. The manager could contract a sequence of agents: one for taking off, one for flying through each of the patrol's waypoints and one for landing the aircraft.
Contracts form the only connections between managers and other agents in the system. The contracts always form the agents into a strict hierarchy. At design time a designer hardcodes the contracts a manager should make to handle a particular situation, i.e. there is no service brokering or negotiation to find an appropriate agent. An agent will not refuse a contract request from a manager for a new contract, in fact at the implementation level a new agent is created specifically to service the contract.
At runtime, if for some reason an agent is unable to fulfill its contract, either temporarily or permanently, it will inform the manager that contracted it. The contracting manager may then choose another course of action, if it has one, or
report the failure to the manager that contracted it.
The designer will specify one or more agents to be started at runtime. The specified agents will be the only agents not contracted by another agent at runtime. Generally the starting agents will form the tops of hierarchies of agents that in turn control the actor.
A special type of manager agent, called a list manager, is associated with a type of thing in the environment. For every instance of the type sensed by the actor the list manager will contract an agent and assign it to the specific instance. For example a list manager responsible for avoiding midair collisions will contract a specific agent to avoid each particular midair obstacle. Each contracted agent needs only concern itself with the specific obstacle it was assigned.
The other main type of agent (a manager was the first) is an engineer. Engineers are at the bottom of the agent hierarchies. Engineer agents negotiate with other engineer agents over the output of the actor, i.e. the commands the actor sends to the environment. Each engineer "argues" for output values that best fulfill the task it has been contracted to achieve.
Each degree of freedom of the actor is associated with a factory, e.g. a simulated pilot may have factories for aircraft heading and speed. In this way the complex negotiation over the actor's behavior is split into a number of simpler negotiations. A factory continuously suggests possible output values to all engineers that have registered interest in its output. The interested engineers respond with their satisfaction with the suggested output value. At regular intervals the factory sends to the environment the output value most acceptable to the interested engineers.
For example a factory for the heading of an aircraft may have two interested engineers. One of the engineers is responsible for avoiding a particular aircraft and another is responsible for getting to a particular waypoint (see Figure 2). The avoidance engineer will be satisfied with any heading suggestion that leads to avoiding the other aircraft, and be more satisfied with headings that result in comfortably avoiding the aircraft. The waypoint agent will be satisfied with headings that result in generally heading towards the waypoint and happier with those that result in heading directly toward the waypoint. Hopefully the factory can find a heading that satisfies both the engineers.
The process of negotiation is continuous. Because the environment will be changing as the negotiation proceeds so may engineers preferences for different output values also change. When new engineers are contracted or when existing engineers are stopped the factory negotiation simply continues on, albeit with the new set of agents. The continuous nature of the negotiation means that it is an anytime algorithm [Zilberstein, 1996] and hence is suitable for (soft) real-time actors.
The negotiation mechanism allows multiple high level tasks to be attended to simultaneously without a designer having explicitly considered the interactions between the tasks. Each manager, responsible for one high level task, simply contracts appropriate agents to perform pieces of its overall task. At the bottom of the subsequent hierarchies the engineers associated with the different tasks
negotiate amongst themselves and the factories to find a solution that achieves all, or as many as possible, of the managers' tasks. The negotiation process integrates the high level tasks without designer intervention.

(Figure 2: a snapshot of the multi-agent actor controller. A mission manager has contracted an agent to fly to waypoint A; an engineer created to avoid a particular aircraft negotiates over heading ("go at about 76 degrees" vs. "don't go between 60 and 80 degrees") and over altitude ("go down!"; "it's OK to go down, we are high"); the negotiated output, heading 81° and a descent, is sent to the actor's actuators.)
The earlier example, with two engineers, raises the question of what happens when the factory cannot find an output value that satisfies all of the interested agents. When not all engineers can be satisfied the priority of the engineers is taken into account.
The priority of an agent is a function of three factors. The first factor is the intrinsic priority of the agent. The intrinsic priority is a static value based on the type of task the agent has, e.g. a safety critical ground avoidance agent has a higher intrinsic priority than a fuel conservation agent. The second factor of the priority is referred to as the organizational priority. The organizational priority is defined by the role the agent has in the overall system. For example the aforementioned fuel conservation agent will have a high organizational priority when contracted by a manager responsible for leaving hazardous territory. The final factor of an agent’s priority is referred to as environmental priority. The environmental priority is dynamic and varies according to the significance of the agent given the current environmental circumstances. For example a fuel conservation agent may have high environmental priority when the remaining fuel is low and the aircraft is far from base.
Although most significant for engineers, all agents have a priority. For managers the priority system, in particular the organizational factor, means that higher priority managers get more say in low level negotiations, i.e. a higher priority manager will have engineers lower down in their hierarchy with high organizational priorities and hence more say in negotiations.
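To make the negotiation scheme concrete, here is a purely illustrative Python sketch: the satisfaction functions, the multiplicative combination of the three priority factors and the exhaustive sampling over candidate headings are all assumptions of the example, not EASE's actual protocol.

```python
# Illustrative factory negotiation over one degree of freedom
# (heading). All numbers and functions are invented for the example.
class Engineer:
    def __init__(self, name, satisfaction,
                 intrinsic, organizational, environmental):
        self.name = name
        self.satisfaction = satisfaction            # value -> [0, 1]
        # One plausible way to combine the three priority factors;
        # the paper does not specify how EASE combines them.
        self.priority = intrinsic * organizational * environmental

def negotiate(candidates, engineers):
    """Return the candidate value with the highest priority-weighted
    total satisfaction among the interested engineers."""
    return max(candidates,
               key=lambda v: sum(e.priority * e.satisfaction(v)
                                 for e in engineers))

# Cf. Figure 2: a waypoint engineer prefers headings near 90 degrees,
# while an avoidance engineer rules out the 60-80 degree band.
waypoint = Engineer("waypoint", lambda h: 1.0 - abs(h - 90) / 180.0,
                    0.5, 1.0, 1.0)
avoid = Engineer("avoid", lambda h: 0.0 if 60 <= h <= 80 else 1.0,
                 0.9, 1.0, 1.0)
print(negotiate(range(360), [waypoint, avoid]))   # 90
```

Because the factory simply re-runs this selection as preferences change, the negotiation is an anytime process, matching the description above.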
3 Actor Development
EASE supports the whole development process, from design to testing to reuse. Many of the development stages are explicitly supported by tools while for other stages of development a particular methodology is advocated. In this section the development process with EASE is explained in detail with particular emphasis on the tool support provided.

The development of an actor (see Figure 3) begins with the design of trees of tasks and behaviors that describe the overall behavior of the actor. Behaviors, continuous aspects of the actor's overall behavior, e.g. ground avoidance, are mixed freely with tasks, time bounded aspects of the actor's overall behavior, e.g. flying to a waypoint. The design need not consist of a single tree but may be a forest of trees. The tops of trees represent high level abstract behaviors of the actor. Further down the tree are more specific aspects of the actor's overall behavior (see Figure 1). There is no timing or sequencing information at this stage, there is simply a breakdown of overall functionality into pieces.
The next stage of development is to assign an agent to each node in the task forest. The agent will be performing the task assigned to that node. Leaf nodes will be assigned engineers and internal nodes, managers. Branches between nodes are replaced by contract specifications. The internal behavior of the agents enforces appropriate sequencing of tasks.
The assignment of agents to nodes is done in two steps. Firstly, existing libraries of agents are consulted to find agents that can be reused. Often reusable agents would have been created for other, similar scenarios. In most cases if an agent is found to match a non-leaf node in the behavior forest a whole agent hierarchy will be found for the node's branch. Usually an agent can be imported without change, however some tuning may be required later to adjust the priorities of the imported agents to the new specification. The amount of tuning should be minimal and be required only at the top of the imported agent hierarchy.
When appropriate agents cannot be found in existing libraries they are created in the core of the EASE system – the agent specification tool (see Figure 4). To specify an agent the designer specifies a name, intrinsic priority, environmental priority function, a state machine for controlling behavior and any contracts or factory assignments the agent will have.
The name of the agent can be any text, hopefully describing the intended functionality of the agent. Next the designer needs to specify whether the agent is a manager or an engineer. The intrinsic priority is set via a slider and should reflect the designer's assessment of the importance of the functionality of the agent when considered in isolation. The environmental priority function is defined via the function specification system (described below). The function maps actor sensor readings to values representing the importance of the agent in the particular situation, e.g. sensor readings indicating proximity to the ground should map to high values for an agent responsible for ground avoidance.
The state machine for an agent provides the mechanism by which the decision making of an individual agent is defined. (Having state machines as the only decision making mechanism is a current specification system limitation, rather than an important aspect of the overall system.) The state machines are standard single level Moore state-machines. State transition conditions are defined with the function specification system. If the agent is an engineer, for each state in the state machine the engineer is assigned to negotiate with a specific factory (i.e. over a particular degree of freedom). The function specification system is used, once again, to define the function that the engineer
Figure 4: The Agent Specification subsystem
uses to determine the acceptability of a factory suggestion. Alternatively, if the agent is a manager, for each state the designer specifies the contracts the manager should make in each state. Specifying a contract consists of selecting which agent should be contracted and instantiating any parameters associated with the contracted agent (e.g., the waypoint to fly to).
Once an actor has been partially or fully specified it can be tested in the target simulation environment. In order to support an iterative process of testing and incrementally expanding or improving actor behavior a number of graphical interfaces display interactively and in real-time the status of the agent’s reasoning system.
The main testing interface shows all the currently active agents and their current status. The interface allows the designer to see whether the right agents are being created and whether the agents seem to be reacting to the environment in the desired manner, e.g., by creating appropriate contracts or arguing for reasonable output values. Problems such as state transitions being taken at the wrong time or wrong contracts being entered into can be observed with the tool. The designer can return to the agent specification system and make appropriate changes if problems are observed.
If the behavior of the actor is not as expected, but the correct agents seem to be active and in the correct states, the designer can use another tool to view the status of negotiations (see Figure 5). This tool gives a real-time view of the output of a factory and the satisfaction of each of the interested engineers with the factory's suggestions. The designer may observe that some agents, possibly due to an overly high priority, are getting too much say in a negotiation. Going back to the specification, the intrinsic priority of the agent (or the agent that contracted it) could be lowered, or the environmental priority function for the agent (or the agent that contracted it) changed. Alternatively it may be observed that an agent seems to be calculating incorrect values. To investigate further the designer can pop up another window that allows snapshots of the details of function calculations to be displayed. If the designer finds problems here they can return to the function specification system to rectify the problem.
The cycle of specification and testing will be repeated until the required actor behavior is achieved. The final stage of the development process is making the newly created specification available for reuse. However, reuse requires no further effort! New actor specifications can use parts of the current specification in a completely black box fashion.
3.1 Function Specification
The function specification system allows a designer to specify the different functions that an agent uses (see Figure 6). The requirements on the function specification system are quite imposing: potentially extremely complex functions need to be specified, e.g., the activation function of an avoid-enemy-aircraft agent would take into account a wide range of factors about the relative positions of the other aircraft, in a way that does not require professional programmers and at the same time encourages reuse. The function specification system is based on
the idea of a spreadsheet. The spreadsheet paradigm was chosen because of the successful way non-computing professionals can develop complex functionality with a spreadsheet despite not being able to program.

(Figure 5: the Negotiation Debugger shows the current status of the negotiations. The vertical axis shows each agent's level of satisfaction and the horizontal axis varies over the possible headings; sampled negotiation points and the negotiated output are marked. One of the avoid agents is very unsatisfied with headings in one region, while the agent for getting to a waypoint wants heading values in another.)
The function specification system does not look like a conventional spreadsheet (the cells are not laid out in grid, rather they appear in a tree structure) and has some features that do not appear in conventional spreadsheets, most importantly parameterizable cells. Because cells are parameterizable so are functions. In turn because functions are parameterizable so are agents. Parameterizable functions and agents lead to high levels of reuse, as well as reducing the specification size. The other major difference to a spreadsheet is that values and functions are not entered as text rather they are created by selecting options from lists. This removes the possibility of specification errors due to syntax or type problems.

Figure 6: The Function Specification subsystem
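As a rough illustration of parameterizable cells arranged in a tree, consider the sketch below. The encoding is invented for this sketch; the paper does not detail EASE's actual cell model, parameterization or type system.

```python
# A toy spreadsheet-like function tree with parameterizable cells.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class Cell:
    op: Optional[Callable[..., float]] = None   # combines child values
    children: Sequence["Cell"] = ()
    param: Optional[str] = None                 # parameter cell reads env

    def value(self, env: dict) -> float:
        if self.param is not None:
            return env[self.param]              # parameterizable cell
        return self.op(*(c.value(env) for c in self.children))

# Environmental priority of a ground-avoidance agent: high when low.
altitude = Cell(param="altitude")
priority = Cell(op=lambda alt: max(0.0, 1.0 - alt / 1000.0),
                children=[altitude])
print(priority.value({"altitude": 150.0}))      # 0.85
```

Selecting operators from lists rather than typing them, as EASE does, would rule out the syntax and type errors that free-text formulas allow.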
4 Discussion
In a series of famous papers Brooks argues strongly against the dominant ways that AI practitioners went about building intelligent systems [Brooks, 1991a, Brooks, 1991b]. Rather than complex, monolithic systems incapable of interacting with the real world Brooks advocates a behavior-based approach where the overall behavior of a situated actor is broken horizontally into smaller pieces
of behavior. It is argued that actors should be built by creating simple behaviors then incrementally "subsuming" the existing behaviors with more complex behaviors.
Brooks’ behavior-based idea is extremely promising. An intuitive breakdown of overall actor behavior and a low risk incremental development process seem immediately possible. For users unaccustomed to building actors, in particular domain experts, the behavior-based paradigm potentially offers much especially in terms of having an actor specification that matches an intuitive breakdown of the actors’ task. However subsumption and behavior-based systems have not delivered all they have promised. One of the key reasons seems to be that the subtle interactions between individual behaviors means that the complexity of adding new behaviors to an actor soon becomes overwhelming [Bryson, 1998].
Recently a variety of different approaches have been taken to reduce the complexity of the interactions or, at least, provide methods for allowing developers to handle the complexity better. The methods either organize behaviors differently, e.g. [Parker, 1998, Blumberg and Galvez, 1995], or combine the outputs of the behaviors in a different way, e.g. [Yen and Pfluger, 1995, Pirjanian, 1998, Riekki, 1998, Rosenblatt, 1997]. With the EASE actor architecture we are taking this trend one step further.
A behavior in a behavior-based system is an agent in EASE. By elevating behaviors to the status of agents the interactions between behaviors simplify in the same way that “agentifying” other complex systems simplifies the interactions between subsystems. In effect, using agents instead of behaviors makes behaviors “active” rather than passive entities. The interaction between agents can then be strictly controlled, through contracts and negotiations, and more easily understood – drastically reducing the complexity of the effects on overall behavior due to subtle interactions. The reduction in the amount of subtle interactions between behaviors should lead to an increase in the level of actor complexity that a designer can be reasonably expected to develop.
The EASE multi-agent system, being an extension of behavior-based ideas, shares many properties with “standard” behavior-based systems. EASE lies very much towards the reactive end of a reactive-deliberative scale. Agents are generally very simple, mapping sensor input to appropriate contracts or negotiation strategies. However this need not be the case, agents could potentially be more complex doing any type of planning or reasoning. The overall behavior of an actor is an emergent result of the agents’ interactions with others and with the world.
The potentially conflicting requirements of a usable development environment and genuinely useful actors imply an underlying actor architecture with the following characteristics:
- Actor specifications with abstract, loosely coupled specification elements; and
- A runtime engine that combines the specification elements in a powerful and flexible manner.
The former characteristic addresses usability. It is likely to be difficult to build a development system that "compiles" a user specification into an actor unless there is a fairly close correspondence between the specification and runtime mechanisms (in the same way there is a close correspondence between constraint programming or imperative programming specifications and their respective runtime mechanisms). Hence it seems reasonable to assume that the actor architecture will need to have features that support end-user development. Abstraction is a mechanism used in all kinds of specification problems to allow a designer to break problems into manageable pieces. Loose coupling between specification elements is a desirable feature of a program design that reduces complexity, makes for more readable specifications and encourages reuse.
The latter characteristic, i.e. a powerful computational engine, addresses the problem of creating genuinely useful actors. Intuitively, the more flexibly and powerfully a specification can be interpreted by the runtime engine, the more useful behavior will be observed from a specification of a certain size and complexity. (The same phenomenon can be observed with constraint and imperative programming. The constraint runtime engine is far more powerful than an imperative one, leading to smaller and simpler constraint programs for some problems.) For example, if an actor runtime engine simply chooses the first applicable situation-action rule from some list, the overall behavior for even a large specification is likely to be fairly simple. At the other end of the spectrum, if the runtime engine takes the specification, supplements it with common sense reasoning and previous experience, then invokes sophisticated planning routines,
the observed behavior of the actor will probably be fairly sophisticated. It follows that the EASE runtime engine should be as powerful as possible.

(Figure 7: the result of a simulation run in TACSI.)
The actor architecture underlying EASE fulfills both characteristics. The hierarchical structure and "single-mindedness" of the individual agents provide the loose coupling and abstraction desired. The multi-agent runtime engine provides the powerful reasoning system required to make useful actors.
Figure 7: The result of a simulation run in TACSI.

Figure 7 shows the output from a simulation run in TACSI. There is one aircraft controlled by EASE. Notice the way the aircraft smoothly integrates getting to the waypoint and avoiding approaching aircraft. The flight path is reminiscent of obstacle avoidance paths for behavior-based robots. Although this example is very simple, it goes some way toward illustrating that the multi-agent system can produce reasonable behavior. The jury is still out on whether the emergent behavior can be usefully realistic. More work needs to be done to determine precisely the strengths and weaknesses of the approach.
Both major aspects of EASE, namely the underlying computational engine and the overlying specification process, have been designed by looking at existing systems and attempting to improve modularity. Our previous experience developing agents suggests that modularity in agent specifications is key to scaling up, reducing costs, improving testability and so on. The intended usage scenario for EASE makes modularity even more critical. In particular, good modularity should provide the following desirable properties:
- **Rapid Prototyping.** Over time libraries of agents can be built up. New actors can be rapidly put together by reusing old agents, i.e. parts of old actors.
- **Highly Complex Actors.** As with the development and design of any complex system modularity is a key to making a specification comprehensible and manageable.
- **Use by Novices.** Once libraries of agents have been developed relative novices should be able to piece together agents in order to create required actors.
- **Good Development Process.** All reasonable software development processes rely on being able to break the problem down into pieces; developing actors should be no different.
- **Development Teams.** Good modularity of specification allows different developers to work on different parts of the same actor, leading to shorter development times.
Modularity was emphasized in the computational engine by limiting interactions between the agents. To an even greater extent, modularity was a driving concern in the design of the development system, where it was largely achieved by ensuring that the tools encouraged breaking a task into pieces. The idea of specific-purpose agents leaves little room for, say, mixing flying to a waypoint with avoiding obstacles. In a complementary way, the tools provide no mechanisms for considering the internals of other agents, or even knowing about the existence of any agents except those directly, hierarchically related.
5 Future Work
Future work is intended to push the system even further into the hands of domain experts, beyond the present, realistic target in which low-level agents and functions are created by agent experts and pieced together by domain experts. At the time of writing, EASE was about to begin on-site testing at Saab with simulation experts.
Future work on the actor architecture will include experimenting with different negotiation mechanisms, including the method the factory uses for finding new suggestion values, the protocol for the negotiation, and the function the factory uses to decide which suggestion is "best". At present, negotiation occurs only at the level of engineers; in the future, negotiation between managers may allow better integration of multiple high-level goals.
Future work on the EASE specification system will focus on ways of encouraging designers to develop actor specifications that can be easily reused. At the other end of the development cycle, the process of identifying which parts of a new specification can reuse existing agents, and of finding the appropriate existing agents, will be improved.
Acknowledgments
This work is supported by Saab AB, Operational Analysis division, The Swedish National Board for Industrial and Technical Development (NUTEK), under grants IK1P-97-09677 and IK1P-98-06280, and Linköping University’s Center for Industrial Information Technology (CENIT), under grant 99.7.
Realistic Hair Visualization in Maya:
New Workflows and Methods
Table of Contents
1. **Introduction**
2. **Abstract**
3. **Overview**
4. **Application in Maya**
   4.1 Geometry-based hair
       4.1.1 Zbrush hair
       4.1.2 Polygon cluster hair
   4.2 Software hair tools in Maya
       4.2.1 Maya Paint Effects Hair
       4.2.2 Maya Hair
       4.2.3 Maya Fur
5. **Integration & Practicing**
   5.1 Overview of pros & cons of previous methods
   5.2 Suggestion of a preferable method in Maya
   5.3 Implementation
       5.3.1 Stage 1: Polygons to Curves
       5.3.2 Stage 2: Creating Paint Effects Hairs
       5.3.3 Stage 3: Apply Hair Style systems
       5.3.4 Stage 4: Apply Hair Dynamics systems
6. **GMH Hair Creator MEL Script**
7. **Final Products**
8. **Future developments**
9. **Conclusion**
1. INTRODUCTION:
Hair modeling is a fundamental part of creating virtual humans in CG. Many artists succeed in creating a human head and body but have not achieved the same result with hair, which varies in form, shape, color, material and dynamic properties. Hair modeling is a complex problem, and it does not yet have a complete industry-standard solution: many companies and individuals have developed their own tools and methods for creating and manipulating CG hair. Modeling and rendering hair to look close to realistic is difficult, and simulating its dynamics is even more problematic.

Hair modeling can be divided into three main categories: hair styling, simulation (dynamics) and rendering (Magnenat-Thalmann, 2001). In general, there are two groups of methods in Maya that can be used for hair creation: geometry hair and the native software tools in Maya. The former can be further classified into Zbrush hair and polygon patches, and the latter into Maya Paint Effects Hair, Maya Hair and Maya Fur. I will study how these methods can be used in Maya for production. Finally, after examining their pros and cons, I will combine these methods into a more flexible and preferable way of creating hair.
2. ABSTRACT:
In this report, I propose a new workflow and method for hair modeling, developed by investigating the existing methods and identifying their pros and cons. To combine the strengths of the three most common existing methods while avoiding their weaknesses, I first model polygon hair as usual, then use Paint Effects tubes to generate curves based on these polygon hair surfaces. Finally, I assign Maya Hair to those Paint Effects curves, so that they inherit the look and rendering of Maya Hair while still keeping the neat control properties of Paint Effects tubes.

This results in a multi-resolution editing system for hair modeling, which allows the user to edit the hair system at multiple levels, from the overall silhouette down to the micro detail of individual hair strands. It also has a great advantage in dynamics solving.

I also wrote a MEL script for my proposed workflow and method, which provides an easy way, with a user interface, to manage and organize the data used for the hair models. The application examples given in this report demonstrate the effectiveness of the proposed workflow and method.
3. OVERVIEW:
Firstly, we should consider the characteristics of human hair. Hair style varies among different persons and races, and depends to some extent on the characteristics of the individual hair strands. For example, African hair styles tend to be afro-like and solid in shape because the strands are very hard, thick and curly; Asian hair tends to be smooth and thin because the strands are small and round; European strands are slightly curly and elliptical in cross-section. (Source: http://en.wikipedia.org/wiki/Hair)
However, given the massive number of hairs on a human head (on average 100,000 strands) (Anna Sokol, 2003) and current computing and hardware limitations, we cannot accurately represent and simulate every individual strand; we have to simplify the hair to a general shape and volume that is easier and lighter to control and manage. The methods below are different approaches to this simplification, as well as to control and dynamics handling. I will go through each of them and evaluate the following characteristics (Kelly Ward, 2004):
- Hair dynamics.
- Performance vs. visual fidelity.
- Unique styling / ease of control.
- Variation in hair properties and shape.
4. APPLICATION IN MAYA
4.1 GEOMETRY-BASED HAIR

Geometry-based hair is the most common method of creating hair in games and animation today, because it is simple and easy to model, compute dynamics for, and render. It remains the simplest effective way to model hair.

This approach is based on a characteristic of real hair: hair tends to group together and stay static enough to maintain a solid overall silhouette. The method represents a group of hairs as a single geometric shape such as a trigonal prism or generalized cylinder (Kelly Ward, 2004).

This gives the user an easier and quicker way to model and decide the hair shape and form, as it is straightforward polygon modeling. The user can quickly block out the overall hair shape and freely create a style using geometry modeling techniques.

Below are some popular ways of creating polygon hair:
4.1.1 Zbrush hair:
Using a powerful sculpting and modeling package like Zbrush, we can create various polygon hair styles: given a base mesh, we start sculpting to form the overall hair shape. However, hair models from Zbrush or any other sculpting software are problematic for rendering and dynamics, since they consist of very high-density geometry that is nearly impossible to render and simulate in animation software like Maya, Max or XSI. Such models are suitable only for single renders (still images, posters) and for prototyping hair styles; they can, however, also be used to generate normal maps for low-polygon hair in next-gen games.
[Figure: Hair sculpting in Zbrush, by Rodrigue Pralier (2008): blocking out the overall hair shape, then creating loose hair strands using Zspheres.]

[Figure: Completed hair by Joe Lee (2009), Zbrush version vs. in-game version, from Army of Two: The 40th Day, Electronic Arts (2009).]
**Pros:**
- Fast for prototyping hair styles; easy to stylize and block out the overall shape.
- Good for one-piece, compact hair styles (such as short male hair).
- Suitable for creating normal maps for low-poly game art.

**Cons:**
- Lacks natural detail such as loose strands and small curly hairs.
- Consumes a great number of polygons, which makes it difficult to rig dynamics and render.
- An overall hair texture needs to be made manually.
4.1.2 Polygon cluster/wisps:
The most popular way of creating polygon hair is to use clusters of polygons to represent wisps of parallel hair strands. These polygon clusters vary in shape and form, which greatly alters the look of the hair; typical forms are strips (thin, flat patches of NURBS or polygon surface), cylinders, boxes, etc. Cluster polygon hair is often used in games and animation, as it is easy to model and texture, takes much less time to render, and is easy to make dynamic.
Example of polygon cluster hair patches:
Each patch of polygons represents a number of hair wisps; the more polygon patches used, the better the result, because the hair surface is represented more accurately and closer to the real hair model. However, more surface patches mean more computation for rendering and dynamics. In the example below, using 4 large hair geometry patches, the render time is 2.34 minutes.

If we increase the number of hair patches to better represent the real hair model (12 patches), the result is much closer to real hair, but we lose performance in render time and dynamics simulation.

Because this simplification groups hair into clusters of polygon patches, the method has a major flaw in the interpolation between two clusters: the artifact is clearly visible at the intersection of two clusters in close-up views. At some odd angles we can also see the boundaries of the patches, which leads to an unrealistic look and makes the hair resemble groups of thin plates. To prevent these artifacts, we have to increase the number of polygon patches and lower the number of hair strands each patch represents; more accurate collision detection is also needed to prevent the patches from intersecting one another.
4.2 SOFTWARE HAIR TOOLS IN MAYA:
Maya also has native tools to create hair; it comes with three: Paint Effects Hair, Maya Hair and Maya Fur.
4.2.1 Maya Paint Effects Hair:
This is basically the Paint Effects tool in Maya, generating hair-like Paint Effects strokes.
**Pros:**
Using Paint Effects, the user can either paint the stroke directly onto a surface or make the stroke follow a specific curve or surface.

This way we can control where the hair should be placed and easily create a path that the hair grows along, which gives an easy and accurate way to manipulate the growth path of the hair or to create a hair whorl.

Paint Effects hair has many attributes controlling the way hair grows along control curves, which gives the user a flexible and precise way to create various hair styles. The user can also control properties of individual hair tubes such as width, width scale, length, curl, etc. Moreover, using Paint Effects hair we can specify the path the hair grows along and create unique hair styles by making the hair tubes run along the user's control curves.

Paint Effects hair renders quickly with the Maya software renderer. For mental ray to render Paint Effects hair, the user can convert it to polygons; this turns it into polygon-type hair, which can be manipulated at the vertex level and assigned custom textures and shaders.
**Cons:**
The weakest point of Paint Effects hair is its look. Although it renders quickly with the Maya software renderer, the lighting and shadow quality is not good enough for production, which means more work during post-production later.

The reason is that Paint Effects hair is composited in as a post-process via Z-depth sorting when using the Maya software renderer. In the illustrations above, there is no self-shadowing among the hair tubes at all, and the shadow cast onto the wall or face mesh is a fake offset shadow.

On the other hand, converting Paint Effects hair to polygons and using a custom shader improves the look considerably; however, rendering the converted polygon hair with mental ray requires a massive number of polygons, which is heavy, expensive and inefficient for rendering and simulation.

Moreover, Paint Effects hair does not support a dynamics solver; we either need to rig the polygon hair manually or apply a dynamics solver to the Paint Effects stroke.
4.2.2 Maya Hair:
Maya Hair is a native tool for creating and simulating realistic hair in Maya. It is basically a dynamic curve system. Maya Hair covers almost all the weak points of Paint Effects hair, including dynamics solving, rendering and optimization for speed. A Maya Hair model consists of groups of hair follicles; these follicles are used to solve the hair dynamics, and they also represent hair clumps consisting of groups of small hair strands.
**Pros:**
Similar to the simplification used for polygon hair patches, each follicle represents a group of hairs. Therefore, the more follicles we use, the more realistic and accurate the simulation, and the more control we have over individual hair strands.

We also have control over the properties of each individual follicle, giving each hair group a unique look (density, curl, color, etc.). Maya Hair can also generate braids easily if the braid option is checked.

Maya Hair also provides a number of attributes to control the hair texture and random noise in diffuse, specular, etc., which can bring the look of the hair closer to realism.

Maya Hair renders fairly quickly using the software renderer with decent quality. It can also be rendered in mental ray for better quality without being converted to polygons, as it is fully integrated into the scene when using mental ray. Maya Hair renders are much more accurate and closer to real hair than Paint Effects hair.
**Cons:**
Despite these big advantages over Paint Effects hair, Maya Hair is very difficult to model with, and it is hard to maintain an overall hair style and shape using the conventional workflow, which is:
- growing hair follicles on surfaces or points;
- using the Maya curve editing tools, deformation tools, or the Maya Hair "Modify Curves" tool to edit those curves and model the hair style.

This method is slow, and it is hard to style the hair into the user's desired shape, especially when there are many follicles. The result is dull-looking hair or a very primitive hair style; creating complex styles takes a lot of time and effort.

Another drawback is that Maya Hair dynamics break the initial hair style (the initial shape of the curves) during simulation. To prevent this, we can tweak the stiffness and attract-to-curve values, but this results in unrealistic dynamics and very stiff hair.

For example, using dynamics setting A shown in this illustration, we can keep the shape of the hair style during the simulation; however, if the base object rotates to point downward, the hair keeps its shape and points upward rather than being pulled down by gravity, which is not how real hair behaves. If we change to another setting, say setting B, which allows the hair to be pulled down by gravity, the hair cannot keep its style and initial form during and after the simulation. A scripted sketch of these two competing settings is given below.
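The following is a minimal, hedged sketch of these two settings in Maya's Python API; the node name 'hairSystemShape1' is a placeholder, and the attribute names (stiffness, startCurveAttract, as on classic Maya Hair nodes) should be treated as assumptions that may differ across Maya versions:

```python
# Sketch of the two competing dynamics settings discussed above.
# Assumed node/attribute names; adjust for your scene and Maya version.
import maya.cmds as cmds

def setting_a(hair_system="hairSystemShape1"):
    # Holds the styled shape well, but largely ignores gravity.
    cmds.setAttr("%s.stiffness" % hair_system, 0.9)
    cmds.setAttr("%s.startCurveAttract" % hair_system, 1.0)

def setting_b(hair_system="hairSystemShape1"):
    # Follows gravity naturally, but loses the styled shape.
    cmds.setAttr("%s.stiffness" % hair_system, 0.1)
    cmds.setAttr("%s.startCurveAttract" % hair_system, 0.05)
```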
4.2.3 Maya Fur
Maya Fur is simpler than Maya Hair and is specialized for creating fur; however, we can use it to generate hair styles with fur-like characteristics, such as afros, short male hair, punk hair, etc.

It is fast to model with, using painting tools that can paint baldness, color, length, polar and other maps. As stated above, though, it only suits short hair styles; modeling long hair with Maya Fur is nearly impossible, or at least very difficult without additional scripts or plugins. It is therefore ideal for the extra small, short hairs that poke out of the overall hair volume, or for the short curly hair around the back of the neck.
5. INTEGRATION & PRACTICING
5.1 Overview of pros & cons of previous methods:
Having studied the different methods of making hair in Maya, I can conclude that:

+ Polygon hair modeling is the quickest method, and it gives the user the easiest and most accurate control for modeling and styling the overall shape of the hair. However, it lacks the realistic feel of individual hair strands, clump interpolation and dynamics solving; it is only good for modeling the overall silhouette of the hair style.

+ Paint Effects is good at controlling how individual hair clumps grow and run while still keeping the overall shape. It also automatically generates hair strands along given path curves, which saves the user a lot of time.

+ Maya Hair is very powerful in shading, rendering, and control over individual hair strands and clumps, and it provides an internal dynamics system to solve hair dynamics. However, styling the hair to the user's desired look with the conventional workflow takes a large amount of work and time.
5.2 Suggestion of a preferable method in Maya.
Based on the pros and cons above, I devised a way to combine the advantages of each method into my own way of creating hair in Maya.

My basic idea is: model and style the overall silhouette of the hair using polygon modeling. Then use Paint Effects to generate curves running along these base geometries, producing a group of curves that take the form of the hair silhouette created in the polygon modeling stage (these curves will later represent hair clumps or strands). Finally, use those Paint Effects curves as the input for Maya Hair, so that the Maya Hair clumps run along the curves created by Paint Effects.

To sum up: we model polygon hair as usual, then use Paint Effects to generate curves based on these polygon surfaces; these curves can be tweaked and controlled using the Paint Effects settings for a more natural look. Finally, to be able to render these Paint Effects curves, we assign Maya Hair to them.

This method inherits the full advantages of the three methods above. We can model and style the hair as easily as with polygon modeling, have as much control over the hair strands as with Paint Effects while still keeping the overall silhouette, and inherit the render settings and look of the Maya Hair system. It also brings extra benefits for editing at different history levels, solving dynamics, etc. These extra advantages and the internal structure of my method are discussed further below.
5.3 Implementation:
Below are the detailed practical stages of my workflow for this method.
5.3.1 Stage 1: Polygons to Curves
First I modeled the hair style using polygon modeling, then converted it to curves using my own MEL script. To do this, each polygon surface must be converted to a NURBS surface; the curves are then duplicates of the NURBS surface's CV curves. The polygon surfaces therefore need to be in the form of tubes or flat patches, i.e. each must convert to a single NURBS patch, otherwise this will not work. The converted curves are used as control and path curves along which the Paint Effects tube hair grows.

As stated above, these curves are duplicates of the U and V curves of the NURBS surface: if the U curves are used as control curves, then one V curve at the end of the NURBS surface must be the path curve for the Paint Effects hair, and vice versa.

Why do we need to create curves based on a polygon surface, rather than drawing the curves directly in 3D space without a base mesh?

The reason is that we will likely need to tweak and edit the overall shape of the hair style many times later, and it would be very time-consuming to edit all those curves whenever a major part of the hair silhouette changes. These curves are created in the parameter space of the NURBS surfaces with histories and connections kept, so if we change the base surface, the curves automatically update to adapt. This way, to edit the hair style we just need to edit the base mesh using whatever polygon-editing tools we like.
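A minimal sketch of this stage in Maya's Python API follows; the node name 'hairTubeNurbs' is a placeholder, and the sketch assumes the polygon tube has already been converted to a single NURBS patch with 0-1 parameterization:

```python
# Duplicate evenly spaced isoparm curves from a NURBS hair patch.
# ch=True keeps construction history, so editing the base surface
# later automatically updates all the derived curves.
import maya.cmds as cmds

def surface_isoparms_to_curves(nurbs_surface, n_curves=5, direction="u"):
    curves = []
    for i in range(n_curves):
        param = i / float(n_curves - 1)  # 0.0 .. 1.0 across the surface
        isoparm = "%s.%s[%f]" % (nurbs_surface, direction, param)
        curve = cmds.duplicateCurve(isoparm, ch=True, rn=False, local=False)[0]
        curves.append(curve)
    return curves

control_curves = surface_isoparms_to_curves("hairTubeNurbs", n_curves=7)
```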
5.3.2 Stage 2: Creating Paint Effects Hairs
I created a Paint Effects tube stroke using the curves from the previous stage as path and control curves. These Paint Effects tubes will later be used as the Maya Hair input curves. Paint Effects tube curves automatically fill in the gap between two control curves, so little effort is needed to fill in the remaining hair strands on the head. I can also edit the control curves manually to get rid of the parallel look between hair strands.

In this stage we can also control the hair style at the hair-clump level using the Paint Effects tube settings: we can make the tubes bend, curl, spiral, etc., and control how closely they follow the path and control curves. This step makes the hair style look much more natural, as it removes the parallel artifacts of the hair simplification method while adding more follicles to the hair model.

As the example above shows, Maya Hair using Paint Effects curves as follicles looks much more natural than Maya Hair using the surface's CV curves as follicles directly.

This whole process builds up a multi-resolution editing system for the hair model (Kim and Neumann, 2002), which allows the user to pick any level of control to edit, from the large overall shape of the hair model down to small details of individual hair strands.
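A hedged sketch of attaching the current Paint Effects brush to the Stage 1 curves follows; AttachBrushToCurves is assumed here to be the runtime command behind Paint Effects > Curve Utilities > Attach Brush to Curves, so treat it as an assumption and use the menu instead if your Maya build differs:

```python
# Attach the currently selected Paint Effects brush to a set of curves.
import maya.cmds as cmds
import maya.mel as mel

def attach_brush_to(curves):
    cmds.select(curves, replace=True)
    mel.eval("AttachBrushToCurves")  # assumed runtime command name

attach_brush_to(control_curves)  # curves produced in Stage 1
```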
5.3.3 Stage 3: Apply Hair Style systems
From this stage we can either render the Paint Effects strokes with the software renderer, or convert them to polygons and render with mental ray. However, for a better result, I output the Paint Effects as curves and then use those curves as hair strands for the Maya Hair system, by assigning a Maya Hair system to them.

I also found a way to let one follicle carry hair from more than one hair system. Normally, each Maya Hair system has its own group of follicles, and through the attributes currentPosition and outputHair these follicles know where and how the hair strands should sit. However, we can apply a second hair system to the same follicles by connecting the outHair attribute of the existing follicles to the inputHair attribute of the new hair system. We do not need to connect the currentPosition attribute for the second hair system, since the follicles already receive position data from the first hair system; the new system simply uses the positions of the existing follicles. The new hair system's style is added to the existing follicles, so the same follicles carry two different-looking hair styles at the same time, with different settings, while sharing the same positions. This is efficient and saves unnecessary follicles when using multiple hair systems on the same hair model.

Using this technique, I can efficiently add extra hair-system styles to my hair model without increasing the number of follicles or the dynamics computation, while reusing the existing follicles. In this case, I added extra thin hairs around the hair model to make it look more realistic and natural.
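The follicle-sharing connection described above can be sketched in Maya's Python API as follows; the node names are placeholders:

```python
# Share one follicle between two hair systems. Only outHair -> inputHair
# is connected for the second system, since the follicle already receives
# currentPosition data from the first hair system.
import maya.cmds as cmds

def share_follicle(follicle_shape, second_hair_system, index):
    cmds.connectAttr(
        "%s.outHair" % follicle_shape,
        "%s.inputHair[%d]" % (second_hair_system, index),
        force=True,
    )

share_follicle("follicleShape1", "hairSystemShape2", 0)
```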
5.3.4 Stage 4: Apply Hair Dynamics Systems
Even though there are more than 100,000 strands of hair on a human head, hair strands luckily tend to move in groups, in a similar direction to their neighboring strands (Kelly Ward, 2004). Based on this fact, we do not need to solve dynamics for every single hair strand. Because of the internal structure of this method, the actual Maya Hair is driven by Paint Effects curves, those curves are driven by control curves, and the control curves are based on polygon surfaces. The histories of all these control levels are kept during the build process, so we can solve the hair dynamics by applying a dynamics system at any of the three levels of control; the rest of the hair model then moves according to the dynamic parts.

- We can solve dynamics by applying an nCloth dynamics system at the base-geometry level; however, this only solves dynamics at the level of large hair surfaces against each other, not individual clumps.

- Alternatively, we can use the dynamics properties of the Maya Hair at the final level of control, which solves dynamics at the level of individual hair clumps. However, because we want natural-looking hair, we need a fairly large number of follicles at the modeling stage so that the hair does not look too clumpy; this results in a large number of follicles that are expensive to simulate and very slow to preview while tweaking settings. Moreover, applying dynamics at this final stage also fades away the initial style and form modeled earlier: it is very difficult for the follicles to keep their initial form during and after the simulation.

- Finally, the most effective way of solving dynamics I have found so far is to apply dynamics to the control curves at the second level of control, the Paint Effects tubes. When these control curves are made dynamic, they are not limited to surface-to-surface interaction: each of them has its own dynamics and collision behavior.

The Paint Effects curves follow these control curves while keeping all the settings of the Paint Effects stroke, so they retain their initial form while moving dynamically with the control curves. Performance is also much better, since we only need to solve dynamics for those few control curves.

To implement this, I use the dynamics system of Maya Hair (the dynamic curve system). The CV curves duplicated from the NURBS surface are plugged into a Maya Hair system as input curves to solve the dynamics. This dynamic hair system outputs a group of curves, which are then used as the control curves for the Paint Effects tubes. This Maya Hair system does not need to be rendered; it is used only to solve dynamics, so it does not need a pfxHair node connected to it.

To optimize preview speed for the hair dynamics in animation, we just need to turn off the visibility of everything except the output curves of this hair system, which are few enough to play back smoothly yet sufficient to represent the overall behavior of the hair model (see the sketch below).
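A simple sketch of this preview optimization, with placeholder node names:

```python
# Hide everything except the dynamic output curves so playback stays
# interactive while tweaking the simulation settings.
import maya.cmds as cmds

def isolate_output_curves(output_curve_group, hide_these):
    for node in hide_these:
        cmds.setAttr("%s.visibility" % node, 0)
    cmds.setAttr("%s.visibility" % output_curve_group, 1)

isolate_output_curves("dynOutputCurves_grp",
                      ["hairTubeNurbs", "pfxHairStrokes_grp", "baseMesh"])
```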
However, this method is only a simplification of full dynamics solving for the hair. Its weak point is solving accurate collisions between the small hair clumps and external objects, because of its interpolating nature. To cover this, we have to enable collisions at the final level of control of the hair system, but doing so slows performance considerably.

Which stage to apply the dynamics solver at depends on the situation and the hair style. This study does not focus primarily on dynamics solving for hair, so I will not go more deeply into this issue.
Here is a sample video showing the hair driven by dynamic control curves:
[ File: hair_dynamic.avi ]
6. GMH Hair Creator MEL Script:
Because of the many complex steps in creating the hair, my method became difficult to implement manually and hard to maintain, with many objects and control layers to organize. So I decided to write a MEL script that condenses these steps into a few buttons and provides an easy interface to manage and organize every curve, stroke, style, hair system, etc. I named it after the characteristics of the method: "Geometry Hair to Maya Hair creator" (Geo-MayaHair, or GMH). While writing and using this script, I also found some additional interesting and efficient ways of making hair.

Here are some features of Geo-MayaHair that help users create Maya hair more easily:
- **CONVERTING MESH TO CURVES:**
Users can convert polygon objects to curves for the first stage of the method. As mentioned before, to get a correct result the polygon objects need to be tube-shaped or flat patches of surface. Tube shapes are recommended because of the way the script works: with tubes, users can extrude and edit the polygons and the hair keeps growing along with them, as long as the shape remains a tube. Flat surface shapes are fine too, but the user can then only edit the shape, not change the topology, after converting it to curves.
- **CREATE AND APPLY PAINT EFFECTS:**
After modeling the polygon hair and converting it to curves, users can create different PFX styles and apply them to any surface patch that has been converted to curves. Even after applying the PFX stroke to the polygon mesh, users can still edit the polygons.

Using a list of strokes and brushes, users can manage and organize PFX styles easily. It is easier to select a stroke in the list than to select it in the viewport once it has been converted to Maya hair.
- **CREATE AND APPLY DYNAMIC HAIR SYSTEMS:**
At the moment, I only include the option to apply a dynamics system to the control curves, which I consider the optimal approach for this method. Users can create different dynamic hair systems and apply them to different strokes in the scene.
- **CREATE AND APPLY STYLIST HAIR SYSTEMS:**
The user can create and modify different hair systems and then apply them to whichever strokes he wants. As noted above, multiple styling Maya Hair systems can be applied to a single stroke without increasing the number of follicles on that stroke: all the styling hair systems applied to a stroke share the same group of follicles, and the follicle override settings as well.
- **MANAGING AND EDITING FOLLICLES:**
Maya Hair lets the user tweak override settings on each individual follicle, such as intensity, twist, color, etc. However, selecting follicles is very difficult in the Maya viewport and in the Maya Outliner. This script lets you quickly get a list of follicles from a selected stroke, hair system, or group of selected curves (see the sketch below).
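The follicle-listing idea can be sketched like this; the node name is a placeholder, and the approach relies on the fact that every follicle's outHair plugs into the hair system's inputHair array (see Stage 3):

```python
# Recover a hair system's follicles by walking its inputHair connections,
# instead of picking follicles in the viewport or Outliner.
import maya.cmds as cmds

def follicles_of(hair_system_shape):
    connected = cmds.listConnections(
        "%s.inputHair" % hair_system_shape, type="follicle") or []
    return sorted(set(connected))

for f in follicles_of("hairSystemShape1"):
    print(f)
```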
Here is a demonstration video showing the usage of the script.
[ File: script_demo.wmv]
7. Final Products
Using this script saved me a lot of time in creating and editing Maya hair with this method. All the sample images below were created in less than one hour (the actual time for making the hair model, excluding render time).
8. Future developments
For further development of my workflow and method for creating hair in Maya, I would like to study hair dynamics more deeply, to solve the remaining problem of the hair losing its initial form and shape during and after simulation.

Besides, I would like to develop a tool that lets users model polygon hair more easily and quickly, for example by drawing paths for extruding polygons, and that offers freer ways to style the hair, such as combing and dragging the polygon hair and painting follicle attributes and colors. As for the GMH hair creator MEL script, I would like to add extra template settings for PFX styles, dynamic hair systems and styling hair systems, covering different hair styles (afro, Asian, Western, blonde, etc.) and different hair conditions (wet, dry, statically charged, etc.).

However, I am quite happy with the current state of the project. It took me days of work and a lot of effort to get good-looking hair in Maya before I came up with this method of creating hair and the Geo-MayaHair creator script.
9. Conclusion:
A new workflow and method has been proposed in this paper to achieve realistic hair visualization in Maya. This new workflow and method incorporates the advantages of the three most common existing hair modeling methods in Maya: polygon hair, Maya Paint Effects hair and Maya Hair. The applications indicate that the proposed workflow and method are advantageous over each of these methods on its own. With the GMH MEL script and the new workflow, I was able not only to produce high-quality hair models in a short amount of time, but also to edit and change the properties of a hair model easily and quickly.

Working on this project, I have come to understand more about the internal structures of the different Maya hair systems, how they are connected together, and how to take advantage of that. Making realistic, good-looking CG hair is a difficult challenge, involving not only style modeling, shading and the look of the hair, but its dynamics as well. Because of the massive number of hairs humans have, and the complexity of the various hair styles and their dynamic properties, visualizing hair in 3D is a tricky field; we need to make the best of everything available in order to get a good result.
References

2. Thorsten Scheuermann (2004). "Practical Real-Time Hair Rendering and Shading".
4. Jonathan Gibbs (2001). "Rendering Skin and Hair".
5. J. Adrian Herbez (2006). "Using Maya Hair".

Further readings:

11. Kim and Neumann (2002). "Interactive Multiresolution Hair Modeling and Editing".
Objectives: parsing lectures
Understand:
- Theory and practice of parsing
- Underlying language theory (CFGs, ...)
- Top-down parsing (and be able to do it)
- Bottom-up parsing (time permitting)
- Today’s focus: grammars and ambiguity
Parsing
- Abstract Syntax Tree (AST)
- Captures hierarchical structure of the program
- Is the primary representation of the program used by the rest of the compiler
- It gets augmented and annotated, but the basic structure of the AST is used throughout
Parsing: two jobs
- Is the program syntactically correct?
- a := 3 * (5 + 4); if x > y then m := z; (syntactically correct)
- a := 3 * / 4; if x < y else m := z; (syntax errors)
- If so, build the corresponding AST
Context-free grammars (CFGs)
- For lexing, we used regular expressions as the underlying notation
- For parsing, we use context-free grammars in much the same way
- Regular expressions are not powerful enough
- Intuitively, they can’t express balance/nesting (aⁿbⁿ, matched parens); see the example after this list
- More general grammars are more powerful than we need
- Well, we could use more power, but instead we delay some checking to semantic analysis instead of doing all the analysis based on the (general, but slow) grammar
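For example, a* b* is regular, but the language { aⁿbⁿ : n ≥ 0 } (equal counts of a's and b's, the essence of matched parentheses) is not, while a two-production CFG captures it:
S ::= a S b
S ::= ε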
CFG terminology
- Terminals: alphabet, or set of legal tokens
- Nonterminals: represent abstract syntax units
- Productions: rules defining nonterminals in terms of a finite sequence of terminals and nonterminals
- Start symbol: root symbol defining the language
Program ::= Stmt
Stmt ::= if Expr then Stmt else Stmt end
Stmt ::= while Expr do Stmt end
EBNF description of PL/0
```
Program ::= module Id ; Block Id .
Block ::= DeclList begin StmtList end
DeclList ::= { Decl ; }
Decl ::= ConstDecl | ProcDecl | VarDecl
ConstDecl ::= const ConstDeclItem { , ConstDeclItem }
ConstDeclItem ::= Id : Type = ConstExpr
ConstExpr ::= Id | Integer
VarDecl ::= var VarDeclItem { , VarDeclItem }
VarDeclItem ::= Id : Type
ProcDecl ::= procedure Id ( [ FormalDecl { , FormalDecl } ] ) ; Block Id
FormalDecl ::= Id : Type
Type ::= int
StmtList ::= { Stmt ; }
Stmt ::= CallStmt | AssignStmt | OutStmt | IfStmt | WhileStmt
CallStmt ::= Id ( [ Exprs ] )
AssignStmt ::= LValue := Expr
LValue ::= Id
OutStmt ::= output := Expr
IfStmt ::= if Test then StmtList end
WhileStmt ::= while Test do StmtList end
Test ::= odd Sum | Sum Relop Sum
Relop ::= <= | <> | < | > | >= | =
Exprs ::= Expr { , Expr }
Expr ::= Sum
Sum ::= Term { ( + | - ) Term }
Term ::= Factor { ( * | / ) Factor }
Factor ::= - Factor | LValue | Integer | input | ( Expr )
```
Exercise: produce a syntax tree for squares
```
module main;
var x:int, squareret:int;
procedure square(n:int);
begin
squareret := n * n;
end square;
begin
x := input;
while x <> 0 do
square(x);
output := squareret;
x := input;
end;
end main.
```
Derivations and parsing
- Derivation
- A sequence of expansion steps,
- Beginning with the start symbol,
- Leading to a string of terminals
- Parsing: inverse of derivation
- Given a target string of terminals,
- Recover nonterminals/productions representing structure
Parse trees
- We represent derivations and parses as parse trees
- Concrete syntax tree
- Exact reflection of the grammar
- Abstract syntax tree
- Simplified version, reflecting key structural information
- E.g., omit superfluous punctuation & keywords
Ex: An expression grammar
- E ::= E Op E | - E | ( E ) | int
- Op ::= + | - | * | /
Using this grammar, find parse trees for:
- 3 * 5
- 3 + 4 * 5
Ambiguity
- Some grammars are ambiguous
- Different parse trees with the same final string
- (Some languages are ambiguous, with no possible non-ambiguous grammar; but we avoid them)
- The structure of the parse tree captures some of the meaning of a program
- Ambiguity is bad since it implies multiple possible meanings for the same program
- Consider the example on the previous slide: 3 + 4 * 5 can parse as (3 + 4) * 5 = 35 or as 3 + (4 * 5) = 23
Another famous ambiguity: dangling else
- Stmt ::= ...
- if Expr then Stmt | if Expr then Stmt else Stmt
- if e1 then if e2 then s1 else s2
- To which then does the else belong?
- The compiler isn’t going to be confused
- However, if the compiler chooses a meaning different from what the programmer intended, it could get ugly
- Any ideas for overcoming this problem?
Resolving ambiguity: #1
- Add a meta-rule
- For instance, “else associates with the closest previous unmatched if”
- This works and keeps the original grammar intact
- But it’s ad hoc and informal
Resolving ambiguity: #2
- Rewrite the grammar to resolve it explicitly
```
Stmt ::= MatchedStmt | UnmatchedStmt
MatchedStmt ::= if Expr then MatchedStmt else MatchedStmt
              | OtherStmt
UnmatchedStmt ::= if Expr then Stmt
              | if Expr then MatchedStmt else UnmatchedStmt
```
- Formal, no additional meta-rules
- Somewhat more obscure grammar
Resolving ambiguity: #3
- Redesign the programming language to remove the ambiguity
```
Stmt ::= if Expr then StmtList end
       | if Expr then StmtList else StmtList end
```
- Formal, clear, elegant
- Allows StmtList in then and else branch, without adding begin/end
- Extra end required for every if statement
What about that expression grammar?
- How to resolve its ambiguity?
- Option #1: add meta-rules for precedence and associativity
- Option #2: modify the grammar to explicitly resolve the ambiguity
- Option #3: redefine the language
Option #1: add meta-rules
- Add meta-rules for precedence and associativity
```
E ::= E+E | E-E | E*E | E/E | E^E | (E) | -E | int
```
- Precedence: ^ binds tightest, then unary -, then * and /, then + and -
- +, -, *, / are left-associative; ^ is right-associative
- Simple, intuitive
- But not all parsers can support this
- yacc does
Option #2: new BNF
- Create a nonterminal for each precedence level
- Expr is the lowest precedence nonterminal
- Each nonterminal can be rewritten with higher precedence operator
- Highest precedence operator includes atomic expressions
- At each precedence level use
- Left recursion for left-associative operators
- Right recursion for right-associative operators
- No recursion for non-associative operators
Option #2: example
\[ \text{E ::= E + T | T} \]
\[ \text{T ::= T * F | F} \]
\[ \text{F ::= id | ( E )} \]
Parse: \( w + x + y * z \)
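Worked out, this grammar forces the grouping ((w + x) + (y * z)): the left recursion in E makes + left-associative, and * binds tighter because it lives one level down, at T (the parentheses below only indicate grouping):
```
E ⇒ E + T ⇒ (E + T) + T ⇒ (T + T) + T
  ⇒* (w + x) + T ⇒ (w + x) + T * F ⇒* (w + x) + (y * z)
```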
Option #3: New language
- Require parens
- E.g., in APL all exprs evaluated left-to-right unless parenthesized
- Forbid parens
- E.g.: RPN calculators
Designing a grammar:
- Accuracy
- Readability, clarity
- Unambiguity
- Limitations of CFGs
- Similarity to desired AST structure
- Ability to be parsed by a particular parsing algorithm
- Top-down parser => LL(k) grammar
- Bottom-up parser => LR(k) grammar
Parsing algorithms
- Given input (sequence of tokens) and grammar, how do we find an AST that represents the structure of the input with respect to that grammar?
- Two basic kinds of algorithms
- Top-down: expand from grammar's start symbol until a legal program is produced
- Bottom-up: create sub-trees that are merged into larger sub-trees, finally leading to the start symbol
Top-down parsing
- Build AST from top (start symbol) to leaves (terminals)
- Represents a leftmost derivation (e.g., always expand leftmost non-terminal)
- Basic issue: when replacing a non-terminal with a right-hand side (rhs), which rhs should you use?
- Basic solution: Look at next input tokens
Predictive parser
- A top-down parser that can select the correct rhs looking at the next k tokens (lookahead)
- Efficient
- No backtracking is needed
- Linear time to parse
- Implementation
- Table-driven: pushdown automaton (PDA) — like table-driven FSA plus stack for recursive FSA calls
- Recursive-descent parser [used in PL/0]
- Each non-terminal parsed by a procedure
- Call other procedures to parse sub-non-terminals, recursively
LL(k), LR(k), …?
- These parsers have generally snazzy names
- The simpler ones look like the ones in the title of this slide
- The first L means “process tokens left to right”
- The second letter means
- produce a (Right / Left)most derivation
- Leftmost => top-down
- Rightmost => bottom-up
- The k means “k tokens of lookahead”
- We won’t discuss LALR(k), SLR, and lots more parsing algorithms
LL(k) grammars
- It’s easy to construct a predictive parser if a grammar is LL(k)
- Left-to-right scan on input,
- Leftmost derivation, k tokens of lookahead
- Restrictions include
- Unambiguous
- No common prefixes of length ≥ k
- No left recursion
- … (more details later)…
- Collectively, the restrictions guarantee that, given k input tokens, one can always select the correct rhs to expand
Eliminating common prefixes
- Left factor them, creating a new non-terminal for the common prefix and/or different suffixes
- Before
  - If ::= if Test then Stats end
  - | if Test then Stats else Stats end
- After
  - If ::= if Test then Stats IfCont
  - IfCont ::= end | else Stats end
- Grammar is a bit uglier
- Easy to do manually in a recursive-descent parser
Eliminating left recursion:
Before
\[
E ::= E + T \mid T \\
T ::= T * F \mid F \\
F ::= id \mid ( E )
\]
After
\[
E ::= T \; ECont \\
ECont ::= + \; T \; ECont \mid \varepsilon \\
T ::= F \; TCont \\
TCont ::= * \; F \; TCont \mid \varepsilon \\
F ::= id \mid ( E )
\]
Just add sugar
\[
E ::= T \; ( + T ) \\
T ::= F \; ( * F ) \\
F ::= id \mid ( E )
\]
- Sugared form is still pretty readable
- Easy to implement in hand-written recursive descent parser
- Concrete syntax tree is not as close to abstract syntax tree
LL(1) Parsing Theory
Goal: Formal, rigorous description of those grammars for which “I can figure out how to do a top-down parse by looking ahead just one token”, plus corresponding algorithms.
Notation:
- T = Set of Terminals (Tokens)
- N = Set of Nonterminals
- $ = End-of-file character (T-like, but not in N ∪ T)
Table-driven predictive parser
- Automatically compute PREDICT table from grammar
- PREDICT(nonterminal,input-symbol)
→ action: which rhs to expand, or error
Example 1
```
Stmt  ::= (1) if Expr then Stmt else Stmt
        | (2) while Expr do Stmt
        | (3) begin Stmts end
Stmts ::= (4) Stmt ; Stmts
        | (5) ε
Expr  ::= (6) id
```
<table>
<thead>
<tr>
<th></th>
<th>if</th>
<th>else</th>
<th>while</th>
<th>do</th>
<th>begin</th>
<th>end</th>
<th>id</th>
<th>;</th>
</tr>
</thead>
<tbody>
<tr>
<td>Stmt</td>
<td>1</td>
<td></td>
<td>2</td>
<td></td>
<td>3</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Stmts</td>
<td>4</td>
<td></td>
<td>4</td>
<td></td>
<td>4</td>
<td>5</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Expr</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>6</td>
<td></td>
</tr>
</tbody>
</table>
Example 1 (cont.)
<table>
<thead>
<tr>
<th></th>
<th>Rule</th>
<th>FIRST(rhs)</th>
<th>FOLLOW(lhs)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Stmt ::= if Expr then Stmt else Stmt</td>
<td>if</td>
<td>else, ;, $</td>
</tr>
<tr>
<td>2</td>
<td>Stmt ::= while Expr do Stmt</td>
<td>while</td>
<td>else, ;, $</td>
</tr>
<tr>
<td>3</td>
<td>Stmt ::= begin Stmts end</td>
<td>begin</td>
<td>else, ;, $</td>
</tr>
<tr>
<td>4</td>
<td>Stmts ::= Stmt ; Stmts</td>
<td>if, while, begin</td>
<td>end</td>
</tr>
<tr>
<td>5</td>
<td>Stmts ::= ε</td>
<td>ε</td>
<td>end</td>
</tr>
<tr>
<td>6</td>
<td>Expr ::= id</td>
<td>id</td>
<td>then, do</td>
</tr>
</tbody>
</table>
LL(1) Parsing Algorithm
```
push S $                        /* S is the start symbol */
while Stack not empty
    X := pop(Stack)
    a := peek at next token     /* assume EOF token = $ */
    if X is a terminal or $
        if X = a, read token a, else abort
    else                        /* X is a nonterminal */
        case PREDICT(X, a) of
            empty        : abort
            rule X → α   : push α   /* leftmost symbol of α on top */
if not at end of input, abort
```
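A minimal C++ sketch of this driver loop, assuming the PREDICT table has already been built; `Symbol`, `Predict`, and the string-based token encoding are illustrative choices, not the PL/0 project's actual API:
```cpp
// Sketch of the table-driven LL(1) driver; PREDICT maps (nonterminal,
// lookahead token) to the rhs to push. All names here are illustrative.
#include <map>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

struct Symbol { std::string name; bool terminal; };
using Rhs = std::vector<Symbol>;                              // right-hand side
using Predict = std::map<std::pair<std::string, std::string>, Rhs>;

// `tokens` must end with the end-of-input marker "$".
void ParseLL1(const Predict& predict, const Symbol& start,
              const std::vector<std::string>& tokens) {
  std::vector<Symbol> stack;                                  // top = back()
  stack.push_back({"$", true});
  stack.push_back(start);
  size_t pos = 0;
  while (!stack.empty()) {
    Symbol X = stack.back();
    stack.pop_back();
    const std::string& a = tokens[pos];                       // peek next token
    if (X.terminal) {                                         // includes "$"
      if (X.name != a) throw std::runtime_error("expected " + X.name);
      ++pos;                                                  // read token a
    } else {                                                  // X is a nonterminal
      auto it = predict.find({X.name, a});
      if (it == predict.end())                                // empty cell: abort
        throw std::runtime_error("no rule for (" + X.name + ", " + a + ")");
      const Rhs& rhs = it->second;                            // rule X -> rhs
      for (auto r = rhs.rbegin(); r != rhs.rend(); ++r)
        stack.push_back(*r);                                  // leftmost on top
    }
  }
  if (pos != tokens.size())                                   // trailing input
    throw std::runtime_error("input continues past end of program");
}
```
An ε-rule simply pushes nothing, so the nonterminal vanishes from the stack, exactly as in the pseudocode above.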
Constructing PREDICT: overview
- Compute FIRST set for each rhs
- All tokens that can appear first in a derivation from that rhs
- In case rhs can be empty, compute FOLLOW set for each non-terminal
- All tokens that can appear right after that non-terminal in a derivation
- Constructions of FIRST and FOLLOW sets are interdependent
- PREDICT depends on both
FIRST(α) – 1st “token” from α
Definition: For any string α of terminals and non-terminals, FIRST(α) is the set of terminals that begin strings derived from α, together with ε, if α can derive ε. More precisely:
For any $\alpha \in (N \cup T)^*$,
$\text{FIRST}(\alpha) = \{ a \in T \mid \alpha \Rightarrow^* a \beta \text{ for some } \beta \in (N \cup T)^* \} \cup \{ \varepsilon, \text{ if } \alpha \Rightarrow^* \varepsilon \}$
Computing FIRST – 4 cases
1. \( \text{FIRST}(\varepsilon) = \{ \varepsilon \} \)
2. For all \( a \in T \), \( \text{FIRST}(a) = \{ a \} \)
3. For all \( A \in N \), repeat until no change
If there is a rule \( A \rightarrow \varepsilon \), add \( \varepsilon \) to FIRST(A)
For all rules \( A \rightarrow Y_1 \ldots Y_k \):
  add \( \text{FIRST}(Y_1) - \{\varepsilon\} \) to FIRST(A)
  if \( \varepsilon \in \text{FIRST}(Y_1) \), add \( \text{FIRST}(Y_2) - \{\varepsilon\} \)
  if \( \varepsilon \in \text{FIRST}(Y_1 Y_2) \), add \( \text{FIRST}(Y_3) - \{\varepsilon\} \)
  ...
  if \( \varepsilon \in \text{FIRST}(Y_1 \ldots Y_k) \) (all \( Y_i \) can derive \( \varepsilon \)), add \( \varepsilon \)
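In C++, the fixpoint above might be sketched as follows; the `Rule` representation (a rhs as a vector of symbol names, with an empty rhs for an ε-rule and `""` standing for ε inside the sets) is an assumption for illustration:
```cpp
// Sketch of the FIRST fixpoint; repeat over all rules until no set grows.
#include <map>
#include <set>
#include <string>
#include <vector>

struct Rule { std::string lhs; std::vector<std::string> rhs; };  // empty rhs = eps
using Sets = std::map<std::string, std::set<std::string>>;

Sets ComputeFirst(const std::vector<Rule>& rules,
                  const std::set<std::string>& nonterminals) {
  Sets first;
  for (bool changed = true; changed;) {          // repeat until no change
    changed = false;
    for (const Rule& r : rules) {
      std::set<std::string>& f = first[r.lhs];
      size_t before = f.size();
      bool allEps = true;                        // did Y1..Y(i-1) all derive eps?
      for (const std::string& Y : r.rhs) {
        if (!nonterminals.count(Y)) {            // terminal: FIRST(a) = {a}
          f.insert(Y);
          allEps = false;
          break;
        }
        for (const std::string& t : first[Y])    // add FIRST(Yi) - {eps}
          if (!t.empty()) f.insert(t);
        if (!first[Y].count("")) { allEps = false; break; }
      }
      if (allEps) f.insert("");                  // whole rhs can derive eps
      if (f.size() != before) changed = true;
    }
  }
  return first;
}
```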
Example 1 (cont.)
Computing FIRST for each rhs of Example 1:
- FIRST(if Expr then Stmt else Stmt) = { if }
- FIRST(while Expr do Stmt) = { while }
- FIRST(begin Stmts end) = { begin }
- FIRST(Stmt ; Stmts) = FIRST(Stmt) = { if, while, begin }
- FIRST(ε) = { ε }
- FIRST(id) = { id }
FOLLOW(B) – Next “token” after \( B \)
Definition: for any non-terminal \( B \), FOLLOW(\( B \)) is the set of terminals that can appear immediately after \( B \) in some derivation from the start symbol, together with \( \$, \) if \( B \) can be the end of such a derivation. (\$ represents “end of input”.) More precisely: For all \( B \in N \),
\[
\text{FOLLOW}(B) = \{ a \in (T \cup \{ \$ \}) \mid S\$ \Rightarrow^* \alpha \, B \, a \, \beta \\
\text{for some } \alpha, \beta \in (N \cup T \cup \{ \$ \})^* \}
\]
(\( S \) is the Start symbol of the grammar.)
PREDICT – Given lhs, which rhs?
For all rules $A \rightarrow \alpha$
- For all $a \in \text{FIRST}(\alpha) - \{\epsilon\}$
- Add $(A \rightarrow \alpha)$ to PREDICT$(A,a)$
- If $\epsilon \in \text{FIRST}(\alpha)$ then
- For all $b \in \text{FOLLOW}(A)$
- Add $(A \rightarrow \alpha)$ to PREDICT$(A,b)$
Defn: $G$ is LL(1) iff every cell has $\leq 1$ entry
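Under the same illustrative representation as the FIRST sketch earlier, FOLLOW and PREDICT could be computed along these lines (a sketch, not any project's actual code):
```cpp
// Reusing the Rule/Sets shapes of the FIRST sketch ("" = epsilon,
// "$" = end of input). The representation is illustrative.
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

struct Rule { std::string lhs; std::vector<std::string> rhs; };
using Sets = std::map<std::string, std::set<std::string>>;

// FIRST of the suffix rhs[i..], given precomputed FIRST sets.
static std::set<std::string> FirstOfSuffix(
    const std::vector<std::string>& rhs, size_t i,
    const Sets& first, const std::set<std::string>& nts) {
  std::set<std::string> out;
  for (; i < rhs.size(); ++i) {
    const std::string& Y = rhs[i];
    if (!nts.count(Y)) { out.insert(Y); return out; }   // terminal
    const std::set<std::string>& fy = first.at(Y);
    for (const std::string& t : fy)
      if (!t.empty()) out.insert(t);                    // FIRST(Y) - {eps}
    if (!fy.count("")) return out;                      // Y cannot vanish
  }
  out.insert("");                                       // whole suffix derives eps
  return out;
}

Sets ComputeFollow(const std::vector<Rule>& rules, const Sets& first,
                   const std::set<std::string>& nts, const std::string& start) {
  Sets follow;
  follow[start].insert("$");
  for (bool changed = true; changed;) {                 // fixpoint iteration
    changed = false;
    for (const Rule& r : rules)
      for (size_t i = 0; i < r.rhs.size(); ++i) {
        const std::string& B = r.rhs[i];
        if (!nts.count(B)) continue;                    // only nonterminals
        std::set<std::string>& fb = follow[B];
        size_t before = fb.size();
        std::set<std::string> rest = FirstOfSuffix(r.rhs, i + 1, first, nts);
        for (const std::string& t : rest)
          if (!t.empty()) fb.insert(t);                 // tokens right after B
        if (rest.count(""))                             // B can end this rhs
          for (const std::string& t : follow[r.lhs]) fb.insert(t);
        if (fb.size() != before) changed = true;
      }
  }
  return follow;
}

// PREDICT(A, a) collects applicable rules; LL(1) iff every cell has <= 1.
std::map<std::pair<std::string, std::string>, std::vector<const Rule*>>
BuildPredict(const std::vector<Rule>& rules, const Sets& first,
             const Sets& follow, const std::set<std::string>& nts) {
  std::map<std::pair<std::string, std::string>, std::vector<const Rule*>> predict;
  for (const Rule& r : rules) {
    std::set<std::string> f = FirstOfSuffix(r.rhs, 0, first, nts);
    for (const std::string& a : f)
      if (!a.empty()) predict[{r.lhs, a}].push_back(&r);
    if (f.count(""))                                    // eps in FIRST(rhs)
      for (const std::string& b : follow.at(r.lhs))
        predict[{r.lhs, b}].push_back(&r);
  }
  return predict;
}
```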
Properties of LL(1) Grammars
- Clearly, given a conflict-free PREDICT table ($\leq 1$ entry/cell), the parser will do something unique with every input
- Key fact is, if the table is built as above, that something is the correct thing
- i.e., the PREDICT table will reliably guide the LL(1) parsing algorithm so that it will
- Find a derivation for every string in the language
- Declare an error on every string not in the language
Exercises (1st especially recommended)
- Easy: Pick some grammar with common prefixes, left recursion, and/or ambiguity.
- Build PREDICT; it will have conflicts
- Harder: prove that every grammar with $\geq 1$ of those properties will have PREDICT conflicts
- Harder: Find a grammar with none of those features that nevertheless gives conflicts.
- i.e., absence of those features is necessary but not sufficient for a grammar to be LL(1).
- Harder, for theoryheads: decide whether the following holds: if the table has conflicts and the parser chooses among the conflicting entries nondeterministically, it will still work correctly
Example 2
$$E ::= T \; ( \; + \; T \; )$$
$$T ::= F \; ( \; * \; F \; )$$
$$F ::= - \; F \mid \text{id} \mid ( \; E \; )$$
Example 2 (cont.)
Desugared: $E ::= T\,E'$, $E' ::= + \, T\,E' \mid \varepsilon$, $T ::= F\,T'$, $T' ::= * \, F\,T' \mid \varepsilon$, $F ::= - F \mid id \mid ( E )$
<table>
<thead>
<tr>
<th></th>
<th>FIRST</th>
<th>FOLLOW</th>
</tr>
</thead>
<tbody>
<tr>
<td>$E$</td>
<td>-, id, (</td>
<td>), $</td>
</tr>
<tr>
<td>$E'$</td>
<td>+, ε</td>
<td>), $</td>
</tr>
<tr>
<td>$T$</td>
<td>-, id, (</td>
<td>+, ), $</td>
</tr>
<tr>
<td>$T'$</td>
<td>*, ε</td>
<td>+, ), $</td>
</tr>
<tr>
<td>$F$</td>
<td>-, id, (</td>
<td>*, +, ), $</td>
</tr>
</tbody>
</table>
Example 2: PREDICT
<table>
<thead>
<tr>
<th></th>
<th>id</th>
<th>-</th>
<th>(</th>
<th>)</th>
<th>+</th>
<th>*</th>
<th>$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$E$</td>
<td>T E'</td>
<td>T E'</td>
<td>T E'</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>$E'$</td>
<td></td>
<td></td>
<td></td>
<td>ε</td>
<td>+ T E'</td>
<td></td>
<td>ε</td>
</tr>
<tr>
<td>$T$</td>
<td>F T'</td>
<td>F T'</td>
<td>F T'</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>$T'$</td>
<td></td>
<td></td>
<td></td>
<td>ε</td>
<td>ε</td>
<td>* F T'</td>
<td>ε</td>
</tr>
<tr>
<td>$F$</td>
<td>id</td>
<td>- F</td>
<td>( E )</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Every cell has at most one entry, so this grammar is LL(1).
PREDICT and LL(1)
- The PREDICT table has at most one entry in each cell if and only if the grammar is LL(1).
- If every cell has at most one entry, the choice of rhs is always unique (the parser is predictive), making it fast to parse and easy to implement.
- Multiple entries in a cell:
- Arise with left recursion, ambiguity, common prefixes, etc.
- Can patch by hand, if you know what to do.
- Or use more powerful parser (LL(2), or LR(k), or...).
- Or change the grammar.
Recursive descent parsers
- Write procedure for each non-terminal.
- Each procedure selects the correct right-hand side by peeking at the input tokens.
- Then the r.h.s. is consumed.
- If it's a terminal symbol, verify it is next and then advance through the token stream.
- If it's a non-terminal, call corresponding procedure.
- Build and return AST representing the r.h.s.
Recursive descent example
```
Stat ::= if Expr then Stat else Stat
       | while Expr do Stat
       | begin Stats end
Stats ::= Stat ; Stats | ε
Expr ::= id
ParseStat() {
switch (next token) {
"begin": ParseStats(); read "end"; break;
"while": ParseExpr(); read "do"; ParseStat(); break;
"if": ParseExpr(); read "then"; ParseStat();
read "else"; ParseStat(); break;
default: abort;
}
}
```
LL(1) and Recursive Descent
- If the grammar is LL(1), it's easy to build
a recursive descent parser.
- One nonterminal (= one PREDICT row) → one procedure
- Use 1 token of lookahead to decide which rhs
- Table-driven parser's stack ↔ the recursive call stack
- Recursive descent can handle some non-LL(1) features, too.
Example
It's demo time…
Let's look at some of the PL/0 code to see how the recursive descent parsing works in practice.
```cpp
// PL/0 Code Example
// ParseStmts()
StmtArray* Parser::ParseStmts() {
StmtArray* stmts = new StmtArray; Stmt* stmt;
for (;;) {
Token* t = scanner->Peek();
switch (t->kind()) {
case IDENT: stmt = ParseIdentStmt(); break;
case OUTPUT: stmt = ParseOutputStmt(); break;
case IF: stmt = ParseIfStmt(); break;
case WHILE: stmt = ParseWhileStmt(); break;
default: return stmts; // no more stmts
}
stmts->add(stmt);
scanner->Read(SEMICOLON);
}
}
// ParseIfStmt()
Stmt* Parser::ParseIfStmt() {
scanner->Read(IF);
Expr* test = ParseTest();
scanner->Read(THEN);
StmtArray* stmts = ParseStmts();
scanner->Read(END);
return new IfStmt(test, stmts);
}
// ParseWhileStmt()
Stmt* Parser::ParseWhileStmt() {
scanner->Read(WHILE);
Expr* test = ParseTest();
scanner->Read(DO);
StmtArray* stmts = ParseStmts();
scanner->Read(END);
return new WhileStmt(test, stmts);
}
// ParseIdentStmt()
Stmt* Parser::ParseIdentStmt() {
Token* id = scanner->Read(IDENT);
if (scanner->CondRead(LPAREN)) {
ExprArray* args;
if (scanner->CondRead(RPAREN)) {
args = new ExprArray;
} else {
args = ParseExprs();
scanner->Read(RPAREN);
}
return new CallStmt(id->ident(), args);
} else {
LValue* lvalue = new VarRef(id->ident());
scanner->Read(GETS); // consume ":=" before the rhs expression
return new AssignStmt(lvalue, ParseExpr());
}
}
```
...
Parser::ParseTerm()
```cpp
Expr* Parser::ParseTerm()
{
Expr* expr = ParseFactor();
for (;;) {
Token* t = scanner->Peek();
if (t->kind() == MUL || t->kind() == DIVIDE) {
scanner->Get(); // eat the token
Expr* expr2 = ParseFactor();
expr = new BinOp(t->kind(), expr, expr2);
} else {
return expr;
}
}
}
```
**Yacc — A bottom-up-parser generator**
- "yet another compiler-compiler"
- Input:
- grammar, possibly augmented with action code
- Output:
- C code to parse it and perform actions
- LALR(1) parser generator
- practical bottom-up parser
- more powerful than LL(1)
- modern updates of yacc
- yacc++, bison, byacc, ...
**Yacc input grammar**
```
assignstmt: IDENT GETS expr ;
ifstmt: IF test THEN stmts END
      | IF test THEN stmts ELSE stmts END ;
expr: term
    | expr '+' term
    | expr '-' term ;
factor: '-' factor
      | IDENT
      | INTEGER
      | INPUT
      | '(' expr ')' ;
```
**Yacc with actions**
```
assignstmt: IDENT GETS expr { $$ = new AssignStmt($1, $3); } ;
ifstmt: IF test THEN stmts END { $$ = new IfStmt($2, $4, NULL); }
      | IF test THEN stmts ELSE stmts END { $$ = new IfStmt($2, $4, $6); } ;
expr: term { $$ = $1; }
    | expr '+' term { $$ = new BinOp(PLUS, $1, $3); }
    | expr '-' term { $$ = new BinOp(MINUS, $1, $3); } ;
factor: '-' factor { $$ = new UnOp(MINUS, $2); }
      | IDENT { $$ = new VarRef($1); }
      | INTEGER { $$ = new IntLiteral($1); }
      | INPUT { $$ = new InputExpr; }
      | '(' expr ')' { $$ = $2; } ;
```
**Parsing summary**
- Discover/impose a useful (hierarchical) structure on flat token sequence
- Represented by Abstract Syntax Tree
- Validity-check the syntax of the input
- Could build concrete syntax tree (but don’t)
- Many methods available
- Top-down: LL(1)/recursive descent common for simple, by-hand projects
- Bottom-up: LR(1)/LALR(1)/SLR(1) common for more complex projects
- parser generator (e.g., yacc) almost necessary
**Parsing summary – Technical details you should know**
- Context-free grammars
- Definitions
- Manipulations (algorithmic)
- Left factor common prefixes
- Eliminating left recursion
- Ambiguity & (semi-heuristic) fixes
- meta-rules (code/precedence tables)
- rewrite grammar
- change language
- Building a table-driven predictive parser
- LL(1) grammar: definition & common obstacles
- PREDICT(nonterminal, input symbol)
- FIRST(RHS)
- FOLLOW(nonterminal)
- Building a recursive descent parser
- Including AST
Objectives: today
- Ambiguity
- Issues in designing a grammar
- AST extensions for the 401 project
- Overview of parsing algorithms
- Motivation and details of top-down, predictive parsers
- Recursive descent parsing
- Today++: a walk through the PL/0 parser
Objectives: today
- Recap and clarify PREDICT table
- Describe computation of FIRST and FOLLOW
- And the relationship to PREDICT
- Recursive descent parsing
- High-level issues and
- (time-permitting) a walk through the PL/0 parser
AST extensions in project
- Expressions
- true and false constants
- array index expression (a[index])
- function call expression
- and and or operators
- tests are expressions
- constant expressions
- Statements
- for
- break
- return
- if with else
- Declarations
- procedures with result types
- Types
- bool
- array
Konfigurierbare Systemsoftware (KSS)
VL 1 – Einführung (Introduction)
Daniel Lohmann
Lehrstuhl für Informatik 4
Verteilte Systeme und Betriebssysteme
Friedrich-Alexander-Universität
Erlangen-Nürnberg
SS 14 – 2014-04-08
http://www4.informatik.uni-erlangen.de/Lehre/SS14/V_KSS
1.1 Commodity Operating Systems Today
1.2 Reality Check: Granularity
1.3 The Domain of Embedded Systems
1.4 About KSS
1.5 KSS — Organization
1.6 References
Commodity operating systems provide a rich set of features to be prepared for all kinds of applications and contingencies:
- Malicious or erroneous applications
- preemptive scheduling, address space separation, disk quotas
- Multi-user operation
- authentication, access validation and auditing
- Multi-threaded and interacting applications
- Threads, semaphores, pipes, sockets
- Many/large concurrently running applications
- virtual memory, swapping, working sets
One size fits all?
“Clearly, the operating system design must be strongly influenced by the type of use for which the machine is intended. Unfortunately it is often the case with ‘general purpose machines’ that the type of use cannot be easily identified; a common criticism of many systems is that in attempting to be all things to all men they wind up being totally satisfactory to no-one.”
Some applications may require only a subset of services or features that other applications need. These 'less demanding' applications should not be forced to pay for the resources consumed by unneeded features.
Parnas 1979: “Designing Software for Ease of Extension and Contraction” [8]
Variability and Granularity
**Variability**
(Definition 1)
Variability of system software is the property that denotes the *range* of functional requirements that can be fulfilled by it.
**Granularity**
(Definition 2)
Granularity of system software is the property that denotes the *resolution* of which requirements can be fulfilled by it, in the sense that requirements are fulfilled but not overfulfilled.
- Can general purpose (GP) systems fulfill these demands?
- Reality check – a small study with `printf()` from `glibc`:
(Analogy: GP operating system ←→ GP library ←→ GP function)
```c
int main() {
printf( "Hello World\n" );
}
```
Agenda
1.1 Commodity Operating Systems Today
1.2 Reality Check: Granularity
1.3 The Domain of Embedded Systems
1.4 About KSS
1.5 KSS — Organization
1.6 References
Reality Check: Granularity
The setup:
> uname -a
Linux faui48a 2.6.32-5-amd64 #1 SMP Mon Oct 3 05:45:56 UTC 2011 x86_64 GNU/Linux
> gcc -dumpversion
4.4.5
Experiment 1: printf()
> echo 'main(){printf("Hello World\n");}' | gcc -xc - -w -Os -static -o hello1
> ./hello1
Hello World
> size hello1
text data bss dec hex filename
508723 1928 7052 517703 7e647 hello1
Maybe the general-purpose printf() is just too powerful?
- supports many data types, formatting rules, ...
- implementation requires a complex parser for the format string
Let’s try the much more specialized puts()!
Experiment 2: `puts()`
```bash
> echo 'main(){puts("Hello World");}' | gcc -xc - -Os -w -static -o hello2
> ./hello2
Hello World
> size hello2
   text    data     bss     dec     hex filename
 508723    1928    7052  517703   7e647 hello2
```
- That didn’t help much!
- Maybe `puts()` is still too powerful?
- buffered IO, streams
- Let’s work directly with the OS file handle!
Experiment 3: `write()`
```bash
> echo 'main(){write(1, "Hello World\n", 13);}' | gcc -xc - -Os -w -static -o hello3
> ./hello3
Hello World
> size hello3
text data bss dec hex filename
508138 1928 7052 517118 7e3fe hello3
512 KiB!
```
- 517703 compared to 517118 – a net saving of 585 bytes (0.1%)
Experiment 4: empty program
```bash
> echo 'main(){}' | gcc -xc - -Os -w -static -o hello4
> size hello4
```
```bash
> objdump -D --reloc hello4 | grep printf | wc -l
```
still yields 2611 matches!
It’s the startup code!
Experiment 5: `write()`, no startup code
```
> echo '_start(){write(1, "Hello World\n", 13);_exit(0);}' | gcc -xc - -Os -w -static -nostartfiles -o hello5
> size hello5
text data bss dec hex filename
597 0 4 601 259 hello5
> ./hello5
Segmentation fault
```
- Even a simple `write()` cannot be issued without the complete initialization.
- Last resort: invoke the syscall directly!
Experiment 6: `SYS_write()`
```
> echo '_start(){syscall(4, 1, "Hello World\n", 13);_exit(0);}' | gcc -xc - -Os -w -static -nostartfiles -o hello6
> size hello6
text data bss dec hex filename
293 0 4 297 129 hello6
> ./hello6
Hello World
```
0.5 KiB :-)
but segfault :-(
On Linux/glibc, a simple “Hello World” application takes 1750 times more memory than necessary!
However, is this a problem?
- The glibc has been designed for a “standard case”
- Large, multithreaded, IO-intensive UNIX application
- Assumption: every program uses malloc(), printf(), ...
- Variability has been traded for Granularity
Every Program?
“I know of no feature that is always needed. When we say that two functions are almost always used together, we should remember that ‘almost’ is a euphemism for ‘not’.”
Parnas 1979: “Designing Software for Ease of Extension and Contraction” [8]
Assumption: The GP operating system will compensate for it...
- Virtual memory ~ memory is not an issue
(but is that a reason to waste it?)
- Shared libraries ~ memory is actually shared between processes
(unless we relocate the symbols, e.g., for address-space randomization...)
What about other domains?
1.1 Commodity Operating Systems Today
1.2 Reality Check: Granularity
1.3 The Domain of Embedded Systems
1.4 About KSS
1.5 KSS — Organization
1.6 References
A Different Domain: Embedded Systems
CPU/DSP sales in 2002 [13]
## The ATmega µC Family (8-Bit)
<table>
<thead>
<tr>
<th>Type</th>
<th>Flash</th>
<th>SRAM</th>
<th>IO</th>
<th>Timer 8/16</th>
<th>UART</th>
<th>I²C</th>
<th>AD</th>
<th>Price (€)</th>
</tr>
</thead>
<tbody>
<tr>
<td>ATTINY11</td>
<td>1 KiB</td>
<td>6</td>
<td>6</td>
<td>1/-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>0.31</td>
</tr>
<tr>
<td>ATTINY13</td>
<td>1 KiB</td>
<td>64 B</td>
<td>6</td>
<td>1/-</td>
<td>-</td>
<td>-</td>
<td>4*10</td>
<td>0.66</td>
</tr>
<tr>
<td>ATTINY2313</td>
<td>2 KiB</td>
<td>128 B</td>
<td>18</td>
<td>1/1</td>
<td>1</td>
<td>1</td>
<td>-</td>
<td>1.06</td>
</tr>
<tr>
<td>ATMEGA4820</td>
<td>4 KiB</td>
<td>512 B</td>
<td>23</td>
<td>2/1</td>
<td>2</td>
<td>1</td>
<td>6*10</td>
<td>1.26</td>
</tr>
<tr>
<td>ATMEGA8515</td>
<td>8 KiB</td>
<td>512 B</td>
<td>35</td>
<td>1/1</td>
<td>1</td>
<td>-</td>
<td>-</td>
<td>2.04</td>
</tr>
<tr>
<td>ATMEGA8535</td>
<td>8 KiB</td>
<td>512 B</td>
<td>32</td>
<td>2/1</td>
<td>1</td>
<td>1</td>
<td>-</td>
<td>2.67</td>
</tr>
<tr>
<td>ATMEGA169</td>
<td>16 KiB</td>
<td>1024 B</td>
<td>54</td>
<td>2/1</td>
<td>1</td>
<td>1</td>
<td>8*10</td>
<td>4.03</td>
</tr>
<tr>
<td>ATMEGA64</td>
<td>64 KiB</td>
<td>4096 B</td>
<td>53</td>
<td>2/2</td>
<td>2</td>
<td>1</td>
<td>8*10</td>
<td>5.60</td>
</tr>
<tr>
<td>ATMEGA128</td>
<td>128 KiB</td>
<td>4096 B</td>
<td>53</td>
<td>2/2</td>
<td>2</td>
<td>1</td>
<td>8*10</td>
<td>7.91</td>
</tr>
</tbody>
</table>
Bulk prices and features of ATmega variants (excerpt, DigiKey 2006)
### Limited Resources
- Flash is limited, RAM is extremely limited
- A **few bytes** can have a **massive impact on per-unit costs**
- The “glibc approach” is **doomed to fail!**
The Role of the Operating System
(a) Types of operating systems (n = 1200)
- Commercial: 36.30%
- Proprietary (developed internally): 15.10%
- Open source (commercially distributed): 8.40%
- Open source: 11.50%
- No operating system: 28.70%
(b) Why no operating system? (multiple answers possible)
- Too complicated: 7%
- Too expensive: 10%
- Resource concerns: 30%
- Do not need one: 85%
Operating systems (not) employed in embedded-system projects in 2006 [12]
> 40% of all projects use “in house” OS functionality!
- Wide-spread fear of the resource overhead of GP operating systems
- OS functionality is developed “side-by-side” with the applications
- This leads to very high “hidden” development costs [14]
The rest spreads over **hundreds of different** operating systems!
- C{51, 166, 251}, CiAO, CMX RTOS, Contiki, C-Smart/Raven, eCos, eRTOS, Embos, Ercos, Euros Plus, FreeRTOS, Hi Ross, HyNet-OS, LynxOS, MicroX/OS-II, Nucleus, OS-9, OSE, OSEK {Flex, Turbo, Plus}, OSEKtime, Precise/MQX, Precise/RTCS, proOSEK, pSOS, PURE, PXROS, QNX, Realos, RTMOSxx, Real Time Architect, RTA, RTX{51, 166, 251}, RTXC, Softune, SSXS RTOS, ThreadX, TinyOS, Tresos, VRTX, VxWorks, ...
⇝ The “glibc approach” (one size fits all) **does not work**!
Between a Rock and a Hard Place...
- Applications impose **functional and nonfunctional requirements** on the system software: tasks, sockets, file system, ..., event latency, safety, ...
- The hardware offers **functional and nonfunctional properties**: ISA, IRQ handling, MMU / MPU, ..., cache size, coherence, IRQ latency, ...
- High variety of functional and nonfunctional application requirements
- High variety of hardware platforms
- High per-unit cost pressure
→ System software has to be **tailored** for each concrete application
Customizing or tailoring is the activity of modifying existing system software in order to fulfill the requirements of some particular application.
This calls for *granularity* and *variability*!
Agenda
1.1 Commodity Operating Systems Today
1.2 Reality Check: Granularity
1.3 The Domain of Embedded Systems
1.4 About KSS
1.5 KSS — Organization
1.6 References
What to do?
297 $\leftrightarrow$ 517703 Bytes!
Why?
On Linux/glibc, a simple “Hello World” application takes 1750 times more memory than necessary!
- Reason: software structure
  - Trade-off between reuse ↔ coupling
  - ⇝ by extensive internal reuse, glibc has become an all-or-nothing blob
- Reason: software interface
  - The C standard defines printf() as a swiss army knife
  - ⇝ printf() has become a “god method”
- Reason: language and tool chain
  - Compiler/linker work on the granularity of symbols or even object files
  - ⇝ dead code is not effectively eliminated
Konfigurierbare Systemsoftware – KSS
Throughout the software development cycle, variability and granularity have to be considered as primary design goals from the very beginning!
In KSS you will learn about principles, methods, and tools to achieve this.
Individually Developed Software Product
Individually developed software products are designed to address one specific problem with one specific solution: the system developer works on the intentional side (the specific problem with its intended properties $f_1, f_2, \ldots$), while the system user receives the extensional side (the specific solution, i.e., the variant that is the actual implementation).
Software Product Derived from Reusable Assets
On the model level, an architect/developer captures architecture and implementation as reusable assets (classes, aspects). On the instance level, the specific problem (system developer, intended properties $f_1, f_2, \ldots$) is mapped to a specific solution: the variant delivered to the system user.
Configurable Software – Software Product Line
In a software product line, a domain expert models the problem space (features $f_1 \ldots f_7$ and their dependencies), while an architect/developer builds the solution space (architecture and implementation). A system user states a specific problem (intended properties) and receives a specific solution (the actual implementation), derived as one of the variants A, B, C, D.
Agenda
1.1 Commodity Operating Systems Today
1.2 Reality Check: Granularity
1.3 The Domain of Embedded Systems
1.4 About KSS
1.5 KSS — Organization
Objectives
Einordnung (context)
Semesterplanung (semester schedule)
1.6 References
Learning Objectives
- **Improve** your understanding of the design and development of low-level system software
- Starting point: “Betriebssysteme” [BS]
- Focus: Static configuration and tailoring
- **Expand** your knowledge by new software engineering methods and language techniques for configurable system software
- Software families and software product lines [7]
- Aspect-oriented and generative programming in C/C++ [10]
- **Apply** these techniques in the context of current operating-system research projects
- CiAO, SLOTH, VAMOS, DanceOS [2, 5, 9, 11]
- Get prepared for a master thesis or project in the field!
KSS = Konfigurierbare Systemsoftware: software engineering (variability, product lines, aspects, generators) meets operating systems (eCos [6], CiAO [5], Sloth [2], Linux [11]).
KSS – Einordnung (module context)
KSS (2.5 ECTS) belongs to the module Systemsoftwaretechnik (SST, 7.5 ECTS) together with BST (5 ECTS), building on Systemprogrammierung (10 ECTS) and related modules such as BS, EZS, MW, and VS (5–7.5 ECTS each) and the projects EZS2 and PASST (10 ECTS each).
Organization: Systemsoftwaretechnik (SST)
Module Systemsoftwaretechnik (SST), 7.5 ECTS:
1. Lecture Betriebssystemtechnik (BST), 2.5 ECTS
- Mon 12–14
- 12–14 lecture dates
2. Exercises for Betriebssystemtechnik (BST-Ü), 2.5 ECTS
- Tue 10–12
- 12–14 exercise dates / lab sessions
3. Lecture and exercises Konfigurierbare Systemsoftware (KSS), 2.5 ECTS
- Thu 14–16 (lecture)
- 7 lecture dates, 1 exercise assignment, 1 project
- Exercises integrated into the BST exercises / lab sessions
→ KSS can only be taken together with BST!
- There are no 2.5-ECTS modules...
- If there is enough demand, KSS will be extended to 5 ECTS
Organization: People
Lecture
- Daniel Lohmann
Exercises
- Daniel Danner
- Gabor Drescher
Project
- Daniel Danner
- Martin Hoffmann
- Jens Schedel
- ?
Semesterplanung (semester schedule)
<table>
<thead>
<tr>
<th>KW</th>
<th>Dates</th>
<th>Events</th>
</tr>
</thead>
<tbody>
<tr><td>15</td><td>07.04.–11.04.</td><td>Mon: BST VL1; Tue: KSS VL1</td></tr>
<tr><td>16</td><td>14.04.–18.04.</td><td>Mon: BST VL2; Tue: TÜ BST A1; Fri: Easter break</td></tr>
<tr><td>17</td><td>21.04.–25.04.</td><td>Mon: Easter; KSS VL3</td></tr>
<tr><td>18</td><td>28.04.–02.05.</td><td>Mon: BST VL3; Tue: TÜ KSS A1; Thu: 1 May (holiday)</td></tr>
<tr><td>19</td><td>05.05.–09.05.</td><td>Mon: BST VL4</td></tr>
<tr><td>20</td><td>12.05.–16.05.</td><td>Mon: BST VL5; Tue: TÜ BST A2; Wed: BST A1 due; Thu: KSS VL5</td></tr>
<tr><td>21</td><td>19.05.–23.05.</td><td>Mon: BST VL6</td></tr>
<tr><td>22</td><td>26.05.–30.05.</td><td>Mon: BST VL7, KSS A1 due; Thu: Ascension (holiday)</td></tr>
<tr><td>23</td><td>02.06.–06.06.</td><td>Mon: BST VL8; Tue: TÜ BST A3; Fri: Anstich (Bergkirchweih)</td></tr>
<tr><td>24</td><td>09.06.–13.06.</td><td>Mon: Pentecost / Berg (holiday); Thu: KSS VL7</td></tr>
<tr><td>25</td><td>16.06.–20.06.</td><td>Mon: BST VL9, BST A2 due; Thu: Corpus Christi (holiday)</td></tr>
<tr><td>26</td><td>23.06.–27.06.</td><td>Mon: BST VL10</td></tr>
<tr><td>27</td><td>30.06.–04.07.</td><td>Mon: BST VL11</td></tr>
<tr><td>28</td><td>07.07.–11.07.</td><td>Mon: BST VL12, BST A3 due</td></tr>
</tbody>
</table>
**Themen** (topics)
- **BST VL1**: Organization and Introduction
- **KSS VL1**: Introduction, Motivation and Concept
- **KSS VL2**: Software Families and Software Product Lines
- **BST VL2**: System Calls (Systemaufruf)
- **KSS VL3**: Aspect-Oriented Programming, AspectC++
- **BST VL3**: Operating-System Architecture (Betriebssystemarchitektur)
- **BST VL4**: Hierarchies (Hierarchien)
- **KSS VL4**: Aspect-Aware Design, CiAO
- **BST VL5**: Address-Space Management (Adressraumverwaltung)
- **KSS VL5**: Variability in the Large, VAMOS
- **BST VL6**: Address-Space Models (Adressraummodelle)
- **KSS VL6**: Generative Programming, Sloth
- **BST VL7**: Language-Based Systems (Sprachbasierung)
- **BST VL8**: Inter-Process Communication (Interprozesskommunikation)
- **KSS VL7**: Conclusion, Summary
- **BST VL9**: Communication Abstractions (Kommunikationsabstraktionen)
- **BST VL10**: Sharing (Mitbenutzung)
- **BST VL11**: Linking Loader (Bindelader)
- **BST VL12**: Review (Nachlese)
http://www4.informatik.uni-erlangen.de/Lehre/SS14/V_KSS
References
Master Thesis
Improving the Efficiency of Fuzz Testing Using Checkpointing
Author(s):
Zachow, Ernst-Friedrich
Publication Date:
2014
Permanent Link:
https://doi.org/10.3929/ethz-a-010144446
Rights / License:
In Copyright - Non-Commercial Use Permitted
Improving the Efficiency of Fuzz Testing Using Checkpointing
Master Thesis
E. Zachow
April 1, 2014
Advisors: Prof. Dr. D. Basin, Dr. M. Torabi Dashti, P.Tsankov, M. Guarnieri
Department of Computer Science, ETH Zürich
Abstract
Fuzz testing, applied to protocols, attempts to provoke misbehaviour of the programs running those protocols by tampering with the messages that are being exchanged. In general, there are too many ways to manipulate protocol runs, so we cannot try them all. Instead, we just try to run as many test cases as we possibly can. For each test case we waste time by rerunning the protocol from the start up to the state where we can make our modification.
We want to reduce this overhead by using a checkpointing tool to take snapshots of the relevant states so we can simply restore the state of a protocol rather than performing a whole new protocol run. We present SNAPINTERCEPT, a tool that uses a novel approach involving snapshots to increase the efficiency of fuzz testing on protocols. We also provide some preliminary empirical results showing the benefits of our approach.
## Contents
1. Introduction
2. Background
   - 2.1 Fuzz Testing
   - 2.2 Checkpointing
3. Requirements
4. Tool Architecture
   - 4.1 Components
   - 4.2 Relay
   - 4.3 Message Hub and Fuzzer
   - 4.4 Oracle, Central Controller and Control Modules
5. Experiments
6. Challenges
7. Conclusion
Bibliography
Introduction
In this thesis we focus on fuzz-testing network protocols. This can be done by randomly changing the protocol’s messages and examining the resulting system behaviour. Since protocols are usually stateful, we have to exercise the system ideally in all possible states. For instance, according to the specification of the Internet Key Exchange protocol (IKE) [3], the nodes must encrypt their keep-alive messages with the session key. Therefore, to test whether IKE handles malformed keep-alive messages, the protocol nodes must have completed the key-exchange phase of the protocol.
The effectiveness of fuzz-testing depends on the number of test cases we can run. Due to the random nature of fuzz-testing, the number of tests required to thoroughly test a system is generally large. Empirical research [5] suggests that we need to run half a million test cases after we have found the last bug before we can stop fuzz testing.
Even for simple stateful protocols like IKE, executing a single test case can take one second on average [6]. Most of this time is spent on taking the protocol nodes to the desired state. For instance, to test the keep-alive message of IKE, we have to perform the expensive key-exchange phase each time.
Our goal is to improve the effectiveness of fuzz-testing by removing the overhead caused by taking the nodes to a desired state. Our hypothesis is that we can do this by using checkpointing, which is a technique for saving and restoring program states. Our approach is as follows. First we run the protocol and take a snapshot of all protocol nodes before a message is delivered to a node. Then, to run a test we restore the desired snapshot, fuzz the pending message, and deliver it. Note that we may run the protocol multiple times to acquire more snapshots.
To illustrate the advantages of our approach, consider testing IKE’s keep-alive messages. We take a snapshot of IKE’s nodes just before the delivery of a keep-alive message. For each test, we just need to restore the snapshot, modify the keep-alive message, deliver it, and observe whether a failure
1. Introduction
has occurred. In this way, we replace the cost of taking the protocol to the desired state with the cost of restoring the snapshot. If the latter is faster than the former, then our approach improves the efficiency of fuzz testing. We remark that for complex security protocols, this is usually the case.
Contributions: We propose a novel approach to fuzz-testing that leverages the capabilities of checkpointing tools. The idea is to intercept all protocol messages on the transport layer and store snapshots of the whole system just before we fuzz a message. In this way we can restore the system to that state again and skip rerunning the whole protocol.
As a proof of concept, we present SNAPINTERCEPT, a fuzz-testing tool for network protocols that uses DMTCP [2], a state-of-the-art checkpointing tool. Our tool provides the mechanisms for intercepting and injecting messages as well as taking snapshots. To use our tool, the testers must provide fuzz-operators and the test oracle.
Outline: We provide a short background on fuzz testing and checkpointing in Chapter 2. The requirements for our new tool SNAPINTERCEPT are presented in Chapter 3. We explain the architecture of SNAPINTERCEPT in Chapter 4 and measure its performance in comparison with the conventional method of protocol reruns in Chapter 5. We briefly report on the development process of SNAPINTERCEPT in Chapter 6 before we summarize our findings in Chapter 7 and discuss possible future directions and development options for our new tool.
2.1 Fuzz Testing
Fuzz testing means testing software by introducing changes to its inputs in different states of the program [4]. These changes are called fuzz operators, and a proper selection of fuzz operators can increase the likelihood of a successful fuzz-testing outcome. Fuzz operators can be selected in various ways, ranging from a uniformly random choice to more strategic approaches based on previous results or on a classification of the operators.
We focus on fuzz-testing network protocols. This requires that we can intercept the transmitted messages of a protocol, tamper with the content that is relevant to the protocol under test, and inject the modified messages back into the system in place of the originals. Depending on the protocol, there are more or less elaborate ways to tamper with a message: for some test case we may just flip individual bits and see whether the protocol nodes misbehave, or we may spend time understanding the different fields and the ranges of values they can take and change the message based on those fields, for example by swapping two numbers that are part of some message or by randomizing them.
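As an illustration only, since the thesis leaves the concrete fuzz operators to the tester, a bit-flip operator of the kind mentioned above might look like this in C++ (`Message` and `FlipRandomBit` are hypothetical names):
```cpp
// A minimal sketch of a bit-flip fuzz operator on a raw message buffer.
#include <cstdint>
#include <random>
#include <vector>

using Message = std::vector<uint8_t>;

// Flip one uniformly chosen bit of an intercepted message.
Message FlipRandomBit(Message msg, std::mt19937& rng) {
  if (msg.empty()) return msg;
  std::uniform_int_distribution<size_t> pick(0, msg.size() * 8 - 1);
  size_t bit = pick(rng);
  msg[bit / 8] ^= static_cast<uint8_t>(1u << (bit % 8));  // toggle the bit
  return msg;
}
```
Richer operators, such as swapping or randomizing protocol fields, would follow the same message-in, message-out shape.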
2.2 Checkpointing
Application checkpointing is the idea of freezing a program in some state and storing all relevant information about that frozen state on the hard drive, so that execution can later be resumed at that particular state. The motivation for checkpointing varies: a user may wish to stop execution of a program and simply resume it later, or migrate it to a different host system in between. Checkpointing is a very active area of development, and in the recent past multiple changes have been introduced to the Linux kernel so that snapshot tools are better supported.
2. Background
Current problems that are also relevant for our tool concern the various types of connections between systems and how to checkpoint these connections in a reasonable way. Consider, for example, a stateless connection between two systems where we try to checkpoint one end of the connection but not the other. It is also an issue in general to run a monitor like Valgrind attached to a program we try to checkpoint. Nevertheless, we want to discuss an application to fuzz testing in this document and implement a new tool on top of an existing snapshot tool.
There are different checkpointing tools that try to implement a way to take snapshots of a program’s state with all relevant information such as file descriptors and open connections. We do not want to restrict our approach to one specific tool, but we have to use one for the implementation of our SnapIntercept tool. Because of their different approach and usability, we will note two different checkpointing tools here that are worth looking into. For the SnapIntercept tool, we decided to use DMTCP:
**CRIU** Checkpoint/Restore In Userspace [1] is a tool that allows the user to take a snapshot of a running application and later restore it. The developers have recently committed a number of changes to the mainline Linux kernel.
**DMTCP** Distributed MultiThreaded Checkpointing [2] provides a framework with a central coordinator. Programs can be launched within that framework and become attached to the coordinator by communicating over a dedicated DMTCP port. DMTCP can keep track of resources that each program tries to access because it gets attached to a program upon launch and the coordinator provides a way to coordinate the snapshot and restore process. For a snapshot, every program gets stored in a separate file. These checkpoint files can be either manually restored, which is what we do for our SnapIntercept tool, or we can run a script that the coordinator automatically generates to restore the whole system.
This chapter defines some general requirements for a new tool like SnapIntercept.
We can immediately formulate the first two requirements based on the core functionality we desire. We want to be able to intercept messages in such a way that we can apply fuzz operators to them and take snapshots of every involved program at exactly the state after a message has been sent by a protocol node, but before it has been delivered to another protocol node.
**Requirement 1** The system has to provide a method for intercepting and injecting messages.
Snapshots are defined as the content of the intercepted message and the state of the involved programs at the exact point in time where the read/write access to that message is gained.
**Requirement 2** We must be able to take snapshots of protocol nodes.
Furthermore, we want our system to be applicable to a variety of protocols and snapshot tools rather than just one. The best we can do is to require the existence of an interface that provides read/write access to the intercepted message without imposing restrictions on the developer of the code that fuzzes the message (from here on called the fuzzer). For example, the developer should not be required to use a specific programming language or be restricted to a stateless fuzzer. It follows that the fuzzer cannot be part of the snapshot, because the developer may decide to keep track of the test cases already tried, which is impossible if we roll the fuzzer back to a previous state.
**Requirement 3** The system has to be applicable to a wide range of protocols rather than being tailored towards one specific protocol.
**Requirement 4** There must be an interface which provides access to the intercepted message for a fuzzer program whose state is not captured by the snapshot tool.
Finally, the amount of work required to implement and use our system should be justifiable in comparison to other methods. We do not want to reinvent solutions to complex issues that have already been successfully solved and do not even occur for other approaches. For example, we do not want to invent a new operating system or transport layer just for this purpose.
**Requirement 5** The implementation and usage of the new tool should require an amount of time and work that is justifiable in comparison to other methods such as restarting the protocol.
Tool Architecture
Our system consists of six main components besides the protocol nodes that we run. The setup looks as shown in Figure 4.1. We need at least one component for a minimal fuzz-testing architecture in order to intercept a connection and modify the protocol messages. Since we want to be able to intercept multiple connections at the same time, but do not want to distribute our fuzzer across machines, we need another component that receives all intercepted messages and provides read/write access to the messages on the same machine, which we will call the tool-machine.
The user of our tool has to provide the fuzz operators and the test oracle. Since we want to decouple SnapIntercept from protocol-specific functions, we declare both the fuzzer and the oracle to be separate components.
The final two components coordinate the snapshot-taking process. The tool-machine needs a controller that can notify DMTCP at the right moment to take a snapshot and also contact every other machine in the network to perform local operations like storing snapshot files or restoring a certain snapshot. The controller must be a separate component because it must control DMTCP without being affected by it. The last component is the program running on the regular machines that receives commands from the controller.
There are two modes of operation. During snapshot mode, we perform at least one (depending on the protocol, possibly multiple) clean protocol runs without message modifications and take snapshots of the different states after messages have been sent, but directly before fuzz operators are applied to them. In testing mode we have already acquired the snapshots we are interested in and possibly performed preprocessing operations on the pending messages that will be fuzzed upon snapshot restoration. In this mode we keep running test cases, where one test case is the restoration of some snapshot followed by execution of the whole component system until we decide to end the test case.
![Diagram of Tool Architecture]
Figure 4.1: Basic setup and communication layout (DMTCP architecture simplified)
4.1 Components
Here is a brief description of all components and their tasks; we discuss them in more detail below. Aside from the protocol nodes, only the relays and control modules do not run on the tool-machine. Again, the basic setup is shown in Figure 4.1.
- **Relay:** intercept messages, send them to the tool-machine, inject the response; see Section 4.2
- **Message Hub:** receive messages from relays on the tool-machine, provide message access to the fuzzer, send messages back to the correct relays; see Section 4.3
- **Fuzzer:** apply fuzz operators to the message provided by the message hub; see Section 4.3
- **Oracle:** decide whether a test passes or fails; see Section 4.4
- **Central Controller:** coordinate the snapshot/restore process by utilizing DMTCP and commanding the control modules on other machines; see Section 4.4
- **Control Module:** execute commands received from the central controller on the local machine; see Section 4.4
As we will see in the next sections, we take snapshots of the application programs as well as the relays and the message hub. The fuzzer, oracle and control programs keep running and are not affected by snapshot or restore processes. The fuzzer and oracle need to be tailored to the specific protocol we want to test: there is no general set of fuzz operators that applies to every protocol in existence, and there is also no standard way to decide when to end a test case, or whether it has passed or failed, that is a reasonable choice for all protocols. This means that the fuzzer and oracle have to be provided by the tester and require some interface in order to interact with the rest of the tool.
4.2 Relay
According to Requirement 3, we should be able to intercept messages for several different protocols. Right away we can see a conflict with Requirement 5: if we want our system to handle both TCP and UDP, we need to intercept messages on the network layer, which means, for example, that we must reimplement large portions of the TCP protocol in order to avoid fragmentation of messages. For this reason it makes sense to relax Requirement 3 so that it only applies to protocols that operate above the transport layer, and to focus on one protocol of the transport layer and the layers below. For our tool SNAPINTERCEPT we have decided to focus on TCP. Another good choice based on protocol popularity would be UDP, but we have chosen DMTCP as our snapshot tool, which currently does not support UDP.
TCP addresses programs by their port number and the IP address of the machine they run on, and we will exploit this. For every TCP connection, one end has to be the server and listen on an open port for incoming connections, and the other end is the client and connects to that port. We implement a relay that runs in between both ends.
If we run the relay on the same machine as the application listening on the server port, there are two ways to achieve this. One way is to configure the client to connect to the relay port and have the relay connect to its own machine, localhost, on the real server port. The other way is to configure the server to run on a different port and have the relay listen on the default server port; this way an unmodified client will connect to the relay, and the relay forwards the connection in the same way to the server. Every message from one end is relayed to the other end and vice versa. Figure 4.2 shows the two possible setups for our relay in comparison to the usual setup without a relay. Since every message passes through the relay, which we have implemented ourselves, we can intercept the message and tamper with it in any way we like before we forward it to the intended recipient.
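To make the idea concrete, the following is a minimal single-connection relay sketch in C for the second setup from Figure 4.2. The ports and buffer size are illustrative, error handling is omitted for brevity, and the hand-off of each message to the message hub is only indicated by a comment; SnapIntercept's actual relay is more involved.

```c
/* Minimal single-connection TCP relay sketch (ports are illustrative). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define LISTEN_PORT 2525   /* default server port x, now owned by the relay */
#define SERVER_PORT 2526   /* port y, where the real server has been moved  */

int main(void) {
    /* Listen where the unmodified client expects the server. */
    int lsock = socket(AF_INET, SOCK_STREAM, 0), opt = 1;
    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(LISTEN_PORT);
    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 1);
    int client = accept(lsock, NULL, NULL);

    /* Connect to the relocated server on localhost. */
    int server = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(SERVER_PORT);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
    connect(server, (struct sockaddr *)&srv, sizeof(srv));

    char buf[4096];
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(client, &fds);
        FD_SET(server, &fds);
        int maxfd = client > server ? client : server;
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        int from = FD_ISSET(client, &fds) ? client : server;
        int to   = from == client ? server : client;
        ssize_t n = read(from, buf, sizeof(buf));
        if (n <= 0) break;              /* one end closed the connection */
        /* The real relay would forward buf to the message hub here and
         * inject the (possibly modified) message it sends back instead. */
        write(to, buf, n);
    }
    return 0;
}
```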
![Diagram of client-server setup with relay]
Figure 4.2: Relay setup, port x is the default server port, y is a new unused port
4.3 Message Hub and Fuzzer
Relays intercept protocol messages, so we could technically implement our fuzz operators at the relay. However, this would require distributing the fuzzing operation to every node where a relay is running, which may not work for stateful fuzzing strategies that require tedious synchronization among the nodes.
We have required that the fuzzer implementation not be burdened with unnecessary constraints, so it makes sense to perform fuzzing in a centralized way. It follows that every relay has to send the intercepted messages to some dedicated machine, where a program called the message hub receives all messages, provides access for the fuzzer, and sends the modified messages back to the correct relays. Refer to Figure 4.3 for the setup of the message hub and how every intercepted message is passed along to the fuzzer and back.
Note that we have separated the message hub from the fuzzer. This is not just a design choice to clearly separate the fuzzer and oracle, which are supplied by the tester, from the rest of the architecture. It is necessary, because we have required the fuzzer to be unconstrained by the snapshot procedure, while the message hub’s state must be stored in every snapshot: the intercepted messages travel along open connections, and with the current state of snapshot tools, taking snapshots of only one end of an open connection is a serious source of errors. It is even questionable whether a generally applicable solution exists for snapshotting only one connection end, so we avoid this scenario altogether.
However, we need to extract the message from the message hub to make it accessible to the fuzzer. Instead of using a connection, we simply write the message to a file and later read the fuzzed message from another file. The message file, written by the message hub, is part of the snapshot for that state, so we can perform preprocessing such as decryption after we have acquired the snapshots we want. Another benefit of this approach is that we can implement the fuzzer in any programming language.
The fuzzer is only required to read the message from a file and write the modified message to another file; whatever happens in between is up to the specific implementation. The message hub only provides the intercepted message in a file and does not communicate with the fuzzer in any other way, in order to minimize unnecessary communication between programs that are part of the snapshot and those that run without interruption. The central controller tells the fuzzer via a regular TCP connection when to start fuzzing a message; the fuzzer responds once it is done with that message and then waits for the next trigger.
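A fuzzer satisfying this interface can be very small. The following C sketch is an illustration, not SnapIntercept's actual code: the file names, the trigger port, and the one-byte trigger protocol are assumptions, and the "fuzz operator" is a trivial single-byte flip.

```c
/* Fuzzer sketch for the file interface.  File names, port and trigger
 * protocol are made up; error handling is omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

#define CTRL_PORT 7000                       /* hypothetical trigger port */

int main(void) {
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(CTRL_PORT);
    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 1);
    int ctrl = accept(lsock, NULL, NULL);    /* central controller connects */

    char cmd;
    static char buf[65536];
    while (read(ctrl, &cmd, 1) == 1) {       /* one byte per trigger        */
        FILE *in = fopen("message.in", "rb");
        size_t n = fread(buf, 1, sizeof(buf), in);
        fclose(in);

        if (n > 0)
            buf[(size_t)rand() % n] ^= 0xFF; /* flip one byte of the message */

        FILE *out = fopen("message.out", "wb");
        fwrite(buf, 1, n, out);
        fclose(out);

        write(ctrl, "k", 1);                 /* done with this message       */
    }
    return 0;
}
```

Because the fuzzer lives outside the snapshot, it is free to keep state across test cases, for example a log of the byte positions already flipped.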
4.4 Oracle, Central Controller and Control Modules
As we pointed out in Section 2.2 using the example of CRIU and DMTCP, checkpointing tools can vary significantly in their approach. In practice we found that the checkpointing tool imposes the most significant restrictions on the usage of our tool, because the whole system fails if the checkpointing tool does not support every critical aspect of the program we want to run with it. At the time of writing, we found that DMTCP handles multiple restorations with open TCP connections better than CRIU, so we chose it for our SNAPINTERCEPT tool. However, we try not to exploit behaviour that is specific to DMTCP, so the same method can be applied with different checkpointing tools as well.
We have already said that SNAPINTERCEPT can be run in two different modes: one mode is used to acquire the relevant snapshots, the other for the fuzz-testing process. In snapshot mode we want to take the snapshots just after the message hub has written a message to the message file. Since the central controller coordinates the snapshot-taking process, the message hub has to signal the central controller somehow. Again, we want to avoid external connections, so we create a trigger file whose existence signals to the central controller that a new message is accessible. Later, the central controller signals the message hub in a similar way to continue execution. For more details about this approach, see the paragraph about development issues in Chapter 6.
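A minimal sketch of the controller side of this file-based signalling could look as follows; the trigger-file name and polling interval are illustrative. Deleting the file immediately after detecting it is what makes the mechanism reusable for the next message.

```c
/* Controller-side wait for the trigger file written by the message hub.
 * The file name is illustrative. */
#include <stdio.h>
#include <unistd.h>

static void wait_for_trigger(const char *path) {
    /* Poll until the trigger file exists, then delete it right away so the
     * same mechanism works for the next message.  This is only safe with a
     * single watcher: several programs deleting the same file concurrently
     * would introduce race conditions (see Chapter 6). */
    while (access(path, F_OK) != 0)
        usleep(1000);                       /* 1 ms polling interval */
    unlink(path);
}

int main(void) {
    wait_for_trigger("/tmp/snapintercept.trigger");
    puts("new message written by the hub; initiating snapshot");
    return 0;
}
```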
The central controller coordinates the checkpointing process. It contacts the DMTCP coordinator in order to invoke the DMTCP checkpoint mechanism, which creates snapshot files for each attached program on the machine it is running on. Some additional file operations are necessary to store those files in a new folder so they do not get overwritten. The central controller can store the snapshot of the message hub this way because the message hub runs on the tool-machine; the other programs run on different machines, so the controller needs to contact all control modules and command them to perform the necessary file operations on their local machine. This means that we have a distributed snapshot, and every machine holds the relevant information for its local programs. Figure 4.4 shows the sequence of events when SNAPINTERCEPT takes a snapshot.
During snapshot mode we can ignore the oracle. We could already use the fuzzer to apply fuzz operators if we want to reach states that require fuzzing, but most likely we are only interested in clean protocol runs at this point; it is up to the tester whether to use the fuzzer in this mode. During testing mode, the oracle can decide to trigger the central controller at any point in time. This signals that the current test case is over and that we restore some snapshot to start the next test case. The central controller restores the message hub in the relevant state, overwrites the message file so the correct message is accessible to the fuzzer, and contacts all control modules so they can restore the local application programs and relays and reattach them to DMTCP. The sequence of events for the snapshot restoration is shown in Figure 4.5.
![Diagram of tool architecture]
**Figure 4.5:** Restoration mode; events 3 and 4 are done in parallel
Experiments
We want to compare the performance of our approach, which uses checkpointing techniques, to the conventional method that uses protocol reruns. For that purpose we have written a simple program capable of sending mail according to the SMTP protocol, and we use a Python class that implements an SMTP server capable of running the SMTP protocol without actually sending the mail on. This provides a minimal environment in which we can reasonably compare both methods. Our primary interest is the average execution time and standard deviation of each test case, but we also provide information about the size of the snapshots.
We have mentioned that the fuzzer and oracle must be implemented for the specific protocol at hand. For our tests we are only interested in measuring the time overhead we introduce for initializing test cases, so we do not fuzz messages at all and simply copy the fuzzer’s input directly to its output. The oracle decides when a test case ends; again, since we are only interested in the initialization overhead, we end a test case immediately once we are sure that every program has been restored. For this purpose we distribute the oracle to all nodes, so we can query whether all programs are running.
For SMTP, the inputs are e-mails. For the purpose of these experiments, a test case consists of a sequence of e-mails where the last one will be fuzzed; by test case size we mean the number of e-mails in the test case. For our experiments, we used a machine with an Intel i7-4770 CPU and 32 GB of RAM running Linux. We used VirtualBox to set up our tool architecture, with two virtual machines running the protocol nodes and one dedicated tool-machine for the central components of the tool.
We measure the time required to run test cases of different sizes using protocol reruns and SNAPINTERCEPT; the results are given in Figure 5.1. On average it takes 4.33 ms to run a test case of size 1 using protocol reruns, and this time increases linearly with the size of the test case; for example, it takes 110.34 ms to execute a test case of size 30.
Table 5.1: Disk Usage of Snapshot Files
<table>
<thead>
<tr>
<th>Program</th>
<th>Snapshot Size (bytes)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Message Hub</td>
<td>25,927,680</td>
</tr>
<tr>
<td>Relay</td>
<td>30,253,056 - 32,342,016</td>
</tr>
<tr>
<td>SMTP Client</td>
<td>27,459,584</td>
</tr>
<tr>
<td>SMTP Server</td>
<td>26,554,368 - 26,562,560</td>
</tr>
</tbody>
</table>
Figure 5.1: Time comparison of SMTP mail transfers (blue) and SnapIntercept performance (orange)
Using SnapIntercept, it takes 102.14 ms to run a test case. This time does not depend on the size of the test case, because we send only the last (fuzzed) e-mail in the test case. In the case of SMTP, SnapIntercept therefore improves the efficiency of fuzz testing for test cases larger than size 30. We expect the benefits of SnapIntercept to be more significant for more complex protocols with expensive initialization phases; for instance, in order to test IKE keep-alive messages, it takes one second to reach the required state.
For the snapshot sizes, it is worth pointing out that the measured disk usage is constant to the byte across snapshots of each program, with two exceptions. The relay shows a drop in size of about two megabytes for the last snapshot only; this is likely because SMTP exchanges ‘Bye’ messages as the final two protocol messages, and one end has already torn down the connection before we intercept the final ‘Bye’. The other change in size occurs for the SMTP server after it has received the actual mail text content; the size is constant before and after that point.
Challenges
This part of the thesis is not crucial to the understanding of SnapIntercept or its architecture and performance, but reports some of the issues and findings that occurred during development.
**Virtual Machine Approach:** First of all, if someone implements a tool running on a network of virtual machines that utilizes snapshots, and the performance of the snapshot mechanism built into the virtualization software is satisfactory, then a simpler approach is possible: just take snapshots of the whole virtual machines. However, if we care about speed, we can likely do better with a tool that checkpoints individual programs, since we then do not need to restore a whole operating system. If we do not want to implement our own checkpointing tool, such existing tools are currently the only option.
We also implemented another tool, with functionality related to SnapIntercept, that takes snapshots of whole virtual machines instead of using a program-level checkpointing tool. The main challenge here is to send a signal from within the virtual environment to the host system, since the host system controls the virtualization tool (VirtualBox in this case) and can take or restore snapshots.
To solve this, we can integrate one of the host’s network interfaces into a network with all virtual machines and use UDP, which is stateless, to notify the host system from within the virtual environment that it can take or restore a snapshot. The notification has to be stateless so that we do not break any connection during snapshot restoration. The rest of the architecture can be implemented similarly to SnapIntercept.
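The guest-side notification can then be a single connectionless datagram. A sketch follows; the host-only network address, port, and message strings are assumptions for illustration.

```c
/* Guest-side notification: a single stateless UDP datagram to the host, so
 * no connection state is broken when the VM snapshot is restored.  The
 * address and port are assumptions for this sketch. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void notify_host(const char *msg) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in host = {0};
    host.sin_family = AF_INET;
    host.sin_port = htons(9999);                        /* host listener   */
    inet_pton(AF_INET, "192.168.56.1", &host.sin_addr); /* host-only iface */
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&host, sizeof(host));
    close(s);
}

int main(void) {
    notify_host("snapshot");    /* or "restore", interpreted by the host */
    return 0;
}
```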
The performance of this approach depends on the configuration of the virtual environment, such as its memory size, but even for a reasonably small memory size of 512 MB we found that restoring a virtual machine snapshot takes VirtualBox multiple seconds, which is too long to be practical for our purpose.
**Development issues:** During development of SNAPINTERCEPT, the first serious issue was the communication between the programs that are subject to checkpointing and those that are not. As mentioned before, we could not use stateless communication like UDP to bridge this gap because DMTCP currently does not support it. Simply using the existence of a file to notify the central controller that a new message is ready for fuzzing seemed to be the most robust choice. However, this is only suitable as a communication channel between two programs at a time, because the trigger file has to be deleted immediately for the approach to work repeatedly; we would introduce race conditions if multiple programs changed behaviour based on the existence of the same file and deleted it concurrently.
Another problem worth mentioning is that killing and restarting processes with open connections on some port should be done with care. If the parent of a child process has not collected the child’s return value, the child lingers in the system as a zombie even after it has been killed, and we cannot reuse the same port; restarting the same program from its snapshot multiple times will then fail, and it is not immediately obvious why.
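Two standard POSIX counter-measures, sketched below, are to reap terminated children so they do not remain as zombies, and to mark listening sockets with SO_REUSEADDR so a restarted program can rebind its port; the port number is illustrative.

```c
/* Sketch of the two usual counter-measures: reap dead children so they do
 * not linger as zombies, and set SO_REUSEADDR so a restarted program can
 * rebind its port immediately. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>

/* Collect the return values of all terminated children (non-blocking),
 * so killed processes do not remain as zombies holding resources. */
static void reap_children(void) {
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

/* Create a listening socket that can be rebound right after a restart. */
static int reusable_listen_socket(unsigned short port) {
    int s = socket(AF_INET, SOCK_STREAM, 0), opt = 1;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    listen(s, 8);
    return s;
}

int main(void) {
    int s = reusable_listen_socket(2525);   /* illustrative port */
    reap_children();
    return s < 0;
}
```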
**Checkpointing constraints:** Both snapshot tools mentioned in this paper (DMTCP [2] and CRIU [1]) are under active development. Some of the minor tasks in SNAPINTERCEPT could be implemented more simply now because of very recent updates; for example, the first version of DMTCP that we used did not allow us to name snapshot files but assigned long, unintuitive names that changed for every program. Restarting a program therefore required a workaround to obtain the file name, which is no longer necessary with the current version.
A more serious problem occurred when DMTCP would not let us restart a snapshot with superuser privileges, which was necessary for our experiments. This would have had a serious impact on our ability to run experiments, but fortunately this issue was solved with an update.
In general, our approach is mostly constrained by what the checkpointing tool supports. DMTCP already supports a lot of programming and scripting languages, but in practice we could not successfully checkpoint several protocol nodes. We tried different protocols like IKE and telnet, but ran into issues, especially when checkpointing the daemon. We also tried SSH, which is supported by DMTCP, but felt that it does not appropriately demonstrate the approach we want to show, because SSH is treated as a special case in DMTCP and its checkpointing is handled slightly differently. This means that the first thing someone should try, after verifying that a protocol runs over TCP, is to checkpoint just the protocol nodes before doing the whole setup with SNAPINTERCEPT. If the protocol nodes are supported by DMTCP, then we only need to configure the port where the relay listens and the port to which the relay forwards the traffic; the tool’s internal ports and file names can remain the same.
Conclusion
We proposed a novel approach to fuzz-testing that uses checkpointing techniques to decrease the time overhead caused by restarting protocols to reach a specific state of interest. Furthermore, we presented SNAPINTERCEPT, a new tool that implements this novel approach.
Our tool is capable of intercepting messages sent over TCP connections while avoiding low-level networking problems like fragmentation. It can take a snapshot of the programs involved in the execution of a protocol in a controlled way, so that an intercepted message has not yet reached its intended recipient and can immediately be subjected to fuzz-testing techniques if we choose to restore that particular snapshot later. This eliminates the need for protocol reruns from the start. We also performed a preliminary evaluation of the benefits of our tool using the SMTP protocol.
**Future Work:** First and foremost, we need to apply SNAPINTERCEPT to more protocols with deeper state spaces than SMTP and measure the benefits of our tool over the conventional method of protocol reruns.
We have utilized DMTCP for the snapshot and restore process, so the system is constrained to implementations of the protocol that are supported by DMTCP. We can extend SNAPINTERCEPT to support other checkpointing tools such as CRIU.
For the DMTCP implementation, it may be worth investigating the impact of configuration options such as state compression, as well as which monitors (if any) are supported and how they affect performance.
Another obvious way to improve SNAPINTERCEPT is to adapt the system to support message interception for UDP as well. In principle, we only need to adjust the relay’s two connection points to the application ends and can leave the more internal parts of the system as they are.
Before we run SNAPINTERCEPT on large networks of machines, we can add more parallelism, especially for the central controller’s connections to the individual nodes, which are currently handled in sequence.
There may be applications besides fuzz testing for our tool wherever we want to examine a sequence of messages with the ability to continue execution of a system from the moment a message was sent. After we have obtained our snapshots, we can simply inspect, and even change, the message that was sent last. This may be used for network-oriented debugging of programs or for reverse engineering of protocols.
Finally, the idea of improving the effectiveness of testing using checkpointing goes beyond fuzz testing and can be applied to any dynamic analysis. We plan to generalize our approach and investigate whether we can reduce the time required to run general test cases.
Code Optimization and Scaling of the Astrophysics Software Gadget on Intel Xeon Phi
P. Borovska*, D. Ivanova*

National Centre for Supercomputing Applications, Bulgaria
Abstract
The whitepaper reports our investigation into the porting, optimization and subsequent performance of the astrophysics software package GADGET on the Intel Xeon Phi. The GADGET code is intended for cosmological N-body/SPH simulations to solve a wide range of astrophysical tasks. The test cases within the project were simulations of galaxy systems. A performance analysis of the code was carried out, and porting, tuning and scaling of the GADGET code were completed. As a result, a hybrid MPI/OpenMP parallelization of the code has been enabled, and scalability tests on the Intel Xeon Phi processors of the PRACE EURORA system are reported.
Keywords: Gadget Software, Galaxy Data Set, Hybrid MPI/OpenMP Parallelization, High-Performance Computing, Scalability Tests, Intel Xeon Phi
1. Introduction
Cosmic phenomena involve complex interactions between physical processes on a wide range of scales. Simulations are of great importance in this area of research. The development of such models is supported by rapid advances in computer technology, which provide a greater dynamic range of investigation in simulation models.
The software code GADGET is freely available and widely used for cosmological N-body/SPH simulations to solve a wide range of astrophysical tasks: colliding and merging galaxies, the formation of large-scale structure in space, the dynamics of the gaseous intergalactic medium, star formation and its regulation, etc. The test case within the project will be simulations of galaxy systems.
GADGET was written by Volker Springel to make it possible to run cosmological N-body/SPH simulations on massively parallel computers with distributed memory.
GADGET uses an explicit communication model and is parallelized via the MPI communication interface. The cosmological code includes a tree-code module, a communication scheme for gravitational and SPH forces, a domain decomposition strategy, a novel smooth particle hydrodynamics (SPH) formulation based on entropy as an independent variable, and, in addition, the “TreePM” functionality.
The objective of this project is to optimize the multithreading of GADGET, to develop a hybrid MPI/OpenMP parallelization of the code so that it can exploit new hybrid computing platforms, and to perform scalability testing of the code on the Intel Xeon Phi processors of the EURORA system. This work is part of the PRACE-1IP WP7 Extension project.
* Corresponding author. E-mail address: pborovska@tu-sofia.bg
* Corresponding author. E-mail address: d_ivanova@tu-sofia.bg
2. GADGET Code Structure
The main structure of the GADGET software package is the TreeSPH code (Hernquist & Katz 1989) [2], where gravitational interactions are calculated by a hierarchical multipole expansion and gas dynamics is followed with an SPH computation. Gas and dark matter are represented by particles in this method; collisions between the particles are not taken into account [1].
The Particle-Mesh (PM) method (e.g. Klypin & Shandarin 1983; White, Frenk & Davis 1983) is the fastest scheme for calculating gravitational forces, but for scales below one or two cells the forces are strongly suppressed; as a result, this technique is not suitable for data requiring high spatial resolution. The resolution can be increased significantly by adding direct force calculations over short distances (Hockney & Eastwood 1981; Efstathiou et al. 1985), or by using additional adaptive Fourier meshes placed on areas of particular interest (Couchman 1991). The cell mesh can also be adaptively refined, with the potential found in real space using relaxation methods (Kravtsov, Klypin & Khokhlov 1997; Knebe, Green & Binney 2001) [2, 3, 4].
The hierarchical tree algorithm (Appel 1985; Barnes & Hut 1986; Dehnen 2000) is another method and imposes no intrinsic limit on resolution, but for mass distributions containing large low-density regions it can be considerably slower than the methods based on the Fourier transform. The hybrid TreePM algorithm tries to combine the advantages of both methods by using tree calculations only at short distances and computing long-range forces with the PM algorithm [5, 6].
In the GADGET2 code, both dark matter and the gaseous fluid are represented by particles, allowing the two components to be treated by the N-body method. The basic method GADGET uses to achieve spatial adaptivity is the hierarchical multipole expansion, often called a tree algorithm. This method groups distant particles into ever larger cells, so that their gravity can be accounted for by a single multipole force. Instead of the N-1 per-particle interactions required by direct summation, the gravitational force on a particle can be calculated with only O(log N) interactions [6].

In practice, the hierarchical grouping that forms the basis of the multipole expansion is most often obtained by recursively partitioning space. In the hierarchical tree algorithm, a cubic root node covering the entire problem volume is repeatedly subdivided into eight child nodes, each with sides half the length of the parent, until “leaf” nodes containing only a single particle are reached. Forces are calculated by walking the tree and deciding whether the multipole expansion of a node provides a sufficiently accurate force (which usually happens when nodes are small and far away). If so, the multipole force is used and the walk along that branch terminates; if not, the child nodes are examined in turn, applying the same check to each. Note that the result of the tree algorithm is only approximately equal to the exact result; however, the error can easily be reduced to acceptable levels by tightening the node-opening criterion, since greater accuracy is achieved by walking the tree to greater depth.
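For illustration, such a tree walk can be sketched as a simplified, monopole-only Barnes-Hut traversal in C; GADGET's actual opening criterion and multipole handling are more elaborate, and the gravitational constant is set to 1 here.

```c
/* Simplified Barnes-Hut tree walk with an opening-angle criterion
 * (monopole only); illustrative, not GADGET's actual code. */
#include <math.h>

#define THETA 0.5               /* opening angle: smaller = more accurate */

typedef struct Node {
    double mass;                /* total mass of the node                 */
    double com[3];              /* centre of mass                         */
    double size;                /* side length of the cubic node          */
    struct Node *child[8];      /* NULL for empty children                */
    int is_leaf;                /* leaf nodes hold a single particle      */
} Node;

/* Accumulate the acceleration at position pos[] due to the tree (G = 1). */
void tree_force(const Node *n, const double pos[3], double acc[3]) {
    if (!n || n->mass == 0.0) return;
    double d[3], r2 = 0.0;
    for (int k = 0; k < 3; k++) { d[k] = n->com[k] - pos[k]; r2 += d[k]*d[k]; }
    double r = sqrt(r2);
    if (r == 0.0) return;                  /* skip self-interaction        */

    if (n->is_leaf || n->size / r < THETA) {
        /* Node is small and distant enough: use its monopole force.      */
        double f = n->mass / (r2 * r);
        for (int k = 0; k < 3; k++) acc[k] += f * d[k];
    } else {
        /* Otherwise open the node and recurse into its eight children.   */
        for (int k = 0; k < 8; k++) tree_force(n->child[k], pos, acc);
    }
}
```

Tightening THETA opens more nodes and walks the tree deeper, trading speed for accuracy, which is exactly the error control described above.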
3. GADGET Parallelization Strategy
The GADGET software package is a massively parallel simulation code. It uses MPI communication instructions. The code is written in the C programming language and uses the GSL and FFTW libraries, which are open source. As a result, this software package can be used on a wide variety of UNIX based machines, without having to use special features of proprietary compilers.
The GADGET software package uses a domain decomposition scheme ensuring that the computed forces do not depend on the number of processors used, a property not guaranteed by the orthogonal bisection commonly used for domain decomposition; the scheme also addresses other shortcomings of bisection. It uses the space-filling fractal Peano-Hilbert curve to turn three-dimensional space into a one-dimensional curve, which is then simply divided into pieces that define the different domains. This scheme has several advantages, such as the fact that points that are close on the one-dimensional curve are usually also close in three-dimensional space [6].
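To illustrate the idea of mapping 3D positions onto a 1D key, the sketch below uses the simpler Morton (Z-order) curve, which interleaves the coordinate bits; GADGET itself uses the Peano-Hilbert curve, which has better locality but is more involved to compute.

```c
/* Space-filling-curve key sketch.  For brevity this computes a Morton
 * (Z-order) key; GADGET itself uses the Peano-Hilbert curve. */
#include <stdint.h>

uint64_t sfc_key(uint32_t x, uint32_t y, uint32_t z) {
    uint64_t key = 0;
    /* Interleave 21 bits of each coordinate into a 63-bit key. */
    for (int bit = 0; bit < 21; bit++) {
        key |= ((uint64_t)((x >> bit) & 1)) << (3 * bit + 2);
        key |= ((uint64_t)((y >> bit) & 1)) << (3 * bit + 1);
        key |= ((uint64_t)((z >> bit) & 1)) << (3 * bit);
    }
    return key;
}
/* Sorting particles by sfc_key() and cutting the sorted sequence into
 * pieces yields the one-dimensional domain decomposition. */
```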
4. Dynamic Analysis of the GADGET Code
A performance analysis of the code was necessary to determine the functions that needed to be optimized for the Intel Xeon Phi. The software package chosen for the dynamic analysis of the GADGET code was the Tuning and Analysis Utilities (TAU) profiling tool [10]. TAU is a software package for the analysis of high-performance parallel and distributed computer systems, providing a set of tools for static and dynamic analysis of programs written in C, C++, Fortran 77/90, High Performance Fortran and Java.
TAU’s implementation focuses on the requirements of the DOE ACTS toolkit and ASCI computer platforms. This forces TAU to face the challenge of working with constantly evolving language standards, large software frameworks and many different machine platforms, as well as run-time compiler libraries. As a result, TAU provides one of the most portable and robust systems for the analysis of parallel scientific applications that currently exist. The heart of TAU is its profiling and tracing environment, an integrated set of tools for performance measurement and analysis of parallel multithreaded programs. TAU supports an integrated analysis framework that can be extended through its modular component design and through open formats and programs for coupling with other tools; the package can be retargeted to new languages, development environments and system contexts, and extended with new analysis features [10].
The dynamic analysis of the GADGET code with TAU requires some changes to the GADGET Makefile. The first step is to replace the compiler with the appropriate TAU shell script; for the GADGET code, which is entirely written in C with MPI for inter-processor communication, we used tau_cc.sh instead of the standard mpicc compiler. Compiling GADGET with TAU requires setting two Linux environment variables. The first, TAU_MAKEFILE, usually points to a file in the lib subdirectory of the TAU installation directory whose name begins with “Makefile.” followed by the libraries included in the build; this file contains the paths to the dynamic libraries needed to compile applications with TAU. The second, TAU_OPTIONS, sets the options of TAU; in our case these are -optVerbose, -optCompInst and -optMpi. The first makes the package display additional information when compiling, the second tells the package to instrument the code using the compiler, and the third indicates that the code uses MPI communication.
The galaxy data set of the GADGET code is a simulation of a collision of two galaxies. To obtain the best dynamic analysis results, the code must be well optimized for standard processors. To get highly optimized code for the galaxy data set, the code was compiled with the following options:
- #OPT += -DPERIODIC
- OPT += -DUNEQUALSOFTENINGS
- #OPT += -DPMGRID=128
- OPT += -DHAVE_HDF5
- #OPT += -DDOUBLEPRECISION
- #OPT += -DDOUBLEPRECISION_FFTW
The simulation results can be visualized with the command paraprof, which invokes the GUI analyser of the TAU package.
The dynamic analysis of the GADGET code shows that the most time-consuming function is force_treeevaluate, followed by the domain_determineTopTree, peano_hilbert_order and force_treebuild_single functions. The execution time of force_treeevaluate is 1398 s, 15 times more than that of the next most time-consuming function. In order to execute the GADGET code on the EURORA supercomputer and to enable it on Intel Xeon Phi processors, we had to rewrite these parts of the code with hybrid MPI/OpenMP parallelization.
5. Experimental Framework
EURORA (EUropean many integrated cORe Architecture) is a heterogeneous cluster located at the CINECA facility in Bologna, Italy; it was used to obtain the results presented in this paper. The system is made up of 64 Intel compute nodes: half of the compute nodes comprise two eight-core Intel Xeon E5-2658 processors, the other half two eight-core Intel Xeon E5-2687W processors [9].
58 of the nodes have 16 GB of memory but only 14 GB of this can be safely allocated by the application due to system overheads. The remaining 6 nodes have 32 GB of memory. 32 of the EURORA compute nodes have two NVIDIA Tesla K20 (Kepler) GPU cards attached with the remaining 32 compute nodes having two Intel Xeon Phi 5110P co-processors instead [9].
Each Xeon Phi card contains 60 physical cores running 4 virtual threads per core giving access to a total of 240 threads per card. For more details on the hardware specifications please see [8, 9].
For the scalability testing on the Intel Xeon Phi processors, the galaxy data set of the GADGET software package is used.
6. Hybrid Code Implementation on Intel Xeon Phi
The original GADGET code is written in standard ANSI C and uses an explicit communication model, parallelized by the MPI communication interface. The code execution required some external libraries [6, 7, 8]:
- GSL is a library providing a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total, with an extensive test suite.
```bash
CC=mpicc CFLAGS="-O3 -mmic -mt_mpi" ./configure --host=x86_64 --prefix=/gpfs/scratch/userexternal/pborovsk/install/gsl_mic
make
make install
```
- ZLIB is a software library used for data compression. Zlib compressed data is typically written with a gzip or a zlib wrapper. The wrapper encapsulates the raw DEFLATE data by adding a header and trailer. This provides stream identification and error detection that are not provided by the raw DEFLATE data.
```bash
CC=mpicc CFLAGS="-O3 -mmic -mt_mpi" ./configure --prefix=/gpfs/scratch/userexternal/pborovsk/install/zlib_mic
make
```
- SZIP is compression software providing lossless compression of scientific data; it is distributed with HDF software products.
```bash
CC=mpiicc CFLAGS="-O3 -mmic -mt_mpi" CXX=mpiicpc CXXFLAGS="-O3 -mmic -mt_mpi" ./configure --host=x86_64 --prefix=/gpfs/scratch/userexternal/pborovsk/install/szip_mic
make
make install
```
- HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of data types, and is designed for flexible and efficient I/O and for high-volume, complex data. HDF5 is portable and extensible, allowing applications to evolve in their use of HDF5.
- FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
```bash
CC=mpiicc CFLAGS="-O3 -mmic -mt_mpi" ./configure --host=x86_64 --disable-fortran --enable-mpi --enable-type-prefix --prefix=/gpfs/scratch/userexternal/pborovsk/install/fftw_mic
make
make install
make clean
CC=mpiicc CFLAGS="-O3 -mmic -mt_mpi" ./configure --host=x86_64 --disable-fortran --enable-mpi --enable-type-prefix --enable-float --prefix=/gpfs/scratch/userexternal/pborovsk/install/fftw_mic
make
make install
```
The dynamic analysis of the GADGET code identified the most time-consuming function, `force_treeevaluate`. A code review shows that the function receives an integer variable called `target` and calculates the gravitational force on the particle with index `target`; depending on the variable `mode`, the particle is either local to the CPU or located in the communication buffer (the function is called twice from `gravity_tree`: once with mode 0, in which the calculations are performed for particles local to the processor, and then with mode 1, in which the calculations are performed for particles in the communication buffer). The GADGET code implementation on the Intel MIC requires hybrid granularity of parallelization.
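The following self-contained sketch illustrates this hybrid granularity; it is not GADGET's actual source. The force routine is replaced by a dummy stand-in, and the particle distribution across MPI tasks is illustrative.

```c
/* Illustrative sketch (not GADGET's actual code) of hybrid granularity:
 * MPI distributes particles across tasks, and within each task OpenMP
 * threads share the loop over local targets, since the per-particle tree
 * walks are independent.  Compile with:  mpicc -fopenmp hybrid.c */
#include <mpi.h>
#include <stdio.h>

#define NUMPART 100000

/* Stand-in for GADGET's force_treeevaluate(target, mode); mode 0 means
 * the particle is local to this task. */
static double force_treeevaluate(int target, int mode) {
    return (double)target * (mode + 1);   /* dummy work */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local_sum = 0.0;
    /* Each MPI task owns a strided slice of the particles; OpenMP threads
     * share the loop, with dynamic scheduling since tree-walk costs vary. */
    #pragma omp parallel for reduction(+:local_sum) schedule(dynamic, 64)
    for (int target = rank; target < NUMPART; target += size)
        local_sum += force_treeevaluate(target, 0);

    double total;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("checksum: %g\n", total);
    MPI_Finalize();
    return 0;
}
```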
7. Scalability Testing
The goal of this project is to optimize the multithreading of the GADGET software by developing a hybrid MPI/OpenMP parallelization of the code, and to perform scalability testing of the code on the Intel Xeon Phi processors of the EURORA system.
The executions are performed on the Intel MIC using different MPI/OpenMP configurations with a maximum of 240 threads per MIC in native mode. The results are shown in Table 1.
<table>
<thead>
<tr>
<th>Number</th>
<th>MIC</th>
<th>Total thread number per MIC</th>
<th>MPI tasks per MIC</th>
<th>Threads per MPI per MIC</th>
<th>Test (Data Set)</th>
<th>Total execution time [s]</th>
<th>Speed up</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>60</td>
<td>1</td>
<td>60</td>
<td>galaxy</td>
<td>1750</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>120</td>
<td>1</td>
<td>120</td>
<td>galaxy</td>
<td>1490</td>
<td>1.17</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>180</td>
<td>1</td>
<td>180</td>
<td>galaxy</td>
<td>1450</td>
<td>1.21</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>240</td>
<td>1</td>
<td>240</td>
<td>galaxy</td>
<td>1490</td>
<td>1.17</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>60</td>
<td>2</td>
<td>30</td>
<td>galaxy</td>
<td>1420</td>
<td>1.00</td>
</tr>
<tr>
<td>6</td>
<td>1</td>
<td>120</td>
<td>2</td>
<td>60</td>
<td>galaxy</td>
<td>1190</td>
<td>1.19</td>
</tr>
<tr>
<td>7</td>
<td>1</td>
<td>180</td>
<td>2</td>
<td>90</td>
<td>galaxy</td>
<td>1160</td>
<td>1.22</td>
</tr>
<tr>
<td>8</td>
<td>1</td>
<td>240</td>
<td>2</td>
<td>120</td>
<td>galaxy</td>
<td>1200</td>
<td>1.18</td>
</tr>
<tr>
<td>9</td>
<td>1</td>
<td>60</td>
<td>3</td>
<td>20</td>
<td>galaxy</td>
<td>1570</td>
<td>1.00</td>
</tr>
</tbody>
</table>
Table 1: Execution times in [s] for the galaxy data set of the GADGET software package with varying numbers of MPI tasks and threads per Intel MIC
Table 1 (continued):

<table>
<thead>
<tr>
<th>Number</th>
<th>MIC</th>
<th>Total thread number per MIC</th>
<th>MPI tasks per MIC</th>
<th>Threads per MPI per MIC</th>
<th>Test (Data Set)</th>
<th>Total execution time [s]</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>1</td>
<td>120</td>
<td>3</td>
<td>40</td>
<td>galaxy</td>
<td>1300</td>
</tr>
<tr>
<td>11</td>
<td>1</td>
<td>180</td>
<td>3</td>
<td>60</td>
<td>galaxy</td>
<td>1200</td>
</tr>
<tr>
<td>12</td>
<td>1</td>
<td>240</td>
<td>3</td>
<td>80</td>
<td>galaxy</td>
<td>1300</td>
</tr>
<tr>
<td>13</td>
<td>1</td>
<td>60</td>
<td>4</td>
<td>15</td>
<td>galaxy</td>
<td>1690</td>
</tr>
<tr>
<td>14</td>
<td>1</td>
<td>120</td>
<td>4</td>
<td>30</td>
<td>galaxy</td>
<td>1390</td>
</tr>
<tr>
<td>15</td>
<td>1</td>
<td>180</td>
<td>4</td>
<td>45</td>
<td>galaxy</td>
<td>1360</td>
</tr>
<tr>
<td>16</td>
<td>1</td>
<td>240</td>
<td>4</td>
<td>60</td>
<td>galaxy</td>
<td>1410</td>
</tr>
<tr>
<td>17</td>
<td>1</td>
<td>56</td>
<td>8</td>
<td>7</td>
<td>galaxy</td>
<td>1700</td>
</tr>
<tr>
<td>18</td>
<td>1</td>
<td>120</td>
<td>8</td>
<td>15</td>
<td>galaxy</td>
<td>1410</td>
</tr>
<tr>
<td>19</td>
<td>1</td>
<td>176</td>
<td>8</td>
<td>22</td>
<td>galaxy</td>
<td>1380</td>
</tr>
<tr>
<td>20</td>
<td>1</td>
<td>240</td>
<td>8</td>
<td>30</td>
<td>galaxy</td>
<td>1360</td>
</tr>
<tr>
<td>21</td>
<td>1</td>
<td>48</td>
<td>16</td>
<td>3</td>
<td>galaxy</td>
<td>1640</td>
</tr>
<tr>
<td>22</td>
<td>1</td>
<td>112</td>
<td>16</td>
<td>7</td>
<td>galaxy</td>
<td>1360</td>
</tr>
<tr>
<td>23</td>
<td>1</td>
<td>176</td>
<td>16</td>
<td>11</td>
<td>galaxy</td>
<td>1310</td>
</tr>
<tr>
<td>24</td>
<td>1</td>
<td>240</td>
<td>16</td>
<td>15</td>
<td>galaxy</td>
<td>1240</td>
</tr>
<tr>
<td>25</td>
<td>2</td>
<td>60</td>
<td>2</td>
<td>30</td>
<td>galaxy</td>
<td>1390</td>
</tr>
<tr>
<td>26</td>
<td>2</td>
<td>120</td>
<td>2</td>
<td>60</td>
<td>galaxy</td>
<td>1140</td>
</tr>
<tr>
<td>27</td>
<td>2</td>
<td>180</td>
<td>2</td>
<td>90</td>
<td>galaxy</td>
<td>1060</td>
</tr>
<tr>
<td>28</td>
<td>2</td>
<td>240</td>
<td>2</td>
<td>120</td>
<td>galaxy</td>
<td>1010</td>
</tr>
</tbody>
</table>
In all the experimental tests, scatter thread affinity is used. Scatter affinity means that threads are placed on cores in a round-robin fashion, continuing until all threads are mapped to a core. This results in a balanced number of threads on each core, which largely avoids the situation in which one or a few cores are working while the rest are idle.
Experiments have been carried out using the hybrid parallel implementation of the GADGET code, with the galaxy data set as experimental data. The objective of the experiments was to measure the parallel performance of the application for cosmological N-body/SPH simulations.
The speedup is evaluated as the ratio of the execution time with 60 threads to the execution time with 120, 180 and 240 threads per MIC, for each number of MPI processes respectively. The experimental results for the speedup of the GADGET code on Intel MICs using various numbers of threads and MPI processes are shown in Table 1 and Figure 4.
In all the tests above, the GADGET code was compiled with optimization level O3, which for MIC accelerators enables the compiler’s auto-vectorization. Better results might be achieved with manual adjustments to the code for better vectorization, but the amount of work and time required, compared to the possible gain in speedup, did not warrant the attempt.
The speedup results are shown and illustrated in Table 1 and Figures 4 and 5.
The hybrid MPI/OpenMP implementations on multiple MICs achieve a reduction in execution time. The speedup results show that the parallel system scales on multiple co-processors; scaling is best for configurations of up to 120 threads. The experimental part on the Intel Xeon Phi is part of project 2010PA1487, Type C, within the PRACE-1IP WP7 Extension.
8. Conclusion and Future Work
The hybrid MPI/OpenMP parallelization of the GADGET code for cosmological N-body/SPH simulations has been developed and investigated. It is best suited to running in homogeneous native Xeon Phi MPI mode with a combination of MPI tasks and threads. Porting the developed hybrid parallel GADGET code is very straightforward, as Intel supports a wide range of programming models, including the widely used MPI and OpenMP, as well as Intel-specific models such as TBB and Cilk Plus. Future work is to provide additional tests in offload mode.
Performance metrics such as execution time and speedup have been measured. The performance measurements for the galaxy data set of the GADGET code show that the hybrid parallel implementation, utilizing MPI and OpenMP, scales as the number of cores increases. Future work is to demonstrate the experimental results of the GADGET code for different data sets on the Intel Xeon Phi in order to provide a complete scalability analysis.
The hybrid GADGET code can be applied to other similar research projects and experiments in the field of cosmological N-body/SPH simulations and will allow researchers to conduct their experiments on even more powerful supercomputers, performing cosmological simulations with very large amounts of data.
References
Acknowledgements
This work was financially supported by the PRACE project funded in part by the EU's 7th Framework Programme (FP7/2007-2013) under grant agreement no. RI-261557. The project was realized using the EURORA System at CINECA, Italy.
HorsePower: Accelerating Database Queries for Advanced Data Analytics
Hanfeng Chen, Joseph V. D’silva, Laurie Hendren, Bettina Kemme
McGill University
hanfeng.chen@mail.mcgill.ca, joseph.dsilva@cs.mcgill.ca, hendren@cs.mcgill.ca, kemme@cs.mcgill.ca
ABSTRACT
The rising popularity of data science has resulted in a challenging interplay between traditional declarative queries and numerical computations on the data. In this paper, we present and evaluate the advanced analytical system HorsePower, which is able to combine and optimize both programming styles in a holistic manner. It can execute traditional SQL-based database queries, programs written in the statistical language MATLAB, as well as a mix of both by supporting user-defined functions within database queries. HorsePower exploits HorseIR, an array-based intermediate representation (IR) to which source programs are translated, allowing it to combine query optimization and compiler optimization techniques at an intermediate level of abstraction.
1 INTRODUCTION
Complex data analytics has become the cornerstone of our data-driven society. Although the amount of data stored in traditional relational database systems (DBS) has been growing rapidly, by far the most common current approach is to first take the data out of the DBS and load it into stand-alone analytical tools based on languages such as Python or the statistical languages MATLAB [1] and R [3]. However, as the size of the data increases, the expensive data movement between the DBS and the analytics tools can become a severe bottleneck. Integrating analytical capabilities into the DBS avoids such expensive data exchange. A common approach is to use user-defined functions (UDFs) that are embedded in SQL queries [13]. For example, MonetDB supports UDFs written in Python that are executed by a Python interpreter embedded inside the DBS engine. While no data transfer is needed with this approach, there are still two separate execution environments, one being the SQL execution engine, the other the programming language execution environment. This can lead to costly data format conversion. Furthermore, the SQL and the UDF components of the query are each individually optimized by their respective execution environments, without consideration of any holistic optimization across the entire task.
To address these issues, we propose HorsePower, an advanced analytical SQL system which provides a holistic solution for integrating UDFs in SQL queries. The system is based on HorseIR [5], an array-based intermediate representation (IR) language which was developed to explore the use of compiler optimizations for query execution. Chen et al. [5] translated the execution plans of standard SQL queries into HorseIR and compiled the generated HorseIR code using various compiler optimization strategies developed for array-based languages. Using arrays to represent database columns, HorseIR conceptually follows the data model of column-based DBS, which has been proven effective for data analytics tasks. HorsePower extends the idea to a full-fledged execution environment for data analytics. In addition to supporting plain SQL queries, HorsePower also supports functions written in MATLAB, a popular high-level array language widely used in statistics and engineering. HorsePower can take stand-alone functions written in MATLAB and translate them to HorseIR, or have these functions embedded in SQL queries and translate everything into a single HorseIR program, before optimizing and compiling the code in a holistic manner.
As such HorsePower avoids the overhead of inter-system data movements as it has a single execution environment, and eliminates the barriers between SQL queries and analytical functions allowing optimizations across both the declarative and functional parts of the query. The contributions of this paper are thus as follows:
• We present HorsePower, an advanced analytical system, that extends the approach proposed in [5] to not only offer a compiler-based execution environment for SQL queries, but also for programs written in the array-based language MATLAB and for SQL queries with embedded UDFs.
• HorsePower uses a holistic approach of exploiting array-based compiler optimization techniques for both SQL and MATLAB taking advantage of the conceptual similarities of columns and arrays.
• The performance of HorsePower is shown through an extensive set of experiments on programs written in MATLAB, and SQL queries with embedded UDFs.
2 BACKGROUND
2.1 HorseIR: an Array-based IR for SQL
Recent years have seen the development of modern query compilers that translate an SQL query into an intermediate representation (IR) before target code is generated from the IR, making it possible to leverage any existing code optimizations available within the IR platform.
In this context, HorseIR [5] was developed as a high-level IR specifically for database applications [7]. Being an array-based IR, it is relatively straightforward to generate basic HorseIR code following the execution plans developed by column-based DBS, as the operators executing on entire columns can be translated to functions executing on vectors in HorseIR. In fact, Chen et al. [5] took the execution plans generated by the column-based database system HyPer [11], that incorporate a wide range of traditional DBS optimizations, as the input for generating HorseIR programs.
In this regard, HorseIR provides a rich set of array-based built-in functions to which one can map the standard database operations. Moreover, the HorseIR compiler provides vital optimizations over these array-based operations. For example, loop fusion merges multiple loops into one loop, allowing for an intuitive merge of chained operations and the elimination of intermediate results.
SELECT SUM(l_price * l_discount) AS RevenueChange
FROM lineitem WHERE l_discount >= 0.05;

module ExampleQuery{
    def main():
        ...
        // assume t1, t2 are references to the l_price/l_discount columns
        t3:bool = @geq(t2, 0.05);
        t4:f64 = @compress(t3, t1);
        t5:f64 = @compress(t3, t2);
        t6:f64 = @mul(t4, t5);
        t7:f64 = @sum(t6);
        ...
}

Figure 1: Example query and its HorseIR program

FUNCTION RevChangeSclr(price, discount)
    RETURN price * discount;

SELECT SUM(RevChangeSclr(l_price, l_discount)) AS RevChange
FROM lineitem WHERE l_discount >= 0.05;

Figure 2: Rewriting the example query with a scalar UDF
Scalar and table UDFs are the most commonly employed types of UDFs and also the ones presently supported in HorsePower.
A scalar UDF returns a single value per row (which could be a vector) and can therefore essentially be used wherever a regular table column is used, such as in the SELECT or the WHERE clause of SQL queries. Figure 2 shows a scalar UDF which performs the multiplication that was originally part of the SELECT clause in Figure 1. In a column-based database system, the execution of such a query first evaluates the WHERE clause on l_discount, returning a boolean vector. Then, the database applies the corresponding boolean selection on columns l_discount and l_price, returning compressed vectors containing the rows where the boolean vector was true. These columns are then given to the UDF as arrays, and the UDF performs an element-wise multiplication on them and produces a result array. This is then the input to the SUM operator. Thus, the UDF is only called a single time and works on entire arrays.
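To make this data flow concrete, the following NumPy-style Python sketch mirrors the columnar execution just described; the column names and the UDF come from the example in Figures 1 and 2, while the data values are purely illustrative:

```python
import numpy as np

# Illustrative stand-ins for the l_price and l_discount columns of lineitem.
l_price = np.array([100.0, 250.0, 80.0, 40.0])
l_discount = np.array([0.10, 0.02, 0.05, 0.07])

def rev_change_sclr(price, discount):
    # Scalar UDF: invoked once on entire column arrays, not once per row.
    return price * discount

mask = l_discount >= 0.05                 # WHERE clause -> boolean vector
price_sel = l_price[mask]                 # boolean selection ("compress")
disc_sel = l_discount[mask]
rev_change = rev_change_sclr(price_sel, disc_sel).sum()  # UDF result feeds SUM
print(rev_change)                         # 16.8
```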
A table UDF returns a table-like data structure, and thus, is typically called within the FROM clause of an SQL statement, similar to regular database tables. For an example of a table UDF, we refer to a technical report [6].
Introducing UDFs into queries can bring performance issues. If the data types used by the two execution environments are different, this can introduce a conversion overhead when exchanging data. Further, as UDF languages are typically black-boxes to the database engine, cross-optimization attempts are minimal, resulting in sub-optimal execution plans.
3 HORSEPOWER
In this section we present HorsePower, a system designed for the code generation and optimization of HorseIR generated from (1) SQL queries, (2) MATLAB programs, and (3) SQL queries with analytical functions written in MATLAB.
3.1 SQL to HorseIR
While prior work used HyPer's execution plans [11] to translate SQL to HorseIR, HorsePower uses MonetDB's execution plans, as MonetDB supports UDFs and its execution plans contain the relevant UDF information. Our implementation first translates the tree-based plans to JSON objects that are then translated to HorseIR.
Furthermore, HorsePower supports a wider range of SQL queries than [5], which did not properly support multi-join queries. This includes all queries of the TPC-H benchmark [16].
3.2 MATLAB to HorseIR
MATLAB is a sophisticated dynamic language which provides numerous flexible language features. In order to transform MATLAB code to HorseIR, as an intermediate step,
HorsePower calls upon the McLab framework [2], which translates MATLAB programs to its own internal IR, called TameIR, handling MATLAB's many dynamic features and lack of strict typing. Type and shape information for all variables in the program is automatically derived. Furthermore, classic program analysis steps, such as constant propagation, are performed to produce optimized TameIR code [9]. TameIR can represent MATLAB's matrices and high-dimensional arrays, and currently supports an essential subset of MATLAB array operations.
HorsePower then translates TameIR code to HorseIR. So far, this translator supports a core subset of MATLAB features and built-in functions. It preserves MATLAB's pass-by-value semantics but automatically switches to pass-by-reference when it determines that the input parameters are not modified, avoiding data copies. It supports the common control structures if-else and while, with the restriction that the condition must be a single boolean element. While explicit loop iteration is not supported, MATLAB's array-based built-in functions (which have implicit loop execution) are translated in a straightforward way, as similar functions exist in HorseIR. All types supported by TameIR are also supported by HorseIR; however, due to type rule mismatches, input types for some operators are restricted (e.g., because integer + double returns integer in MATLAB, but double in HorseIR). Finally, the translator requires MATLAB arrays to have the data layout N-by-1 instead of 1-by-N, as the former is more cache-friendly in MATLAB.
3.3 SQL and UDF to HorseIR
HorsePower supports SQL queries with embedded UDFs written in MATLAB. As described in Section 3.1, HorsePower uses execution plans generated by MonetDB, which contain hooks into UDFs with their names, and input and output parameters, but otherwise treat the UDFs as a black-box. HorsePower translates such a plan to HorseIR, where the invocation of the UDF is translated to a method invocation in HorseIR. Next, we generate a separate piece of HorseIR code by translating the UDF written in MATLAB using the MATLAB-to-HorseIR translator introduced in Section 3.2. Finally, the two segments of code for SQL and UDFs are integrated into a single HorseIR program.
HorsePower supports both scalar and table UDFs. In order to make the MATLAB functions conform to the semantic form expected of these types of UDFs, we enforce some restrictions on the MATLAB functions. For instance, we require a function to have one return statement with either a single vector (for scalar UDFs) or a table-like data structure (for table UDFs).
Figure 3 shows the HorseIR program for the example query in Figure 2 with a scalar UDF. The HorseIR code consists of a module with two methods: the SQL component is translated to the main method, and the UDF is translated to the method `RevChangeSclr` which takes two arrays of type float as input and returns the resulting product. This method is called by the main method, which otherwise is the same as we have already seen in Figure 1.
3.4 Holistic HorsePower Optimizations
HorsePower performs compiler-based optimizations when translating a HorseIR program to target C code. We discussed in Sec. 2.1 how automatic loop fusion and pattern-based fusion, as introduced in [7], lead to efficient parallel C code.
However, such optimizations require all statements to be in one method. But when SQL statements have embedded UDFs, the HorseIR code has at least two methods, with a main method calling the method representing the UDF as shown in our example in Figure 3.
If we were to optimize both parts independently using loop fusion and pattern-based fusion, the overall result would be sub-optimal. In fact, if we look at the dependence graph for this program on the left side of Figure 4 (with S0 to S4 depicting the statements in the code), we can see that the optimization opportunities are now separated into three snippets: before, after, and inside the method called in statement S3. The snippets have to be optimized individually because the content of statement S3 is invisible to the rest of the code. Thus, statements S1 and S2 of the main method need to be evaluated, and the intermediate results t4 and t5 cannot be eliminated, as the method `RevChangeSclr` requires their actual values to be passed as parameters. Furthermore, the return value of the method needs to be materialized to be assigned to t8, which is then the input of statement S4. This means the potential scope for fusion is significantly reduced, leading to more intermediate results.
In order to enable a more holistic cross-optimization, we use the concept of inlining. This involves replacing the method calls within the main method with the corresponding code segments that constitute the method that is being called. For our example program in Figure 3 this means the code of `RevChangeSclr` can be inlined into the main method with the generated HorseIR being almost the same as the one in Figure 1 except for possibly different variable names. As a result, a dependence graph can be built across the main method, as illustrated on the right side of Figure 4, allowing for loop fusion across all statements and generating a single loop of all tasks as outlined in Section 2.1, and avoiding the
materialization of any intermediate results introduced by UDF invocations.
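As a rough Python analogue of what inlining plus fusion buys (purely illustrative; HorsePower performs this transformation on HorseIR statements, not Python):

```python
def rev_change_sclr(price, discount):
    # UDF as a separate, opaque function.
    return [p * d for p, d in zip(price, discount)]

def query_before_inlining(t1, t2):
    # The call is a black box, so the compressed inputs t4 and t5
    # must be materialized as full intermediate lists.
    t3 = [d >= 0.05 for d in t2]
    t4 = [p for p, keep in zip(t1, t3) if keep]
    t5 = [d for d, keep in zip(t2, t3) if keep]
    t6 = rev_change_sclr(t4, t5)          # return value materialized too
    return sum(t6)

def query_after_inlining(t1, t2):
    # Inlining exposes everything to fusion: one loop, no intermediates.
    total = 0.0
    for p, d in zip(t1, t2):
        if d >= 0.05:
            total += p * d
    return total
```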
In some scenarios method inlining offers additional optimization opportunities, such as the elimination of unused computations. For example, consider a scenario where a table UDF computes and returns two columns as part of its invocation, but the enclosing SQL query itself uses only one of those two columns. HorsePower will employ the backward slicing technique [15] to avoid the computation of the unused column in the table UDF.
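A small Python sketch of that unused-column scenario (the UDF and column names here are hypothetical):

```python
def table_udf(price, discount):
    # Hypothetical table UDF returning two computed columns.
    change = [p * d for p, d in zip(price, discount)]
    ratio = [d / p for p, d in zip(price, discount)]  # expensive, possibly unused
    return {"change": change, "ratio": ratio}

# If the enclosing query selects only "change", backward slicing over the
# inlined code removes every statement that feeds only "ratio":
def table_udf_sliced(price, discount):
    return {"change": [p * d for p, d in zip(price, discount)]}
```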
While performing inlining, to respect the pass-by-value convention for parameter passing, a copy of the object used as the parameter will be generated if the parameter is found to be modified inside the original callee method. This ensures that inlining does not result in any unintended data modifications to the objects inside the method that was making the call. Further, if inlining results in any variable name conflicts, they are resolved by assigning new but unique variable names. Finally, an inlined method is removed if it can be inlined in all the code locations where it is called.
4 EVALUATION
In this section we present the evaluation results of our framework for pure MATLAB programs, and for SQL queries with analytical UDFs written in MATLAB. For the latter, we compare it with MonetDB.
The experiments are conducted on a server equipped with 4 Intel Xeon E7-4850 2.00GHz (total 40 cores with 80 threads, and 24 MB of shared L3 CPU cache) and 128 GB RAM running Ubuntu 18.04.4 LTS. We use GCC v8.1.0 to compile HorseIR source code with optimization options -O3 and -march=native; MonetDB version v11.35.9 (Nov2019-SP1) and NumPy v1.13.3 along with Python v2.7.17 interpreter for embedded Python support in MonetDB; and MATLAB version R2019a.
The response time is measured only for the core computation, and excludes the overhead for parsing SQL, plan generation, compilation, and serialization for sending the results to the client. We only consider execution time once data resides in the main memory. We run each test 15 times but only measure the average execution time over the last 10 times. Scripts and data used in our experiments can be found in our GitHub repository.
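A minimal sketch of this measurement protocol (an illustrative helper, not the authors' harness):

```python
import time

def measure(run, warmup=5, reps=10):
    # 15 runs total: discard the first 5 as warm-up, average the last 10.
    for _ in range(warmup):
        run()
    elapsed = []
    for _ in range(reps):
        start = time.perf_counter()
        run()
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)
```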
4.1 MATLAB Benchmarks
We first evaluate MATLAB programs in order to understand the performance of using HorsePower for executing non-SQL based data analytics, and use the following benchmarks: the Black-Scholes algorithm from the PARSEC benchmark suite v3.0 [4] having two UDFs BlackScholes and CNDF, and the Morgan algorithm [8] from a finance application having a main function morgan and another function msum. Both contain several element-wise functions and are fully vectorizable.
In our experiments, we compare the following:
- We execute the original MATLAB program using the MATLAB interpreter with default settings.
- We compile the HorseIR program generated from the MATLAB code into C code without any of the optimizations that we mentioned in Section 3.4. We refer to this version as HorsePower-Naive. As such, it is likely to produce a similar amount of intermediate results as the MATLAB interpreter.
- We compile the HorseIR code into C code with all optimizations enabled, referred to as HorsePower-Opt.

| Size | HorsePower-Naive speedup | HorsePower-Opt speedup |
|------|--------------------------|------------------------|
| 4M   | 1.06x                    | 4.49x                  |
| 2M   | 1.08x                    | 3.4x                   |
| 1M   | 1.26x                    | 2.49x                  |
| 5M   | 1.06x                    | 3.4x                   |
| 8M   | 1.07x                    | 3.3x                   |

Table 1: Speedup of HorsePower over MATLAB in execution time using Black-Scholes
For Morgan (no table shown due to space limitations) we run experiments up to 8 million rows as well. HorsePower-Naive also provides similar performance to MATLAB with smaller data sizes, but already has a speedup of 2 with 8 million rows. We believe the reason is our efficient parallel implementation of built-in functions, such as the cumulative sum. Again, the optimized version is significantly faster, with a speedup of 7 with 5 million rows.
In summary, HorsePower can execute data analytics tasks in an efficient manner due to its data-centric IR and compiler optimization techniques.
4.2 SQL and UDF Benchmarks: TPC-H
This is the first of two sections to evaluate the performance of HorsePower in executing SQL statements with embedded UDFs, and comparing it with MonetDB.
Froid [14] proposed a whole range of queries derived from the TPC-H benchmark in which parts of the SELECT or WHERE clauses, e.g., checks of certain conditions, are outsourced into a UDF. In all cases, these are scalar UDFs. For instance, they propose a variation of query q6 of the TPC-H benchmark, which is very similar to our example query of Figure 1, simply containing more conditions.
For MonetDB, we rewrote the queries to use Python-based UDFs; for HorsePower, the UDFs are written in MATLAB. The structure of the programs is very similar for both languages. Some of the proposed UDFs have embedded SQL statements, which are currently not supported by the McLab framework that we use. Thus, we excluded those unsupported queries and present results only for queries q1, q6, q12, q14, and q19.
Table 2 shows the execution times of these queries with a different number of threads using HorsePower and MonetDB. When first looking only at MonetDB we can see that execution times are relatively low for some queries and improve with an increasing number of threads considerably (q1 and q14), but are high for others with little benefit of parallelization (q6, q12, q19). The reason is that in these queries, the UDF is in the WHERE clause and MonetDB
has to perform costly data conversion when sending the entire database columns as arrays to the Python interpreter in order to execute the UDF. MonetDB is able to use zero-copy transfer for data types where the database system uses the same main-memory representation as Python. But for strings, it needs to convert the data to a different format, as the database-internal and the Python formats are incompatible. This data conversion does not seem to be parallelized across multiple threads, making it the predominant factor of the execution time. In q1 and q14, the UDFs are in the SELECT clause (where data sizes are smaller, as they were already reduced by the selection), and do not require any string conversions.
HorsePower has overall much better performance for all queries, being under 1 second for all queries except q1, and can always improve execution times by increasing the number of threads. As no data conversion is necessary it is orders of magnitude faster than MonetDB for queries q6, q12, and q19. We observe the advantage of having a unified execution environment that has translated both the UDF part and the SQL part to a single HorseIR program with its own data structures. But we also observe significant improvements for q1 and q14. These are due to the unified optimization across the HorseIR code generated from SQL and UDF.
4.3 SQL and UDF Benchmarks: MATLAB
In this second experiment, we embed the Black-Scholes algorithm in the form of UDFs into SQL queries.

We again have a HorsePower version, with the Black-Scholes UDF implemented in MATLAB, and a MonetDB version, with the UDF implemented as a Python UDF using the NumPy library and the same array-programming style as the MATLAB UDF.
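For reference, a sketch of the vectorized array style such a Black-Scholes UDF follows, written here with NumPy and SciPy's normal CDF; the parameter names are illustrative and this is not the paper's exact UDF code:

```python
import numpy as np
from scipy.stats import norm

def black_scholes(spot, strike, rate, vol, t, is_call):
    # Element-wise over whole columns, matching the array-programming style.
    d1 = (np.log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * np.sqrt(t))
    d2 = d1 - vol * np.sqrt(t)
    call = spot * norm.cdf(d1) - strike * np.exp(-rate * t) * norm.cdf(d2)
    put = strike * np.exp(-rate * t) * norm.cdf(-d2) - spot * norm.cdf(-d1)
    return np.where(is_call, call, put)
```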
In order to understand the implication of having the UDFs written in different programming languages, we first compared the execution time of Black-Scholes written in Python and using HorseIR (both naive and optimized). Execution is in one thread because NumPy does not support multi-threading. Similar to what we have seen in our analysis of MATLAB, a naive usage of HorseIR provides performance similar to the Python/NumPy implementation, while the optimized version is significantly faster.
In order to have a broad set of tests and comparisons, we first integrated these two UDF versions into a straightforward base query. From there we created three significant variations of this base query that have different columns in the SELECT and WHERE clauses. Furthermore, the selectivity of the WHERE clause can be high (returning few records) or low (having many qualifying records).
Table 3 shows the results of all the variations for MonetDB and HorsePower with 1 thread (T1) and 64 threads (T64). The scalar-UDF forms of the queries are:

Base query (bs0_base):
SELECT spotPrice, optionType, bScholesUDF(spotPrice, ..., optionType) AS optionPrice
FROM blackScholesData;

Variation 1:
SELECT spotPrice, optionType, bScholesUDF(spotPrice, ..., optionType) AS optionPrice
FROM blackScholesData WHERE spotPrice > 100;

Variation 2:
SELECT spotPrice, optionType, bScholesUDF(spotPrice, ..., optionType) AS optionPrice
FROM blackScholesData WHERE spotPrice > 100 AND optionType = 'call';

**Base query.** The base query bs0_base selects all the data from the database table, passes it to the UDF, and returns all the data produced by the UDF.
We first observe that multi-threading has little impact on MonetDB's performance, while HorsePower benefits a lot. As Python is not multi-threaded, the Black-Scholes UDF in MonetDB always runs in a single thread even if 64 threads are enabled, while HorsePower creates optimized parallel code also for the Black-Scholes part. But HorsePower is already significantly better with a single thread. In fact, HorsePower's execution time for the entire query is nearly the same as executing the Black-Scholes algorithm alone, while MonetDB takes nearly double the time (> 900 ms) to execute the entire query compared to the time used by the Python interpreter to execute Black-Scholes alone (around 500 ms). The reason for this performance penalty in MonetDB must be the communication between its SQL engine and the Python UDF interpreter.
**Variation 1.** The first variation, bs1_*, applies a predicate condition on spotPrice, a column which is part of the input database table. The objective of this test case is to analyze whether the systems can intelligently avoid performing the UDF computation on records that will not be in the result set. As can be seen, for one thread, HorsePower's speedup over MonetDB is at least 3.5x for both scalar and table UDFs, and for 64 threads at least 50x. MonetDB follows the traditional database optimization technique of applying high-selectivity operations first, discarding the records that do not qualify before processing the UDFs. As HorsePower relies on MonetDB for database execution plans, it is similarly impacted by the plans generated by MonetDB for table UDF based queries. This results in HorsePower's table UDF based queries costing more than their scalar versions. However, unlike MonetDB, HorsePower benefits from being able to avoid data copies and conversions as well as from generating parallelized code for UDFs, thus expanding this performance gap when the number of threads increases.
**Variation 2.** In the next variation, bs2_*, the SQL does not include the computed column optionPrice in the final result. A smart system should be able to analyze the semantics of the request and avoid processing the UDF altogether. MonetDB is able to do this optimization when the SQL query uses the scalar UDF, avoiding the computation of the `optionPrice` column that is not included in the final result. Similarly, HorsePower, being an integrated system, can avoid the computation of `optionPrice` by using a backward slice. As both avoid executing the UDF, HorsePower has only moderate speedup over MonetDB due to other optimizations. However, with a table UDF, MonetDB is unable to avoid this computation, as there is no way for it to pass this optimization information to the UDF interpreter. On the other hand, HorsePower uses method inlining and backward slicing to remove this computation, offering a huge advantage.
**Variation 3.** The last variation, bs3_high, applies a predicate condition on `optionPrice`. As this is a column computed by the UDFs, both systems have to process the UDFs across all input records before discarding records that do not qualify, providing limited opportunities for optimization. As can be seen, HorsePower has speedups of around 3.5x for both scalar and table UDFs with one thread, and between around 50x and 80x for 64 threads. HorsePower has better performance than MonetDB simply because it avoids the data movement between the SQL engine and the UDF execution environment. With more threads, HorsePower's speedup is even better, as the data movement in MonetDB is not parallelized and takes most of the time in the whole execution pipeline.

In summary, HorsePower avoids the problems of a black-box integration of programming language execution environments as used in current DBS. As such, it avoids expensive data conversions, can optimize in a holistic manner, and provides full support for parallelization, leading to significant speedups.
5 RELATED WORK AND CONCLUSIONS
Intermediate representations and compiler techniques have been applied by others to improve the performance of database queries. However, there is little research on extending these systems to support UDFs within database queries.
Froid [14] shows a holistic optimization solution that transforms simple UDFs into relational code. Thus, the existing query optimizer can be utilized to optimize the execution plan. However, this approach is limited, as not all UDFs are translatable to relational operators.
Weld [12] presents its IR (WeldIR) to support the code generation from various source languages. WeldIR is able to handle database queries and call UDFs written in C code. However, in contrast to HorsePower that automatically optimizes across different source languages, such capabilities have not been implemented by Weld.
Lara [10] is a domain-specific language tailored for relational algebra and UDFs. Its code is first compiled to an IR which is able to inspect UDFs by collecting the necessary information from them. Thus, Lara can optimize such transparent UDFs together with its IR code. This differs from HorsePower, which compiles database queries and UDFs to one common IR with holistic optimizations enabled.
In conclusion, HorsePower differs from previous work in that it is a compiler-based approach exploiting array-based optimizations to support database queries, MATLAB programs and database queries with analytical UDFs in a holistic framework. Given the very promising evaluation results, future work will integrate different programming languages, and enhance our relational operators.
Analysis of Algorithms & Big-O
CS16: Introduction to Algorithms & Data Structures
Spring 2018
Outline
- Running time
- Big-$\mathcal{O}$
- Big-$\Omega$ and Big-$\Theta$
- Analyzing Seamcarve
- Dynamic programming
- Fibonacci sequence
Algorithms
› What can we analyze about an algorithm?
› Make a list
What is an “Efficient” Algorithm
- Possible efficiency measures
- Total amount of time on a stopwatch?
- Low memory usage?
- Low power consumption?
- Network usage?
- The analysis of algorithms helps us quantify this
Q: How can we measure running time?
Measuring Running Time
- Experimentally?
- Implement algorithm
- Run algorithm on inputs of different size
- Measure the running time
- Plot the results
Great! We’re done, right?
Measuring Running Time
- What if you can’t implement algorithm?
- Which inputs exactly should you choose?
- Which hardware should you run on?
- Which operating system?
- Which compiler?
- Which compiler flags?
- …
Measuring Running Time
- We need a measure that
- independent of hardware
- independent of OS
- independent of compiler
- ...
- It should depend only on
- “intrinsic properties of the algorithm”
Q: What is the *intrinsic* running time of an algorithm?
Knuth’s Observation
- Running time can be determined using
- Time/cost of each operation
- Frequency of each operation
Example:
- function that sums 100 integers
\[
\text{time}(\text{sum}) = \text{time}(\text{read}) \cdot 100 + \text{time}(\text{add}) \cdot 99
\]
Key insight!
- cost of operations depend on hardware, OS, compiler,…
- frequency of operations depend on algorithm
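Plugging hypothetical per-operation costs into this formula (the costs below are made-up, in arbitrary time units):

```python
# Hypothetical per-operation costs, in arbitrary time units.
time_read, time_add = 2, 1
time_sum = time_read * 100 + time_add * 99  # 299 units to sum 100 integers
print(time_sum)
```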
Q: What operations exactly?
Elementary Operations
- Algorithmic running “time” is measured in elementary operations
- Math: +, -, *, /, max, min, log, sin, cos, abs, ...
- Comparisons: ==, >, <, ≤, ≥
- Variable assignment
- Variable increment or decrement
- Array allocation
- Creating a new object
- Function calls and value returns
- Careful: an object's constructor & function calls may have elementary ops too!
- In practice all these operations take different amounts of time
- in algorithm analysis we assume each operation takes 1 unit of time
Towards an **Algorithmic** Running Time
- **Problem #1**
- running time varies with hardware, OS etc…
- solution #1: focus on number of operations
- **Problem #2**
- number of operations varies with input size
- solution #2: focus on number of operations for *large* inputs
- **Problem #3**
- number of operations varies with input
- solution #3: focus on number of operations on *worst-case* inputs
Towards an **Algorithmic** Running Time
- Why worst-case inputs?
- Easier to analyze
- Gives useful information
- what if a plane autopilot program runs slower than predicted due to an unexpected input?
- Why large inputs?
- Easier to analyze
- We usually care what happens on large data
- Allows us to ignore odd behaviors that happen on small data
(Worst-case) Analysis of Algorithms
- Based only on high-level algorithm descriptions
- not on implementation
- Takes into account all possible inputs
- by considering the worst-case inputs
- Quantifies running time independently of
- hardware, OS, compiler etc.
- Algorithm’s running time vs program’s running time
**Constant Running Time**
- How many operations are executed?
- What if array has **100** elements?
- What if array has **100,000** elements?
- **Key observation:** running time does not depend on array size!
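A minimal sketch of such a constant-time function (assuming, as the later running-time summaries suggest, a `first` function that returns the first element):

```python
def first(array):
    # Input: a non-empty array
    # Output: its first element
    return array[0]   # 1 index + 1 return = 2 ops, regardless of array size
```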
```python
def argmax(array):
    # Input: an array
    # Output: the index of the maximum value
    index = 0                            # 1 op
    for i in range(1, len(array)):       # 1 op per iteration
        if array[i] > array[index]:      # 3 ops per iteration
            index = i                    # 1 op per iteration (sometimes)
    return index                         # 1 op
```
Activity #1
Linear Running Time
- How many operations are executed?
- What if array has 10 elements?
- What if array has 100,000 elements?
**key observation:** running time depends on array size
- $5n+2$ operations where $n=\text{size}(\text{array})$
```python
def possible_products(array):
    # Input: an array
    # Output: a list of all possible products
    # between any two elements in the list
    products = []                                 # 1 op
    for i in range(len(array)):                   # 1 op per iteration
        for j in range(len(array)):               # 1 op per inner iteration
            products.append(array[i] * array[j])  # 4 ops per inner iteration
    return products
```
Activity #2
1 min
Quadratic Running Time
- How many operations are executed?
- What if array has 10 elements?
- What if array has 100,000 elements?
**key observation:** running time depends on the *square* of array size
- $5n^2+n+2$ operations where $n=size(array)$
Running Times
**Constant**
independent of input size
**Linear**
depends on input size
**Quadratic**
depends on square of input size
Plotting Running Times

[Plot of $T(n)$ for $5n^2 + n + 2$ versus $5n + 2$: for small $n$ the curves are close (we don't care what happens there); for large $n$ the quadratic term dominates (we only care what happens there).]
Big-O Notation
**Definition (Big-$\mathcal{O}$):** $f(n)$ is $O(g(n))$ if there exist positive constants $c$ and $n_0$ such that:
$$f(n) \leq c \cdot g(n)$$
for all $n \geq n_0$
- Example: $2n+10$ is $O(n)$
- for example, choose $c=3$ and $n_0=10$
- Why? because
- $2n+10 \leq 3 \cdot n$ when $n \geq 10$
- for example, $2 \cdot 10+10 \leq 3 \cdot 10$
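A quick empirical sanity check of those constants (an illustration, not a proof):

```python
# With c = 3 and n0 = 10, f(n) = 2n + 10 stays below c * g(n) = 3n
# for every tested n >= n0.
assert all(2 * n + 10 <= 3 * n for n in range(10, 100_000))
```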
Big-O Notation
- Another example
- \( n^2 \) is not \( O(n) \)
- Why? To prove that \( n^2 \) is \( O(n) \) we would have to show that there exist constants \( c \) and \( n_0 \) such that
- \( n^2 \leq c \cdot n \iff n \leq c \text{ for all } n \geq n_0 \)
- This is not possible!
- for example, if we set \( c = 10 \), then \( n \leq 10 \) fails for every \( n > 10 \)
Big-O & Growth Rate
Activity #3
Big-O & Growth Rate
- Big-O gives upper bound on
- growth rate of function when input is large
- An algorithm is $\mathcal{O}(g(n))$ if its growth rate is
- no more than growth rate of $g(n)$
- Examples
- $n^2$ is not $\mathcal{O}(n)$
- $n$ is $\mathcal{O}(n^2)$
- $n^2$ is $\mathcal{O}(n^3)$
Summary of Big-O Rules
- If $f(n)$ is a polynomial of degree $d$ then
- $f(n)$ is $O(n^d)$
- In other words you can ignore
- lower-order terms
- constant factors
- Use the term with the smallest possible degree
- $2n$ is $O(n^{50})$ but that’s not helpful
- instead it is better to say it is $O(n)$
- **Discard constant factors & use smallest possible degree**
Example Big-O Notation
- Count number of operations as a function of input size
- For example
- **first**: $T(n) = 2$
- **argmax**: $T(n) = 5n + 2$
- **possible_products**: $T(n) = 5n^2 + n + 2$
- Can replace constants with $c$ b/c they are irrelevant as $n$ grows
- **first**: $T(n) = c$
- **argmax**: $T(n) = c_0n + c_1$
- **possible_products**: $T(n) = c_0n^2 + n + c_1$
Example Big-O Notation
- **Discard constant factors & use smallest possible degree**
- For example
- **first**: $T(n) = c$ is $O(1)$
- **argmax**: $T(n) = c_0n + c_1$ is $O(n)$
- **possible_products**: $T(n) = c_0n^2 + n + c_1$ is $O(n^2)$
- The convention for $T(n) = c$ is to write $O(1)$
So $5n^2 + n + 2$ is $O(n^2)$. But is an upper bound all we can say?
Big-Omega
**Definition (Big-$\mathcal{O}$):** $f(n)$ is $O(g(n))$ if there exist positive constants $c$ and $n_0$ such that:
$$f(n) \leq c \cdot g(n)$$
for all $n \geq n_0$
- $f(n)$'s growth rate is upper bounded by $g(n)$'s growth rate
- But what if we need to express a lower bound?
- we use Big-$\Omega$ notation!
Big-Omega
**Definition (Big-Ω):** \( f(n) \) is \( \Omega(g(n)) \) if there exist positive constants \( c \) and \( n_0 \) such that:
\[
f(n) \geq c \cdot g(n)
\]
for all \( n \geq n_0 \)
- \( f(n) \)'s growth rate is lower bounded by \( g(n) \)'s growth rate
- What about an upper **and** a lower bound?
- We use Big-\( \Theta \) notation!
Big-Theta
**Definition (Big-\( \Theta \)):** \( f(n) \) is \( \Theta(g(n)) \) if it is \( O(g(n)) \) and \( \Omega(g(n)) \).
- \( f(n) \)'s growth rate is the same as \( g(n) \)'s
More Examples
Activity #4
2 min
| $f(n)$ | Big-$\mathcal{O}$ | Big-$\Omega$ | Big-$\Theta$ |
|---|---|---|---|
| $an + b$ | ? | ? | $\Theta(n)$ |
| $an^2 + bn + c$ | ? | ? | $\Theta(n^2)$ |
| $a$ | ? | ? | $\Theta(1)$ |
| $3^n + an^{40}$ | ? | ? | $\Theta(3^n)$ |
| $an + b \log n$ | ? | ? | $\Theta(n)$ |
Running Times
$O(1)$
- independent of input size
$O(n)$
- depends on input size
$O(n^2)$
- depends on square of input size
$O(n^3)$
- depends on cube of input size
$O(n^{70})$
- depends on 70th power of input size
$O(2^n)$
- exponential in input size
Finding Low Importance Seams
- How many seams in a $c \times r$ image?
- At each row the seam can go Left, Right or Down
- It chooses 1 out of 3 dirs at each row and there are $r$ rows
- So $3^r$ possible seams from some starting pixel
- Since there are $c$ starting pixels total # of seams is $c \times 3^r$
- For square image with $n$ total pixels
- there are $\sqrt{n} \times 3^{\sqrt{n}}$ possible seams
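To see how fast this blows up, a tiny sketch of the seam count:

```python
def num_seams(c, r):
    # c starting pixels, each choosing one of 3 directions at each of r rows.
    return c * 3 ** r

print(num_seams(4, 4))      # 324 candidate seams even in a 4x4 image
print(num_seams(100, 100))  # astronomically many for a 100x100 image
```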
Finding Low Importance Seams
- Brute force algorithm:
- **Try every possible** seam & find least important one
- What is running time of brute force algorithm?
- Suppose the image has $n = c \times r$ pixels
- $O(\sqrt{n} \times 3^{\sqrt{n}})$
Seamcarve
- What is the runtime of Seamcarve (from last class)?
- The algorithm
- Iterate over all pixels from bottom to top
- Populate `costs` and `dirs` arrays
- Create seam by choosing minimum value in top row and tracing downward
- How many operations per pixel?
- A constant number of operations per pixel (4)
- Constant number of operations per pixel means algorithm is linear
- $O(n)$ where $n$ is number of pixels
- Also could have counted # of nested loops in pseudocode…
Seamcarve
- How can we possibly go from
- Exponential running time with brute force
- Linear running time with Seamcarve?
- What’s the secret to this magic trick?
Dynamic Programming!
Dynamic Programming
- Idea
- re-use computation you’ve already done or in other words
- avoid re-computing what you’ve already computed
- Seamcarve observation
- many seams cross paths
- so we don’t need to re-compute entire sums of importance if we’ve already computed terms before
- that’s the purpose of the costs matrix!
- Dynamic programming
- “store information you’ve computed to avoid re-computing it”
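The same idea can be written top-down as memoization; a minimal sketch using Python's functools (the slides' bottom-up version appears below):

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # store each fib(i) the first time it is computed
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```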
Fibonacci (Recursive)
- Defined by the recursive relation
- \( F_0 = 0, \ F_1 = 1 \)
- \( F_n = F_{n-1} + F_{n-2} \)
- We can implement this with a recursive function
```python
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)
```
Visualization of Fibonacci (Recursive)
- Each node of tree is a recursive call of Fib( )
- Each level of the tree is a level of the recursion
Fibonacci (Recursive)
Big-O runtime of the recursive `fib` function?
Activity #5
Fibonacci (Recursive)
- How many times does `fib` get called for `fib(4)`?
- 8 recursive calls (9 calls in total, counting the initial one)
- At each level the tree makes roughly twice as many recursive calls as the last
- For `fib(n)` it makes approximately $2^n$ recursive calls
- Algorithm is $O(2^n)$
Fibonacci: Dynamic Programming
- How many times does `fib(1)` get computed?
- Instead of recomputing Fibonacci numbers over and over again
- Compute them \textbf{once} and store them for later
```python
def dynamicFib(n):
    fibs = [0] * (n + 1)   # table of Fibonacci numbers 0..n
    if n >= 1:
        fibs[1] = 1
    for i in range(2, n + 1):
        fibs[i] = fibs[i - 1] + fibs[i - 2]
    return fibs[n]

# dynamicFib(10) -> 55
```
Fibonacci: Dynamic Programming
- What’s the runtime of `dynamicFib()`?
- Calculates Fibonacci numbers from 0 to n
- Performs $O(1)$ ops for each one
- Runtime is clearly $O(n)$
- We again reduced runtime of algorithm
- From exponential to linear
- with dynamic programming!
Readings
- Dasgupta et al. section 0.2 (pp. 12-15)
- Fibonacci example (without mentioning dynamic programming)
- Dasgupta et al. section 0.3 (pp. 15-17)
- Describes Big-O notation
- Dasgupta et al. chapter 6 (pp. 169-199)
- Goes into Dynamic Programming
- This chapter builds significantly on earlier ones so might be challenging now (but we’ll see much of it this semester)
Announcements
- Homework 1 due this Friday at 5pm!
- Thursday is in-class Python lab!
- If you are able to work on your own laptop
- Go to Salomon DECI (here!)
- Otherwise, go to the SunLab.
- Make sure you can log into your CS account before attending lab
- See SunLab consultant if you have any account issues!
- Sections started yesterday
- if you are not signed up, you could be in trouble!
References
- Slide #10
- the portrait on the left is a drawing; really!
- Slide #25
- Usain Bolt (constant): 8-time Olympic gold medalist and greatest sprinter of all time
- Sally Pearson (linear): 2012 Olympic world champion in 100m hurdles, 2011 and 2017 World Champion
- Wilson Kipsang (quadratic): former marathon world-record holder, Olympic marathon bronze medalist
- Eliud Kipchoge (quadratic): 2016 Olympic marathon gold medalist, greatest marathoner of the modern era
Assessing the Adherence of an Industrial Autonomous Driving Framework to ISO 26262 Software Guidelines
Hamid Tabani, Leonidas Kosmidis, Jaume Abella, Francisco J. Cazorla
Barcelona Supercomputing Center
ABSTRACT
The complexity and size of Autonomous Driving (AD) software are considerably higher than those of software implementing other (standard) functionalities in the car. To make things worse, a big fraction of AD software is not specifically designed for the automotive (or any other critical) domain, but for the mainstream market. This brings uncertainty as to what extent AD software adheres to the guidelines in safety standards. In this paper, we present our experience in applying ISO 26262 – the applicable functional safety standard for road vehicles – software safety guidelines to industrial AD software, in particular Apollo, a heterogeneous Autonomous Driving framework used extensively in industry. We provide quantitative and qualitative metrics of compliance for many ISO 26262 recommendations on software design, implementation, and testing.
KEYWORDS
Critical Systems, Autonomous Driving, ISO 26262
1 INTRODUCTION
The potential socio-economic benefits of Autonomous Driving (AD) have motivated the automotive industry to assess the feasibility of developing autonomous cars. Intuitively, Advanced Driver-Assistance Systems (ADAS) such as lane keeping can serve the automotive industry as a 'test case' to master the complexity in the specification, design, implementation, verification, and validation of advanced software-controlled functionalities. However, AD brings its own set of challenges. In particular, safety argumentation for hardware and software differs between AD and ADAS. In ADAS, it builds on the ability of the human driver to take the system to a safe state in the event of a software/hardware failure. That is, the human acts as a backup safety mechanism and is responsible for taking the right corrective action. Instead, for AD (especially full AD, or Level 5), there can be no driver. This increases the complexity
of AD software that must handle safety under driving conditions. To that end, the software implements a complex control building on artificial intelligence (AI) algorithms and manages big amounts of sensor data. This translates into a huge increase in computing performance requirements, 100x from 2016 to 2024 according to ARM prospects [6], and the use of advanced performance-improving processor designs, e.g., GPUs (Graphics Processing Units). Also, software (and hardware) must be designed and validated for a high Automotive Safety Integrity Level (ASIL)\(^1\). In particular, for high levels of autonomous driving in which AD systems will control safety-related driving aspects, AD systems will reach ASIL-D and must be designed to remain (fail) operational regardless of the presence of a fault. This requires robust mitigation techniques to decrease risks and/or meet specific failure probability bounds.
For road vehicles, ISO 26262 guides the production of software, listing for each process of the life-cycle i) the safety requirements, ii) the activities required to meet those requirements, and iii) the evidence required to demonstrate that the requirements are fulfilled, with traceability as a fundamental element linking high-level requirements, low-level requirements, and analyses. In short, if the designed analyses and tests are passed (e.g., 100% code coverage is achieved), an assessment of the quality of the implementation and its fitness for purpose can be made. For AD software frameworks, the challenge lies in the fact that, so far, they have been designed with little focus on safety as dictated by safety standards. First of all, during the initial phases of the ISO 26262 safety life-cycle, it is required to define the AD safety goals, with some early efforts already trying to formalize them [11]. Then, safety requirements are allocated, via a technical safety concept, to software and other architectural components so that safety goals are not violated. However, the extent to which AD software frameworks adhere to ISO 26262 Part 6, which specifies the product development at the software level, has not been assessed.
In this paper, with focus on Apollo [5], an AD framework currently running on a variety of commercial vehicles, we assess its adherence to ISO 26262 in terms of: i) software architectural design; ii) software modeling and coding guidelines; iii) software unit design and implementation; and iv) code coverage, the most common form of structural coverage analysis. To our knowledge, this is the first analysis of a complete industrial AD software stack in the literature regarding adherence to ISO 26262 guidelines for software. Hence, our analysis assesses the gaps between the current AD software implementation and the requirements for its certification.
In our analysis, we cover some issues hampering AD heterogeneous software adherence to ISO 26262 that can be solved with
limited software engineering effort, and others that are much deeper and require research innovations to be successfully addressed. As an illustrative example of the latter, we show that the way the GPU part (i.e., the GPU code) of AD applications is programmed is at odds with ISO 26262 requirements. This occurs because programming languages for the mainstream market, e.g., CUDA, make intrinsic use of features not recommended by ISO 26262 (e.g., pointers and dynamic memory). Overall, we provide several key insights and research directions to address the main challenges brought by AD software in terms of safety assurance.

\(^1\)The automotive functional safety standard ISO 26262 defines 4 ASILs, varying from ASIL-A (lowest criticality) to ASIL-D (highest criticality). Besides, the Quality Management (QM) category covers those components that cannot cause safety risks upon a failure.
The rest of the paper is organized as follows: Section 2 introduces Apollo and its structure. Section 3 describes the main contribution of our work, the adherence of Apollo to ISO 26262 software guidelines, with explicit references to the (scarce) related work. Finally, Section 4 presents the main conclusions of this work.
2 APOLLO INDUSTRIAL AD FRAMEWORK
Autonomous cars, also known as driver-less cars and self-driving cars, are vehicles that can guide themselves toward a specified destination without human intervention. AD software combines several input sensors, such as video cameras, short-range and long-range radars, and laser sensors, to sense the surrounding area and track the moving objects around the car.
AD systems implement very precise navigation techniques to locate the position of the vehicle. Given the position of the vehicle and enough information about the surrounding area, the AD system plans future paths using the specified routing and generates control commands for the vehicle to follow those paths. These are the main stages of Apollo and of other state-of-the-art AD frameworks [2, 4, 5]. All of them have similar design and implementation characteristics, so the conclusions we derive for Apollo in this work hold to a large extent for all AD frameworks.
Over 110 industrial partners, encompassing top-tier AI companies and car manufacturers, already contribute to Apollo as a large industrial project [5]. Apollo is adopted by a large number of commercial automotive companies and is already deployed on several prototype vehicles, including autonomous trucks. This is the primary reason for selecting Apollo for our study.
AD software comprises several modules [7], see Figure 1, with the compute-intensive ones already implemented to use GPUs.
- **Object Detection** identifies objects of interest surrounding the car using the LIDAR, camera, and radar sensors.
- **Object Tracking** is responsible for observing moving objects, such as other vehicles, bicycles, and pedestrians, over time. In some AD frameworks, including Apollo, object detection and tracking are part of a larger module called Perception.
**2.1 Object Detection**
Object Detection is a crucial component of AD systems. It involves identifying and localizing objects in the surrounding environment. This process is essential for the AD system to make informed decisions about its actions. Object Detection in Apollo employs advanced computer vision techniques, including deep learning algorithms, to identify objects with high accuracy.
**2.2 Object Tracking**
Object Tracking is the process of monitoring the movement of objects over time. This is critical for the AD system to anticipate the actions of other vehicles and pedestrians, ensuring safe navigation. Apollo utilizes sophisticated tracking algorithms to accurately predict the trajectories of detected objects, enabling the system to react appropriately.
**2.3 Localization**
Localization is the process of determining the position of the vehicle in its environment. It is fundamental for the AD system to navigate accurately. Apollo leverages precise localization techniques, including GPS and multi-sensor fusion, to provide reliable location information.
**2.4 Perception**
Perception is the overarching module that integrates Object Detection and Object Tracking. It is responsible for understanding the environment so that informed decisions can be made about the vehicle's actions. Perception in Apollo is designed to support safe and efficient navigation by analyzing the surrounding scene and feeding the downstream prediction, planning, and control modules.
**2.5 Control**
Control involves generating control commands such as accelerating, braking, and steering. These commands are based on the information provided by the Perception module, ensuring that the vehicle’s actions are coherent with the environment and the desired route. Apollo’s Control module is optimized for real-time execution, ensuring that decisions are made quickly and accurately.
**2.6 Planning**
Planning involves calculating the optimal route to the destination. This process takes into account various factors such as traffic conditions, road restrictions, and the desired time of arrival. Apollo’s Planning module leverages advanced algorithms to generate safe and efficient paths, taking into consideration real-time data and constraints.
**2.7 Prediction**
Prediction is the process of anticipating future movements and events. In AD systems, this is crucial for making proactive decisions. Apollo’s Prediction module utilizes machine learning techniques to predict the actions of other vehicles and pedestrians, enabling the AD system to react appropriately and safely.
**2.8 Decision Making**
Decision Making involves selecting the best course of action based on the perceived environment and predicted future states. Apollo’s Decision Making module utilizes advanced algorithms to process the data from Perception and Planning and make informed decisions, ensuring safe and efficient navigation.
**2.9 Communication**
Communication is essential for exchanging information between the AD system and other vehicles, infrastructure, and the environment. Apollo’s Communication module leverages established protocols and technologies to ensure secure and reliable data exchange, facilitating seamless interaction with other systems.
**2.10 Execution**
Execution involves carrying out the control commands to realize the planned actions. This includes activating actuators such as the brakes and the steering system. Apollo's Execution module is optimized for real-time operation, ensuring that control commands are issued promptly and accurately.
**3 SOFTWARE SAFETY GUIDELINES**
Part 6 of ISO 26262 specifies the requirements for software development. ISO 26262 covers the software's (a) safety requirements specification; (b) architectural design; (c) unit design and implementation; (d) unit testing; (e) integration and testing; and (f) safety requirements verification. In this work, we focus on a subset of them. In particular, we cover different aspects of (b), (c), (d), and (e).
**3.1 Software Modeling and Coding Guidelines**
ISO 26262 specifies coding and modeling guidelines for the product development phase at the software level, as shown in Table 1.
3.1.1 Enforcement of low complexity. We have analyzed the source code of the different software modules using the Lizard cyclomatic complexity analysis tool [3], which measures the number of independent paths in a target source. For instance, a program with a single if statement has a cyclomatic complexity of two, and a program with two nested if conditions results in a complexity of three.
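As a concrete illustration of this counting rule, consider the following function (an illustrative example, not Apollo code); running Lizard on a file containing it would report its cyclomatic complexity:

```cpp
// Illustrative function (not Apollo code). Cyclomatic complexity = 3:
// 1 (the single linear path) + 1 per `if` decision.
int clamp_speed(int speed, int min_speed, int max_speed) {
    if (speed < min_speed) {   // decision 1
        speed = min_speed;
    }
    if (speed > max_speed) {   // decision 2
        speed = max_speed;
    }
    return speed;
}
```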
The crosses and diamonds in Figure 3 respectively show the total number of lines of code (LOC) and the number of functions in each module. Note that the entire Apollo framework is composed of more than 220k LOC, which means that it has a considerable size and this analysis is not trivial. All modules are in the order of tens of thousands of LOC and hundreds or even thousands of functions.
The bars in Figure 3 show the number of functions in different modules of Apollo with a cyclomatic complexity over a given value. While no exact cyclomatic complexity limit fits all domains, in critical systems complexity is especially delicate, since increasing complexity impacts the already costly verification activities. As reference ranges we use: 1-10 (low); 11-20 (moderate); 21-50 (risky); and >50 (unstable). We can see that, in general, the degree of complexity of the code is high. Modules have in the order of dozens of functions with moderate or higher complexity, amounting to 554 for the entire Apollo framework. Such high code complexity challenges the functional verification of the code as well as its timing analysis (e.g., worst-case execution time and response time estimation).
Observation 1. The code complexity of AD software is high, which challenges its functional verification as well as its timing analysis.
Table 1: Modeling/coding guidelines (ISO26262_6 Table 1)
<table>
<thead>
<tr>
<th>Criticality Level</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
</tr>
</thead>
<tbody>
<tr>
<td>1) Enforcement of low complexity</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>2) Use language subsets</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>3) Enforcement of strong typing</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>4) Use unambiguous graphical representation</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>5) Use style guides</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>6) Use naming conventions</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
</tbody>
</table>
3.1.2 Use of language subsets. In this work, we focus on MISRA [8], the guideline for the use of the C language in vehicle-based software, which stipulates 143 rules (MISRA C:2012). Since AD applications are not programmed targeting any critical market in particular, they naturally do not adhere to MISRA C.
Observation 2. The CPU part of AD frameworks is not programmed according to any safety-related guideline. In our view, it is possible with moderate effort to change the code to adhere to a language subset like MISRA C.
More interestingly, we found that for the GPU code there is no standard (language subset) defined to simplify the safety assessment. It follows that there are no tools to assess whether a particular code sample adheres to it.
Observation 3. No guideline or language subset exists for GPU code to facilitate code safety assessment in critical systems.
In this line, we assessed whether some of the features required for CPU code according to MISRA C apply to GPU (CUDA) code. Similar to [14], our analysis shows that CUDA programming heavily builds on the use of pointers and dynamic memory allocation, while ISO 26262 highly recommends not to use dynamic objects or variables. As an illustrative example, Figure 4 shows an excerpt of object detection function scale_bias_gpu that has a typical CUDA program structure. As shown, output and biases pointers are respectively used to access dynamically created arrays of floating-point data containing layer outputs and biases. Note that CUDA memory allocations, cudaMalloc, transfer the data to the device memory. Operations to copy the data back from device memory are not shown in this code excerpt. Hence, CUDA programs build on pointers as an indispensable feature to allow the programmer to allocate and maintain two separate sets of pointers explicitly, one for the host memory and one for the device memory.
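For readers without access to the figure, the following minimal sketch reproduces the typical structure described above; function names follow the text, but sizes, launch geometry, and the kernel body are illustrative assumptions rather than Apollo's actual code:

```cuda
// Minimal sketch in the spirit of the scale_bias_gpu excerpt; details are
// illustrative assumptions, not Apollo's actual implementation.
#include <cuda_runtime.h>

__global__ void scale_bias_kernel(float *output, const float *biases,
                                  int n, int size) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n * size)
        output[idx] *= biases[idx / size];  // scale each layer output by its bias
}

void scale_bias_gpu(const float *h_output, const float *h_biases, int n, int size) {
    float *output, *biases;  // device pointers, kept separate from the host pointers
    // Dynamically allocated device arrays: the ISO 26262-discouraged pattern.
    cudaMalloc((void **)&output, n * size * sizeof(float));
    cudaMalloc((void **)&biases, n * sizeof(float));
    cudaMemcpy(output, h_output, n * size * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(biases, h_biases, n * sizeof(float), cudaMemcpyHostToDevice);
    scale_bias_kernel<<<(n * size + 255) / 256, 256>>>(output, biases, n, size);
    // As in the excerpt, copying the results back from device memory is omitted.
}
```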
Observation 4. CUDA code intrinsically uses features not recommended in ISO 26262 (e.g., use of pointers and dynamic memory).
Unlike Observation 2, which can be handled with relatively small effort, Observations 3 and 4 imply deeper changes. In particular, the use of GPU programming languages that are more certification-friendly imposes significant code modifications. Along this line, alternative initiatives like the Brook Auto GPU programming language [14] help in simplifying certification: in the same way that MISRA C constrains C, Brook Auto defines a certification-friendly subset of the Brook stream-programming language rules, without limiting the expressiveness of the language. For instance, Brook Auto does not expose pointers to the programmer and takes care of those tasks automatically, reducing the possibility of human errors. Furthermore, Brook Auto achieves performance competitive with low-level GPU languages [14].
3.1.3 Enforcement of strong typing. While there is no single definition of "strong typing", C and C++ are generally agreed to be "less strongly typed" than some other languages due to their support for implicit and explicit casting. In Apollo, we have observed more than 1,400 explicit casts which, being explicit rather than implicit, conform to the requirements of the ISO 26262 standard.
Observation 5. Most AD frameworks and modules are programmed in C or C++, requiring the programmer to identify and resolve all type castings in the code.
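The snippet below illustrates the underlying issue (an illustrative example, not Apollo code): C++ silently accepts a narrowing implicit conversion, so guidelines require that any intended conversion be written as an explicit cast:

```cpp
// Illustrative snippet (not Apollo code): implicit vs. explicit conversion.
int main() {
    double distance_m = 1234.56;
    int silently_truncated = distance_m;                   // implicit narrowing, data loss
    int visibly_truncated = static_cast<int>(distance_m);  // explicit cast documents intent
    return (silently_truncated == visibly_truncated) ? 0 : 1;
}
```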
3.1.4 Use of defensive implementation techniques. Defensive implementation improves software and source code in many respects. For instance, the software must behave predictably despite unexpected environment or user inputs. This requires that all functions check the validity of their input parameters before using them. Furthermore, all callers of a function should handle all possible return values of the called function. Our analysis of the source code of Apollo shows that defensive programming techniques are not used.
Observation 6. AD frameworks do not implement defensive programming techniques. However, with limited effort, this feature can be added to the code.
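As a sketch of the kind of checks this requirement implies (a minimal illustration, not Apollo code), a defensive function validates its inputs and forces its callers to handle the failure case explicitly:

```cpp
#include <cmath>
#include <optional>

// Illustrative defensive version of a scaling helper (not Apollo code):
// inputs are validated before use, and failure is an explicit return value.
std::optional<double> scale_measurement(double value, double factor) {
    if (!std::isfinite(value) || !std::isfinite(factor) || factor <= 0.0) {
        return std::nullopt;  // reject invalid inputs instead of propagating them
    }
    return value * factor;
}

bool process(double raw) {
    // The caller handles every possible return value of the callee.
    if (auto scaled = scale_measurement(raw, 2.5)) {
        return *scaled > 0.0;  // use the validated result
    }
    return false;  // explicit handling of the failure case
}
```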
3.1.5 Use of established design principles. This category is very broad, but many design principles are related to properties that are statically checkable. For example, design guidelines may impose restrictions on the use of global variables or on exception handling. Although the code properly uses C++ exception handling in most cases, we observe frequent use of global variables. Their use impacts functional validation and testing, since determining their value ranges becomes more challenging.
Observation 7. AD software uses global variables. This requires changing the code to eliminate them or more complex argumentation to support their use and correct behavior.
3.1.6 Use unambiguous graphical representation. Since all the AD frameworks are written in C/C++, this requirement is not applicable.
3.1.7 Use style guides. Style guides typically cover topics such as code layout, capitalization, comments, and white space. For the Apollo source code, we used a style-checking tool to process the code, which verified that the prescribed coding style is consistently followed.
Observation 8. AD software follows style guides. In particular, Apollo software adopted the Google C++ style guide, and contributors have to validate their code using defined style checkers.
3.1.8 Use naming conventions. According to several coding guidelines, including the Google C++ guidelines, the names of all types, classes, structs, type aliases, enums, and type template parameters should follow the same naming convention. The Apollo code adheres to all these properties.
Observation 9. AD software adheres to naming and style conventions. The Apollo software uses a well-structured implementation, and these coding guidelines are followed.
3.2 Software Unit Testing: CPU Code Coverage
Software unit testing is an important requirement in ISO 26262. In this section, we focus on metrics of code coverage that provide evidence of correct execution by showing that different parts of the program have been sufficiently exercised by the tests.
For the CPU code, we focus on the standard statement, branch, and Modified Condition/Decision Coverage (MC/DC) metrics. In particular, we run several real-scenario tests and use Rapita Systems' RapiCover commercial tool [12] to measure the object detection code coverage in Apollo's Object Detection module.
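To make the difference between the three metrics concrete, the small decision below (an illustrative example, not Apollo code) is annotated with what each metric demands:

```cpp
// Illustrative decision (not Apollo code) annotated per coverage metric.
bool engage_brake(bool obstacle_close, bool driver_override) {
    // Statement coverage: execute every statement at least once.
    // Branch coverage: evaluate the decision below to both true and false.
    // MC/DC: additionally show each condition independently affecting the
    // outcome, e.g. (obstacle_close, driver_override) = (T,F) vs (F,F)
    // isolates obstacle_close, and (T,F) vs (T,T) isolates driver_override.
    if (obstacle_close && !driver_override) {
        return true;  // command emergency braking
    }
    return false;
}
```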
In Figure 5, the X-axis lists the files in the different modules of YOLO. Each file covers all the functions implemented in it. In our experiments, we excluded all functions that were never called. Despite that, statement, branch, and MC/DC coverage remain low: average coverage is 83%, 75%, and 61% for statement, branch, and MC/DC respectively, and as low as 19%, 37%, and 10% respectively for individual files.
While ISO 26262 does not specify a particular coverage figure, its parent standard, IEC 61508 (Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems), recommends 100% coverage for all metrics. In ISO 26262, either branch or statement coverage is highly recommended ('++') for all ASILs.
Observation 10. Code coverage for AD software is low with available tests. Thus, additional test cases are required to reach much higher coverage (preferably 100% coverage).
3.3 Software Unit Testing: GPU Code Coverage
For GPU code we make the following observations:
Observation 11. Tool support in the real-time domain to measure code coverage of GPU code is very limited.
To our knowledge, no qualified code coverage tool exists to analyze GPU code coverage. Since the GPU code of AD software is as critical as the CPU code (ASIL-D fail-operational for fully autonomous cars) and hence requires undergoing the same analysis and testing, automated tools to measure GPU code coverage are needed.
In order to provide some GPU code coverage numbers, we modified the code so that it runs on the CPU by emulating the CUDA API. While this approach is not applicable for safety certification, since one of the arguments is that code coverage needs to be measured on a representative target and compiler, we use it to get some figures on GPU code coverage. In particular, we used cuda4cpu [1] and applied it to 2D and 3D stencil computation GPU kernels, as representatives of open-source code. On the resulting code, we applied code coverage tools, obtaining the branch and statement coverage figures shown in Figure 6. The reported values show that full code coverage is achieved neither for statements nor for branches.
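The idea behind such emulation can be sketched as follows; this is a simplified illustration of the general approach, not cuda4cpu's actual mechanism:

```cpp
// Simplified illustration of CPU emulation of the CUDA allocation API
// (not cuda4cpu's actual implementation). With kernels compiled as plain
// host functions, standard host coverage tools can instrument the code.
#include <cstdlib>

using cudaError_t = int;
constexpr cudaError_t cudaSuccess = 0;
constexpr cudaError_t cudaErrorMemoryAllocation = 2;

cudaError_t cudaMalloc(void **ptr, size_t bytes) {  // host stand-in
    *ptr = std::malloc(bytes);
    return *ptr ? cudaSuccess : cudaErrorMemoryAllocation;
}

cudaError_t cudaFree(void *ptr) {                   // host stand-in
    std::free(ptr);
    return cudaSuccess;
}
```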
3.3.1 Other challenges. Other elements that can hamper carrying out testing in an efficient manner include the use of closed-source libraries.
Observation 12. Heterogeneous AD software makes extensive use of performance-optimized closed-source CUDA libraries, which hampers assessing compliance against ISO 26262.
As can be seen in Figure 2, AD software, or the high-level DNN-related libraries it uses, makes use of low-level libraries optimized to run certain functions on the GPU. The latter, for competitiveness reasons, are only offered in closed-source form by the chip vendor. These libraries include:
- cuBLAS. Implementation of BLAS (Basic Linear Algebra Subprograms) on top of the NVIDIA CUDA runtime.
- TensorRT. NVIDIA TensorRT high-performance deep learning inference optimizer and runtime that delivers low latency and high-throughput for deep learning inference.
Regarding certification, having closed-source libraries requires their owners to go through the certification process and adapt their libraries to fit ISO 26262 requirements. While in theory library users can resort to black-box testing, this, however, has severe implications for ISO 26262 applicability and, in its current form, is not satisfactory.
Alternatives include the adoption by suppliers of a 'safety culture' based on openness, or having library users deploy state-of-the-art open-source libraries. In this work, we advocate for the latter option. However, in order to make it attractive, the open-source libraries used must provide competitive performance with respect to well-known closed-source libraries.
Case Study. In AD software, including Apollo, the perception module is the main module that extensively uses DNN-related libraries. Object detection algorithms can be implemented using either cuBLAS or cuDNN closed-source libraries. The latter library is specifically designed for deep neural networks and can use NVIDIA’s tensor cores. Therefore, depending on the capabilities of the target architecture on which Apollo will run, one of these two libraries can be selected by the programmer.
For the sake of generality, we take both implementations into account. Then, we implement and run Apollo’s object detection module using NVIDIA’s CUTLASS [9], an open-source collection of CUDA C++ template abstractions for implementing high-performance matrix-multiplication, and ISAAC [13], an input-aware auto-tuning framework and code-generator for compute-bound HPC kernels.
As Figure 7 shows, the implementations based on CUTLASS and ISAAC provide competitive performance in comparison to cuBLAS and cuDNN, the libraries used in the baseline. Also note that the same operations run on CPU cores using highly optimized libraries (ATLAS and OpenBLAS) with two orders of magnitude higher execution time, which demonstrates the unsuitability of CPUs for such compute-intensive workloads.
As another set of illustrative examples, we compare implementations of general matrix multiplication (GEMM) kernels, widely used in YOLO, based on cuBLAS and on CUTLASS. When used to construct device-wide GEMM kernels, CUTLASS primitives exhibit performance comparable to cuBLAS for scalar GEMM computations, as Figure 8(a) shows.
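For reference, this is roughly what such a device-wide GEMM invocation looks like through the closed-source cuBLAS interface (sizes and data are placeholders, not taken from Apollo or YOLO):

```cpp
// Minimal cuBLAS SGEMM sketch: C = alpha*A*B + beta*C (column-major).
// Sizes and contents are illustrative placeholders. Link with -lcublas.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main() {
    const int m = 64, n = 64, k = 64;
    std::vector<float> hA(m * k, 1.0f), hB(k * n, 1.0f), hC(m * n, 0.0f);
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, m * k * sizeof(float));
    cudaMalloc((void **)&dB, k * n * sizeof(float));
    cudaMalloc((void **)&dC, m * n * sizeof(float));
    cudaMemcpy(dA, hA.data(), m * k * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), k * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // The library call itself is a black box: its internals cannot be
    // inspected for ISO 26262 compliance, which is the issue discussed above.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, dA, m, dB, k, &beta, dC, m);

    cudaMemcpy(hC.data(), dC, m * n * sizeof(float), cudaMemcpyDeviceToHost);
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```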
Similarly, we compare the performance of convolution kernel implementations using cuDNN and ISAAC. As shown in Figure 8(b), ISAAC provides very competitive performance in comparison with cuDNN across a variety of workloads. We can thus see that open-source libraries can provide comparable performance.
3.4 Software Architectural Design
Software architectural design, early in the product design phase, provides a design that realizes the software safety requirements. Table 2 itemizes particular requirements for architectural design.
Table 2: Software architectural design principles
<table>
<thead>
<tr>
<th>Criticality Level</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
</tr>
</thead>
<tbody>
<tr>
<td>1) Hierarchical structure of SW components</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>2) Restricted size of software components</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>3) Restricted size of interfaces</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>4) High cohesion in each software component</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>5) Restricted coupling between SW components</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>6) Appropriate scheduling properties</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
<tr>
<td>7) Restricted use of interrupts</td>
<td>++</td>
<td>++</td>
<td>++</td>
<td>++</td>
</tr>
</tbody>
</table>
3.4.1 Hierarchical structure of SW components. In ISO 26262, design begins at the top-level components, which are broken down further to reach the lowest units of implementation; for software, the lowest units are the functions. To satisfy this requirement, several commercial and open-source software tools exist that provide a hierarchical structure and the dependencies across software components.
3.4.2 Other architectural design parameters. The rest of the items in Table 2 can be measured using existing tools and metrics and assessed against specific thresholds. For instance, a limit on the size of software components is not explicitly specified in the standard. The main modules of Apollo have from 5k to 60k lines of code. Should a module be required to stay below a maximum size, it can be reorganized or redesigned accordingly.
Observation 13. AD frameworks do not comply with many of the principles for software architectural design defined by ISO 26262 such as the restricted size of components and interfaces. In our view, AD software can be made compliant with these ISO 26262 principles, although with non-negligible effort.
3.5 Software Unit Design and Implementation
Software unit design and implementation build on several guidelines to ensure features such as simplicity, correct order of execution, consistency of interfaces, and data/control flows as presented in Table 3. Our analysis of Apollo’s AD software shows that many of these guidelines are not followed:
Table 3: SW unit design & implementation (ISO26262_6 Table 8)
<table>
<thead>
<tr>
<th>Criticality Level</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
</tr>
</thead>
<tbody>
<tr>
<td>1) One entry and one exit point in functions</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>2) No dynamic objects or variables, or else online test during their creation</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>3) Initialization of variables</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>4) No multiple use of variable names</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>5) Avoid global variables or justify usage</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>6) Limited use of pointers</td>
<td>o</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>7) No implicit type conversions</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>8) No hidden data flow or control flow</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>9) No unconditional jumps</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>10) No recursions</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
</tbody>
</table>
(1) Functions have several entry and exit points (e.g. 41% of the functions in the object detection module); see the sketch after this list.
(2) Most data structures are allocated dynamically. Since the input data parameters, like the size of the networks and images, are unknown statically, the corresponding data structures are allocated dynamically. As discussed in the previous section, CUDA builds on the use of pointers and dynamic memory.
(3) Using static code analysis tools and compiler options, we have identified several variables as uninitialized.
(4) The use of numerous libraries and namespaces complicates verifying that all variables have unique names so as to minimize programming mistakes. However, commercial custom code checkers exist for this purpose.
(5) We identified the use of global variables (e.g. ≈900 in the perception module). Their use could be eliminated by modifying the application; however, according to the standard, justified usage of such variables may be permitted.
(6) The limited use of pointers has been already discussed.
(7) Type conversions have also been discussed in detail.
(8) Hidden data-control flow partially relates to code coverage as already described.
(9) We have observed that AD frameworks use several unconditional jumps. However, by applying minor modifications to the code, they can be eliminated.
(10) We have identified a few uses of recursive functions for well-known purposes such as processing trees. In general, however, recursive code can be transformed into loop-based iterative code.
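As referenced in item (1), the following sketch (illustrative, not Apollo code) shows the kind of refactoring implied by the one-entry/one-exit guideline:

```cpp
// Illustrative refactoring (not Apollo code): multiple early returns
// collapsed into a single exit point, as ISO 26262 Table 8 recommends.
int classify_speed(int speed) {
    int category;            // single, well-defined result variable
    if (speed < 0) {
        category = -1;       // invalid input
    } else if (speed < 50) {
        category = 0;        // urban range
    } else {
        category = 1;        // highway range
    }
    return category;         // one exit point
}
```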
Observation 14. Apollo AD software does not comply with the principles for unit design and implementation. While the code can be modified to meet most of these requirements, some of them require significant modifications and/or specific techniques, such as those presented in [14] for avoiding the use of pointers in GPU code.
4 CONCLUSION
In this paper, we present our experience with the assessment of the safety properties of an advanced, industrial GPU-accelerated AD framework. We have identified complexities and missing features in AD software to adhere to ISO 26262 principles for software (Part 6), and we have proposed different approaches to handle those complexities and missing features. We have highlighted several challenges for certification of GPU code according to the requirements of the ISO 26262 standard. Overall, our analysis contributes to enabling the use of heterogeneous AD software to control high-integrity vehicle functionalities by assessing the gaps between its current implementation and the requirements for its certification.
ACKNOWLEDGMENT
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773). This work has also been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. MINECO partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717) and Leonidas Kosmidis under a Juan de la Cierva-Formacion postdoctoral fellowship (FJCI-2017-34095).
REFERENCES
A MAS metamodel-driven approach to process composition
M. Cossentino$^{1,2}$, S. Gaglio$^{1,3}$, S. Galland$^2$, N. Gaud$^2$, V. Hilaire$^2$, A. Koukam$^2$, and V. Seidita$^3$
$^1$ Istituto di Calcolo e Reti ad Alte Prestazioni, Consiglio Nazionale delle Ricerche, Palermo, Italy
cossentino@pa.icar.cnr.it
$^2$ Systems and Transport Laboratory (SeT) - Belfort, France
stephane.galland,nicolas.gaud,vincent.hilaire,abder.koukam@utbm.fr
$^3$ DINFO - Università degli studi di Palermo, Palermo, Italy
gaglio,seidita@dinfo.unipa.it
Abstract. The construction of ad-hoc design processes is increasingly required today. In this paper we present our approach for the construction of a new design process following the Situational Method Engineering paradigm. We mainly focus on the selection and assembly activities on the basis of what we consider a key element in agent design processes: the MAS metamodel and its elements.
1 Introduction
Multi-agent system metamodels (MMMs hereafter) and the composition of new design processes have received growing attention in the agent community in recent years, for different reasons. As regards MMMs, it is evident that their study has enabled a deeper understanding of the underlying design processes while, at the same time, the growing importance of Model Driven Engineering approaches has required a great effort in the study and modelling of systems on the basis of their metamodels. Besides, the effort spent on techniques, methods and tools for producing the right design process, one meeting specific process requirements (an ad-hoc design process for a specific situation and development context, for solving a specific class of problems), is increasing steadily. In this field, Situational Method Engineering (SME) [1] provides means for constructing ad-hoc Software Engineering Processes (SEPs) following an approach based on the reuse of portions of existing design processes (often called method fragments$^4$). Our work is mainly focused on the use of SME [2–4] for the construction of customized multi-agent oriented design processes.
In this paper, we mainly show the importance of the MMM in the selection of the process fragments that will constitute the new SEP, and we explore a way of constructing the MMM that can guide the selection and assembly phases when a new design process is under construction. This article also reports an experiment of creating a new process (called ASPECS\textsuperscript{5}); this is not a classical toy problem: rather, we are dealing with the construction of a large process for the design of agent-oriented systems. The MAS metamodel of this new process is mainly composed of elements coming from the PASSI\textsuperscript{6} and CRIO\textsuperscript{7} existing design processes, and it supports Janus as an implementation platform for holonic agents.
$^4$ From now on in this paper we will use the term Process Fragment or simply Fragment.
The paper is organized as follows: the next section gives a brief description of the proposed approach. Section 3 lists the requirements from which we started for developing the new process and briefly overviews the resulting ASPECS design process. Section 4 reports on the experiment. Finally, some concluding remarks are provided in Section 5.
2 The Proposed Approach
The proposed design process composition approach shown in Figure 1 is organized in three main phases: process analysis, process design and process deployment.
Process Analysis deals with requirements elicitation and analysis for the process to be developed. It produces a set of elements, mainly a portion of the MMM, affecting the Process Fragments Selection and Assembly activities. Finally, in the Process Deployment phase, the new SEP is instantiated, used to solve a problem, and then evaluated. The results of the evaluation are useful for defining new requirements for the next SEP (if any) or for improving the designed one. It is worth noting that we consider the process of defining a new design process as an iterative and incremental one.
\textsuperscript{5} ASPECS: Agent-oriented Software Process for Engineering Complex Systems.
Process Requirements Analysis is the first activity a method designer undertakes. Its inputs come from the type of problem to solve (the new process has in fact to be tuned for a specific solution strategy for a class of problems) and from the development context, that is, a description of the available resources, such as people, tools, languages, skills, and competencies present in the SEP enactment group.
These inputs are used to define the process life cycle, which establishes the structure the designer has to follow during the process fragments assembly activity, the system metamodel concepts, and the other process elements (available stakeholders, required activities or work products) used for selecting the process fragments from the fragments repository.
The metamodel contains all the concepts and their relationships. It can be used to design and describe the system under study. It is organized in three different domains, each one associated with a phase of the development process. The first domain is dedicated to the analysis and provides concepts to describe the problem independently of a given solution. The second provides concepts for the design of a solution independently of a given implementation. And the last one provides platform-specific concepts. The list of metamodel concepts is used for the process fragments selection from the repository [8][9]. The process fragments are extracted from existing design processes or created from scratch.
The process fragments assembly activity results in the new SEP. This activity consists in putting together the selected process fragments following the structure of the identified process life cycle. It is still one of the most important unsolved points in the SME field, and some proposals have been made in [10][11]. It is a very complex task in which the method designer has to collate all the elements gathered in the previous activities and merge them using his experience and skills.
During Process Deployment, the system designer adopts the new design process, with the help of a CASE tool, for solving a specific problem. After the designed system has been used and experimented with, an evaluation activity takes place in order to evaluate the new design process; the gathered information can be used as a new process requirement for the next iteration.
In section 4 an example of how we apply this process is given. In the construction of the ASPECS process we applied the process described before, with a specific study of the procedure adopted for the definition of the MAS metamodel and the process fragments selection. We regard this part of the work as one of the main scientific contributions of our approach, and therefore we are going to detail it. In this paper the main focus is on the early construction of a core part of the MAS metamodel and then on its use as a guide towards the selection and assembly of fragments. The procedure we defined (Figure 2) starts from the identification of the core part of the MAS metamodel, done by evaluating the contributions that could come from existing design processes or development platforms (in our case PASSI, CRIO, and Janus). In fact, it is logical to expect that people already skilled with the concepts related to some existing process or platform prefer to reuse them rather than build everything from scratch. Parts of those metamodels have been reused in order to satisfy the new process requirements that will be described in the experimental part of the paper (section 4); for now, just to exemplify, it is sufficient to consider that we reused part of the PASSI metamodel because we aimed at obtaining a FIPA-compliant communication structure. In the following subsections we detail the most important steps of this process: the construction of the new MAS metamodel and the new process design phase, where fragments are retrieved from the repository and assembled.
2.1 Core MAS metamodel definition
As already said, in this work we composed the new metamodel from portions of the metamodels coming from PASSI, CRIO and Janus. In doing so, we are aware that defining the core MAS metamodel means defining a relevant part of the 'philosophy' behind the new design process. For this reason we performed this activity during meetings involving stakeholders. We tried to deduce the list of elements from the portions of the cited processes that could satisfy the new process requirements. Of course this was not sufficient, and it was therefore necessary to add new concepts for dealing with the specific case. For instance, a lot of work was done (while building the ASPECS process) on the definition of the holonic structure of an agent society using an organizational approach, as well as on the specification of the possible roles that could be played by agents inside a holon (Head, Representative, Part/Multipart and StandAlone). These are crucial choices that conditioned the entire process, and they were largely debated before adoption. The work of designing the new process based on the defined core metamodel can be represented as a cycle composed of three subphases, as illustrated in Figure 2: (i) prioritization of MAS Metamodel Elements (MMMEs); (ii) identification and assembly of the process fragments defining the MMMEs; (iii) extension of the metamodel until the complete process is defined. The process is detailed in the following algorithm:

//Subphase 1: MAS metamodel elements prioritization
1. Select a metamodel domain (in the order: problem, agency, solution) and consider the resulting metamodel as a graph with nodes (MMMEs) and directed links (relationships)
2. Define List_elements as a list of MMMEs with an associated priority p: List_elements(MMME, p)
   a. p <- 1;
   b. List_elements <- null;
3. Produce a linearization of the MMME nodes according to a topological sort in List_elements; p is the index of each node in the list
//Subphase 2: Assembly of fragments related to the core MAS metamodel
4. Select/build fragments for defining (i.e. instantiating) the selected MMME(s) by doing:
   a. i <- 1;
   b. Selected_el <- List_elements.select(i);
   c. if Selected_el.count > 1 then select one element according to the easiest identifiability of a reusable fragment or new fragment creation
   d. select/build a fragment for element Selected_el.
5. Assemble the fragment into the new process (eventually modifying it if required)
6. Select the next metamodel domain (if any) and repeat from 2
//Subphase 3: MAS Metamodel Extension
7. If the process is not completed (i.e. not all design activities from requirements elicitation to coding, testing and deployment have been defined)
   a. Introduce new MMMEs according to the criteria discussed below
   b. Repeat from 1.
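A minimal sketch of the linearization in step 3 is given below, assuming the metamodel graph is represented as an adjacency map in which an edge goes from an element to the elements that depend on it; all names are illustrative, and this is one possible realization rather than a prescribed implementation:

```cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

// Sketch of step 3: topologically sort MMMEs (Kahn's algorithm) so that each
// element appears after its prerequisites. The index in the result is p.
std::vector<std::string> prioritize(
        const std::map<std::string, std::vector<std::string>> &dependents_of) {
    std::map<std::string, int> in_degree;
    for (const auto &[node, succs] : dependents_of) {
        in_degree.emplace(node, 0);                  // ensure every node is known
        for (const auto &s : succs) in_degree[s]++;  // count prerequisites
    }
    std::queue<std::string> ready;
    for (const auto &[node, deg] : in_degree)
        if (deg == 0) ready.push(node);  // elements with no prerequisites
    std::vector<std::string> order;      // List_elements; position = priority p
    while (!ready.empty()) {
        std::string n = ready.front();
        ready.pop();
        order.push_back(n);
        auto it = dependents_of.find(n);
        if (it == dependents_of.end()) continue;
        for (const auto &s : it->second)
            if (--in_degree[s] == 0) ready.push(s);
    }
    return order;
}
```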
It is worth noting that the previously defined algorithm is based on the following assumptions: (i) MMMEs are organized in three domains: problem, agency, solution. In the first domain we put elements belonging to the model of the problem in terms of requirements; in the agency domain we collect elements defining an agent-based solution to the problem defined in the previous domain; in the solution domain we list elements related to the implementation of the solution on one or more available platforms. (ii) In each process fragment, four different actions can be performed on (one or more) MMMEs: new MMME definition (instantiation), creation of new relationships among MMMEs, existing MMME quotation, and existing relationship quotation. (iii) Each process fragment has a concrete, tangible output in the form of one (or more) work product(s) belonging to the same work product type, such as a structural/behavioral diagram, a text document, or a composite document.
The extension of the core MAS metamodel towards the completion of the process, obtained by composing fragments, is a crucial activity that should be strongly driven by the awareness of the new process requirements and of the relationships between requirements and MMMEs. In extending the initial core metamodel, some other criteria should be considered. First, the opportunity of reusing some existing fragments could lead to the introduction of the MMMEs related to them. This is a kind of bottom-up criterion that privileges the reuse of well-known and tested fragments.
3 The ASPECS process: requirements and results
This section presents the requirement under which the ASPECS process has been developed and the resulting process itself with the aim of short-circuiting the beginning and the end of the process we discussed so far.
3.1 Requirements for the construction of ASPECS
The design of the ASPECS methodology has been constrained by a set of requirements that according to the inputs of the process requirements analysis phase presented in Figure 1, can be classified as follows:
(i) Problem Type: the scope of the new design process was defined to be the development of very large MASs for the solution of complex problems suitable for a hierarchical decomposition.
(ii) Development context: the development of the ASPECS methodology can be seen as a joint work of people coming from two different experiences: people working at the SeT laboratory, who had a strong background in the design and implementation of holonic systems with a strong accent on organizational aspects of MASs (the CRIO process), and one new lab member who was the main author of a process (PASSI, [6]) for the design of MASs in which agents were mostly peers and whose important features were: the use of ontologies, a requirements-driven agent identification, and the adoption of patterns and tools for supporting design/coding activities. Participants in this project soon agreed to preserve some key elements of their backgrounds and skills in order to enable an easier transition to the new design process. As regards agent implementation, the development of a new coding platform, Janus, was under way at the SeT lab, and its adoption in the new design process was, of course, highly desirable.
These requirements concurred to the definition of the process we describe in the next subsection.
3.2 The resulting design process
ASPECS\textsuperscript{6} combines an organizational approach with a holonic perspective. Its target scope can be found in complex systems and especially hierarchical complex systems. The principle of ASPECS consists in analyzing and decomposing the structure of complex systems by means of a hierarchical decomposition. The ASPECS process consists of four phases, briefly described below.
The Analysis phase is based on the identification of a hierarchy of organizations whose global behavior may represent the system under the chosen perspective. It starts with requirements analysis, and requirements are identified using classical techniques such as use cases. The domain knowledge and vocabulary associated with the target application are then collected and explicitly described in the problem ontology. Each requirement is then associated with an organization that represents a global behavior able to fulfill the associated requirements. The context of each organization is defined by a set of concepts of the problem ontology. The organization identification gives rise to a first hierarchy of organizations, which will then be extended and updated during the iterative process to obtain the global organization hierarchy representing the system structure and behaviors. The identified organizations are decomposed into a set of interacting sub-behaviors modeled by roles. The goal of a role is to contribute to the fulfillment of (a part of) the requirements of the organization within which it is defined. In order to design modular and reusable organization models, roles should be specified without making any assumptions about the architecture of the agents that may play them. To meet this objective, the concept of capacity was introduced. A capacity is an abstract description of a know-how, a competence of an agent or a group of agents. A role requires certain skills to define its behavior, and these are modeled as capacities. A capacity can then be invoked in one of the tasks that comprise the behavior of the role. In return, an entity that wants to take up a role should provide a concrete realization for each of the capacities that the role requires. The analysis phase ends with the capacity identification activity, which aims at determining whether a role requires a capacity and then adapting its behavior description. At this step a new iteration may possibly start. If all capacities required by roles at the lowest level of the hierarchy are considered to be manageable by atomic, easy-to-implement entities, the process may stop.
\textsuperscript{6} A complete description of ASPECS may be found at \url{http://set.utbm.fr/index.php?pge=352&lang=fr}
The Agent Society Design phase aims at designing a society of agents whose global behaviour is able to provide an effective solution to the problem described in the previous phase and to satisfy the associated requirements. The objective is now to provide a model of the agent society involved in the solution in terms of social interactions and dependencies among entities (holons and/or agents). Previously identified elements, such as ontology, roles and interactions, are refined. At the end of the design phase, the hierarchical organization structure is mapped to a holarchy (hierarchy of holons) in charge of its execution. Each of the previously identified organizations is instantiated in the form of groups. The corresponding roles are then associated with holons or agents. This last activity also aims at describing the various rules that govern decision making inside composed holons, as well as the holons' dynamics in the system. All of these elements are finally merged to obtain the complete set of holons (composed or not) involved in the solution. In this way, the complete holarchy of the solution is described.
The Implementation phase aims at implementing the agent-oriented solution designed in the previous phase by adapting it to the chosen implementation platform, in our case, Janus. Based on Janus, the implementation phase details activities that allow the description of the solution architecture and the production of associated source code and tests. It also deals with the reuse of previously developed solutions.
The Deployment phase is the final one and aims at detailing how to deploy an application over various Janus kernels. This phase starts with the description of the deployment configuration and details how the previously developed application will be concretely deployed: this includes studying distribution aspects, the holons' physical location(s), and their relationships with external devices and resources, as well as tests.
4 Building ASPECS
In this section we describe the process we adopted for building ASPECS. We report the initially created core metamodel, the definition of the precedence order of the metamodel elements, the selection/assembly of process fragments, and the extension of the metamodel with the consequent selection of new fragments in an iterative process. This process is the instantiation of the general process described in section 2 and complements the theoretical part of this paper with the experiment we performed in composing ASPECS.
4.1 The core metamodel
A part of the initial core metamodel defined for the ASPECS process can be seen in Figures 3(a) and 3(b). It has been composed by reusing the following elements from the above-described metamodels:
- From PASSI: Requirement, Scenario, Ontology, Ontology Element, Concept, Predicate, Action, Agent, Role (renamed AgentRole in Figure 3(b)), Communication and Message.
- From CRIO: Capacity (renamed AbstractCapacity in Figure 3(b)), Role (renamed AbstractRole), Interaction and Organization.
- From Janus: elements from Janus only belong to the implementation model, which has not been reported here for the sake of conciseness.
Some interesting issues raised by the composition of these elements in the new metamodel are explained below:
- Elements coming from CRIO have been integrated in the new metamodel with only minor changes in their definitions, as can be seen in section 4.3 for the Organization and Interaction definitions.
- Two different concepts (Role in PASSI and CRIO) had the same name but different definitions; since the CRIO Role concept is an analysis-level concept whereas the PASSI Role is mostly a design abstraction, we introduced a transformation relationship between them (i.e. the CRIO AbstractRole is transformed into the PASSI AgentRole when moving from the Problem to the Agency domain). Each of the two elements thus remains in the domain it belonged to in its original approach.
- PASSI Requirement is usually related to the agent concept. This represents the fact that in PASSI agents are responsible for satisfying requirements. In the ASPECS design process this responsibility is given to the organization, as it comes from the CRIO design process. Therefore the two concepts have been related and their definitions have been consequently modified.
– The PASSI Agent element, as already discussed, is no longer related to Requirements; it becomes an abstract entity used to give AgentRoles an individuality (shared knowledge and capacity). Holon realizes agents in a concrete way, and it is the Solution-domain abstraction corresponding to the JHolon implementation class supported by Janus.
– Ontology has the same structure as the corresponding PASSI concept, but it is now positioned in the Problem domain. This is the consequence of a precise choice: adopting the ontological exploration of the problem domain as a tool for deepening the understanding of the problem to be solved.
– Capacity has been introduced as an agency-domain abstraction representing what the role is capable of doing. It is related to Service, which is one possible realization of the Capacity. This means that several different services can implement the same Capacity. This structure required a change in the original PASSI specification of Service, although its general meaning is maintained.
From these and other similar considerations we built the core metamodel for the ASPECS process. It was neither an easy nor a short activity: it was performed during several meetings, involving debates with other people not directly belonging to the team of ASPECS developers but involved in previous experiences of using agent-oriented methodologies and related platforms.
In the next subsection we discuss the prioritization of the MMMEs representing the order we expect to instantiate these elements in the fragments that will compose the new design process.
4.2 Prioritization of MAS metamodel elements
The priority order of the MMMEs was defined by applying the already discussed heuristics; the resulting list is: (i) Requirement, (ii) Ontology and all the related elements: Concept, Action, Predicate, (iii) Organization, (iv) AbstractRole and Interaction, (v) Scenario, (vi) RolePlan and RoleTask (not reported in Figure 3(b) because of space concerns) (vii) AbstractCapacity.
This list covers all the elements of the Problem domain, and the choices made reflect both the ASPECS design process requirements and some new decisions: Requirement, Ontology and Scenario were the sinks of the graph. Requirement is the first element because of the idea of following a PASSI-like requirements identification phase. Ontology soon follows, since we aimed at using the ontological exploration of the domain as a tool for deepening the understanding of the problem. Scenario is positioned later because of its specific meaning: we suppose that textual descriptions of user-system interaction stories (sometimes known as scenarios) are provided as an input to our design process. When we talk about a scenario here, we mean a formalization of these textual descriptions, including a detailed list of the roles (AbstractRoles) involved in the Interaction(s). Obviously this new and formal description can be done only after the definition of AbstractRole and Interaction. Organization is positioned early in the list since we aim
at maintaining the PASSI philosophy of an initial agentification of requirements. AbstractRole and Interaction are positioned soon after Organization. RolePlan and RoleTask will not be discussed here because of space concerns but their positioning in the list is again a consequence of the adopted heuristics. Finally AbstractCapacity is introduced since its inputs are now satisfied. Similarly we obtained a priority order list for the MMMEs elements of the following domains (Agency and Solution).
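To make the heuristics concrete: their core effect is that every MMME is instantiated only after the elements it depends on have been defined, which amounts to a topological sort of the dependency graph (the actual heuristics also weigh additional criteria, such as the PASSI-like ordering of phases). Below is a minimal Java sketch of such an ordering; the class and method names are illustrative and are not part of ASPECS, and the sketch assumes an acyclic dependency graph.

```java
import java.util.*;

// Illustrative only: orders MAS metamodel elements so that every element
// appears after the elements it depends on (a plain topological sort).
public class MmmePrioritizer {

    // dependsOn.get("Scenario") -> ["AbstractRole", "Interaction"], etc.
    public static List<String> prioritize(Map<String, List<String>> dependsOn) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String element : dependsOn.keySet()) {
            visit(element, dependsOn, visited, order);
        }
        return order;
    }

    private static void visit(String element, Map<String, List<String>> dependsOn,
                              Set<String> visited, List<String> order) {
        if (!visited.add(element)) {
            return; // already placed
        }
        for (String dep : dependsOn.getOrDefault(element, List.of())) {
            visit(dep, dependsOn, visited, order); // dependencies first
        }
        order.add(element);
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("Requirement", List.of());
        deps.put("Ontology", List.of());
        deps.put("Organization", List.of("Requirement"));
        deps.put("AbstractRole", List.of("Organization"));
        deps.put("Interaction", List.of("AbstractRole"));
        deps.put("Scenario", List.of("AbstractRole", "Interaction"));
        // Prints e.g. [Requirement, Ontology, Organization, AbstractRole, Interaction, Scenario]
        System.out.println(prioritize(deps));
    }
}
```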
After this step we are ready to start with the selection of fragments from the repository or the construction of new ones in order to define the elements according to the prescribed order. This process will be discussed in the next subsection.
4.3 Definition of an initial draft of the process
In performing the fragment selection activity, we refer to our repository of fragments [8]; it includes fragments extracted from PASSI, Agile PASSI, TROPOS, and Adelfe. For the presented experiment we used only fragments coming from PASSI. Since several MMMEs required by this novel approach (for instance Holon) are not present in the repository, we expected to produce several new process fragments, hoping to reuse and modify some existing ones where possible.
According to the previously discussed list of MMMEs, the first process fragment is supposed to draw a model of system requirements starting from textual usage scenarios. This is exactly what the first fragment of PASSI (Domain Requirements Description) does, and it was thus reused. The definition of the Ontology is again done in an existing PASSI fragment, which was reused as well. The next MMME to be defined is Organization. In this fragment we aim at creating a relationship between each organization and the requirements it is responsible for. This is very close to the work done in the PASSI Agent Identification fragment, which can therefore be easily adapted to cope with this new situation. The resulting fragment is labelled Organization Identification.
The next fragment is devoted to defining instances of AbstractRole and Interaction. These two elements are defined in a static view and then used to depict the dynamics of the system in the following view (where Scenario is designed). The resulting fragments are Interaction and Role Identification (a newly defined fragment adopting a class diagram where elements are instantiated) and Scenario Description (the reused PASSI Role Identification fragment, a series of sequence diagrams describing role interactions during scenarios). RolePlan and RoleTask are defined in a RolePlan fragment obtained by reusing the PASSI Task Specification diagram. Finally, Capacity Identification is reused from the CRIO process and adopts a static view to define capacities and relate them to organizations and roles.
In a similar way we defined the remaining part of the process. In this discussion we omitted the details of each fragment, as well as the difficulties found in defining the new fragments and in modifying the reused ones to cope with the new specific issues. This choice follows from the specific scope of this paper, which is concerned with demonstrating the role of the MAS metamodel elements as a guide towards the definition of the new design process, rather than with other aspects of this work. In the next section we discuss some examples of extension of the initial core MAS metamodel, made in order to refine the initial sketch of the process.
4.4 Completion of the process and extension of the core metamodel
We view the construction of a new design process as an iterative-incremental activity that can be decomposed into the following steps: (i) construction of a process stub (including several fragments, for instance up to the size of a phase); (ii) test of the process portion; (iii) evaluation of results; (iv) planning of the next iteration in terms of new process requirements to be addressed, changes to be made in the existing process stub, and new parts of the metamodel to be included in the process.
In the case of the ASPECS methodology, we performed the first significant test activity after completing the System Requirements phase. This test consisted of using the new design process stub to design a couple of simple applications. This allowed us to become familiar with the process and to appreciate its qualities. We proposed only one minor change: the explicit introduction of non-functional requirements in the early stages of the process (which implied an extension of the metamodel). After that, according to the four-step process discussed at the beginning of this subsection, we designed a new portion of the metamodel, more specifically the core part of the Agency domain metamodel (see Figure 3(b)). We will not detail the work done on this part of the process; we only discuss one interesting point: the extension of the initially defined core metamodel of Figure 3(b) to cope with some new process requirements identified during the iteration. After completing this portion of the metamodel, the corresponding process stub and the included process fragments, we started performing some tests, during which we realized that the new process had some limits: it was not possible to represent non-FIPA-compliant agent interactions (for instance, environment-mediated ones). These had not been initially listed among the new process requirements, but they were already supported by the Janus platform and had sometimes been used in previous projects developed in the lab. Another issue arose from the observation that it was not possible to design simple (non-holonic) agents like the conventional PASSI ones. This was seen as a limit, since it constrains the possibility of integrating, in the same design, complex holonic hierarchies with simple agents devoted to minor parts of the problem. In order to solve these issues we changed and extended the core metamodel by introducing the Conversation and AtomicAgent MMMEs.
The extended metamodel has been fully realized by a set of fragments, and the process stub was then tested and evaluated as already described. The work continued in an iterative way until the complete process was defined and thoroughly tested.
5 Conclusion
Based on Situational Method Engineering, this paper reports an experiment in the creation of a new process called ASPECS. The proposed approach starts from the identification of the new process requirements in terms of development context and problem type. These requirements are used for defining an initial core version of the MAS metamodel. The elements of this metamodel are then ordered in a precedence list, and in this order the fragments able to deal with them are retrieved from the repository and assembled into the new process. The resulting MAS metamodel of ASPECS [5] is mainly composed of elements coming from the existing PASSI [6] and CRIO [7] design processes and supports Janus as an implementation platform for holonic agents. In previous works applying SME, the method engineer usually selects the set of process fragments that he considers best fitted to a particular situation and then modifies or adapts them according to his own experience. In contrast to these approaches, the approach described in this paper aims at being as independent as possible of the designer's skills by providing a set of reusable guidelines for fragment selection and assembly.
References
Dynamic Data Access Object Design Pattern
(CECIIS 2008)
Zdravko Roško, Mario Konecki
Faculty of Organization and Informatics
University of Zagreb
Pavlinska 2, 42000 Varazdin, Croatia
zrosko@yahoo.com, mario_konecki@yahoo.com
Abstract. A business logic layer accessing data from any data source (databases, web services, legacy systems, flat files, ERPs, EJBs, CORBA services, and so forth) can use the Dynamic Data Access Object, which implements the Strategy [1] design pattern and hides most of the complexity from the application programmer by encapsulating the dynamic behavior in a base data access class. By using the data source meta data, it automates most of the data access functionality within the application. The application programmer needs only to implement specific "finder" functions, while functions such as "create, store, remove, find, removeAll, storeAll, createAll, findAll" are implemented by the Dynamic Data Access Object base class for each specific data source type.
Currently there are many object-relational mapping products, such as Hibernate, iBatis, EJB CMP containers, and TopLink, which are used to bridge objects and relational databases. Much of the time this approach adds one more unnecessary layer of complexity. The Dynamic Data Access Object helps application programmers avoid object-relational mapping when there is no benefit in using it: it implements "pass-through" behavior at the data access layer instead of object-relational mapping.
Keywords. Strategy, Data Access Object, Sovereign Value Object, EJB, Value List Handler, Result Set, Object Relational Mapping, Caching, Meta Data, Transaction, Connection, Business Object, Facade.
1 Introduction
This paper presents a pattern that helps to design the data access layer for any data source (not just relational), such as CICS, JMS/MQ, iSeries, SAP, Web Services, and so forth. The Dynamic Data Access Object (DDAO) is an implementation of the Strategy design pattern [1], which defines a family of algorithms, encapsulates each one, and makes them interchangeable through an interface. With many options available (EJB, object-relational mapping, POJO, J2EE DAO, etc.) for accessing a data source, including persistent storage, legacy data and any other data source, the main question for development is: what should be used to bridge the business logic layer and the data from a data source? Assuming that the data access code is not coded directly into the business logic layer (Entity Bean, Session Bean, Servlet, JSP helper class, POJO), to avoid tight coupling to the data source, the DDAO is an option to use when one needs to avoid XML configuration, complex environment setup, unnecessary code redundancy, and poor performance.
If the data source changes during the lifetime of an application, there is no need to change the business logic layer code, only to introduce a new DDAO class for the specific data source, which gets attached to the business entity object (BO). Having separated the DDAO and the BO makes reuse of the DDAO possible by other parts of the business logic layer of an application.
The DDAO also simplifies code development, unit testing, integration testing, and the simultaneous access, from one BO, of multiple types of data sources such as relational, legacy and so forth.
2 Example
Figure 1. shows the interface which is used by the BO instead of a specific DDAO. The interface is implemented by each DDAO implementation for a specific BO, and represents the implementation of the Strategy design pattern.
Figure 2. shows an implementation of the DDAO for a JDBC data source. All generic functionality, such as INSERT, DELETE, and UPDATE, is implemented at the base class level (J2EEDataAccessObjectJdbc), while an application's DDAO implements only the finder methods for handling specific SQL queries.
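Since Figures 1 and 2 are not reproduced here, the following hypothetical sketch illustrates the structure they describe: a per-BO DAO interface and a JDBC implementation that inherits the generic operations from the base class and adds only the finder methods. The sketch assumes the classes named later in the paper (J2EEValueObject, J2EEDataAccessException, J2EEDataAccessObjectJdbc) are available; the accessor key.set(...) is our assumption, not the paper's API.

```java
// Strategy interface used by the BankAccount business object (cf. Figure 1).
public interface BankAccountDao {
    J2EEValueObject daoCreate(J2EEValueObject value) throws J2EEDataAccessException;
    J2EEValueObject daoFind(J2EEValueObject key) throws J2EEDataAccessException;
    // Application-specific finder: the only method a programmer must write.
    J2EEValueObject findByOwnerName(String ownerName) throws J2EEDataAccessException;
}

// JDBC strategy (cf. Figure 2): INSERT/DELETE/UPDATE come from the base
// class; only the specific queries are implemented here.
public class BankAccountJdbc extends J2EEDataAccessObjectJdbc implements BankAccountDao {

    @Override
    public J2EEValueObject findByOwnerName(String ownerName) throws J2EEDataAccessException {
        J2EEValueObject key = new J2EEValueObject();
        key.set("OWNER_NAME", ownerName); // illustrative accessor, assumed
        return daoFind(key);              // generic find from the base class
    }
}
```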
3 Context
To make an application flexible, the creation of the DDAO is done outside the BO. The DDAO abstracts the business logic from the knowledge of where the data resides, how to access the data, where to cache the data, etc. Figure 3. shows the context where the BO uses the DDAO for data manipulation purposes. A typical Java application uses data entered by the client or data retrieved from the data source. Data is transferred from the model to the business object in the form of a value object [2]. From the presentation layer the data is transferred by using a business logic proxy which encapsulates the communication with the business logic layer. The business logic layer need not reside in a separate CPU process context, which means that it can run within the same process as the client presentation layer. Communication from the proxy to the logical server component is done by the J2EETransport component (the subject of a future paper by the authors). The value object implements the Command design pattern, which is used to invoke SampleBankComponentFacade from the transport layer. When the BO gets called, it can use the DDAO to do all data source related logic and return the data back to the calling client.
4 Problem
Hibernate, iBatis and other available implementations of object-relational mapping or the Data Access Object (DAO) pattern rely more on programmers to produce the data access code than on the data source meta data for SQL statement generation (or on other meta data sources when a non-JDBC data source is used). The DDAO does not require complex configuration files at all, except one for each data source destination to identify the host, port, user and a couple of other parameters. The DDAO has a default implementation for each specific data source type such as CICS, JMS, iSeries, etc.
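To illustrate how meta data can replace hand-written mapping code, the sketch below builds an INSERT statement directly from standard JDBC DatabaseMetaData. It is a simplified stand-in for the statement generation the DDAO base class performs, not the paper's actual implementation; the class name is ours.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class InsertSqlBuilder {

    // Builds "INSERT INTO t (c1, c2, ...) VALUES (?, ?, ...)" from the
    // table's own meta data, so no per-table mapping code is required.
    public static String buildInsert(Connection con, String table) throws SQLException {
        DatabaseMetaData md = con.getMetaData();
        List<String> columns = new ArrayList<>();
        try (ResultSet rs = md.getColumns(null, null, table, "%")) {
            while (rs.next()) {
                columns.add(rs.getString("COLUMN_NAME"));
            }
        }
        if (columns.isEmpty()) {
            throw new SQLException("No columns found for table " + table);
        }
        String placeholders = String.join(", ", Collections.nCopies(columns.size(), "?"));
        return "INSERT INTO " + table + " (" + String.join(", ", columns)
                + ") VALUES (" + placeholders + ")";
    }
}
```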
If we build an application which requires simple and fast access to the data sources, and which uses data source capabilities such as stored procedures, CICS programs, or JMS programmed logic, we do not need object-relational mapping; instead we need a mechanism to handle access, connections, transactions, and caching for each of the specific data source types in a unique and manageable way.
The DDAO architecture enables transaction management for more than one transaction destination (RDBMS, CICS, JMS, etc.) at the same time during a client's logical unit of work. It is possible to build a transaction using the DDAO which uses CICS, JMS and JDBC data sources in the same logical unit of work.
During analysis you usually define the data model of the application in third normal form. However, the entire system performs poorly if you also use third normal form as the physical table layout [6]. Using EJB bean-managed persistence or other types of object-relational mapping limits the tuning activities needed to achieve acceptable performance.
Most DAO implementations require the setup of a complex development environment, including containers and other specific configuration, depending on the DAO implementation being used.
5 Solution
The DDAO accesses the data source while the BO is not coupled to a specific vendor implementation or API. The DDAO contains all the source code a developer needs to change in case the data source vendor or API changes. The BO stays the same, not depending on a specific data source.
The DDAO has the following functions:
- Contains the logic to access a data source
- Uses a connection from a connection pool
- Caches the data access statements (e.g., SQL)
- Manages cached data from the data source
- Converts data from the data source to a specific format such as Sovereign Value Object (SVO) [2], Service Data Object (SDO) [3], Data Transfer Object (DTO) [4], etc.
- Validates the field lengths and types for fields that are candidates for being stored to a data source, by using its meta data.
- Enables transaction management (flat transactions only) for JDBC and non-JDBC data sources (JMS, CICS, etc.). The DDAO is defined within a transaction context which transparently handles the transaction, leaving the application programmer free from the transaction handling complexity. The DDAO can handle JTA, JDBC, JMS, CICS, and other transaction types during one logical unit of work.
- Simplifies the setup of the application development environment. DDAO development does not require any kind of container or other specific environment setup. Once a DDAO is developed and unit tested, it can be deployed to any thread-safe environment such as a servlet container, an EJB container, a Java application or a Java applet (not recommended).
Figure 4 illustrates the DDAO layer. The DDAO layer manages reading, writing, updating and deleting data at the data source, whether the data is stored persistently or created by the data source at run time.
Figure 4. DDAO layer perspective
While deciding which persistence method to use, we can consider what needs to happen at the data access object layer. Grading each persistence method according to how well it achieves the goals, we can use the following rating system [7]:
- High (best rating): Gets high marks toward achieving the stated goal
- Medium (middle rating): Moderately achieves the stated goal
- Low (lowest rating): Does not achieve the stated goal very well
Figure 5. lists the goals and ratings of DDAO and several others persistence methods.
| Goal | JDBC | EJB/BMP | EJB/CMP | O/R | DDAO |
|---|---|---|---|---|---|
| Minimize learning curve | High | Low | Low | Med | High |
| Minimize code and configuration files written and maintained | Low | Low | Low | Med | Med |
| Maximize ability to tune | High | Med | Low | Low | High |
| Minimize development effort | High | Low | Low | Med | High |
| Maximize code portability | Med | Med | High | Med | Med |
| Minimize vendor reliance | High | Med | Med | Low | High |
| Maximize availability and failover | Low | High | High | Low | Med |
| Manageable via JTA | High | Low | Low | Med | High |
| Handles transactions other than JTA | Low | Low | Low | Low | Low |
| Accesses data sources other than JDBC | Low | Low | Low | Low | High |
Figure 5. Ratings of data persistence methods
6 Structure
Figure 6. shows the class diagram for the DDAO pattern.
DDAO includes the following classes:
- J2EEDataAccessObjectFactory is a class that is responsible for creating and returning DataAccessObject instances for business objects. It holds a mapping that each BO queries to get the Data Access Object that handles its persistence. The default mapping, assumed by naming convention, does not require any programmer action or any entry in the mapping configuration file; only when the data access object for a particular BO has a non-default name does an entry need to be added to the configuration file. The J2EEDataAccessObjectFactory uses the business object name to choose a specific class that subclasses the DataAccessObject and serves as the DDAO for the BO.
- J2EEDataAccessObject is a common abstract class for all Data Access Objects. The standard access methods daoStore, daoLoad, daoCreate, daoFind, daoRemove, daoRemoveAll, daoLoadNext, etc., are defined within this class. The DDAO method naming and class structure are, by design, ready to be converted to an EJB bean-managed persistence class, in case a specific application requires it.
- J2EEDataAccessObjectJdbc is the JDBC implementation of J2EEDataAccessObject which defines all its base class methods specific for the JDBC data source.
- J2EEDataAccessException is the data access layer's unchecked exception. Unchecked exceptions may be thrown from any method without being specified as part of the method's public interface (and a handler need not be present) [5]. It is recommended that all DDAOs inheriting from J2EEDataAccessObject catch the other types of exceptions and throw a J2EEDataAccessException. The business logic layer does not need to handle any other type of exception while accessing the data access layer. The J2EEBusinessObject asks the J2EEDataAccessObjectFactory class for a J2EEDataAccessObject instance to which it delegates persistence operations.
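A minimal sketch of the naming-convention lookup just described, under the assumption that the convention is the BO name plus a data source suffix (e.g. "BankAccount" resolves to "BankAccountJdbc"). The reflection-based resolution shown here is our guess at one way such a factory might work, not the paper's code; all names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a J2EEDataAccessObjectFactory-style lookup:
// resolve a DAO class by naming convention unless explicitly configured.
public class DaoFactorySketch {

    private final Map<String, String> explicitMapping = new ConcurrentHashMap<>();
    private final String defaultSuffix = "Jdbc";

    // Corresponds to an entry in the mapping configuration file.
    public void configure(String boName, String daoClassName) {
        explicitMapping.put(boName, daoClassName);
    }

    public Object createDao(String boName) {
        String daoClass = explicitMapping.getOrDefault(boName, boName + defaultSuffix);
        try {
            return Class.forName(daoClass).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("No DAO found for " + boName, e);
        }
    }
}
```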
Figure 7. shows the DDAO data source access classes for caching, connection management and configuration properties access. Each DDAO inherits from the specific data source class, which is connected to its J2EEPersistenceService and has its own configuration file attached. For example, if there are two JDBC data sources and one JMS data source to be accessed, there need to exist three implementations which inherit from J2EEPersistenceService and have just a simple getConnection method implemented, plus a configuration file for each of them.
7 Dynamics
Figure 8. shows the message flow between the objects at the data access layer. The BO is a pass-through entity object which is instantiated only once and used by all clients in a thread-safe way while the business logic layer is up and running. The BO does not keep state, even though this option exists for specific needs (not explained here); a BO can keep state by using the thread-specific storage design pattern [8]. The BO asks the J2EEDataAccessObjectFactory to create the Data Access Object and uses it to read and write the data to the data source. Methods such as daoStore, daoCreate, daoRemove, daoLoad and daoFind are implemented by the base J2EEDataAccessObjectJdbc class, while the remaining query methods are implemented by the concrete class (BankAccountJdbc). BankAccountBO does not access the BankAccountJdbc class directly but through its BankAccountDao interface, which makes the DDAO replaceable by another data access class in case the data source changes.
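The flow just described can be summarized in code. The sketch below is hypothetical and only mirrors the sequence of Figure 8 (stateless BO, DAO obtained from a factory, all persistence delegated through the interface); it reuses the illustrative BankAccountDao interface and DaoFactorySketch from the earlier sketches.

```java
// Hypothetical pass-through business object mirroring Figure 8.
public class BankAccountBO {

    // One DAO instance, shared by all clients in a thread-safe way.
    private final BankAccountDao dao;

    public BankAccountBO(DaoFactorySketch factory) {
        this.dao = (BankAccountDao) factory.createDao("BankAccount");
    }

    public J2EEValueObject openAccount(J2EEValueObject accountData) throws J2EEDataAccessException {
        // No JDBC types appear here: the BO only talks to the interface,
        // so the data source can change without touching this class.
        return dao.daoCreate(accountData);
    }
}
```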
8 Implementation
Figure 9. shows a part of the source code from the J2EEDataAccessObjectJdbc class. The methods daoCreate and daoCreateMany are inherited by application-specific DDAOs. daoCreate gets a JDBC connection from the connection pool for its target data source, prepares data specific to an environment or locale (prepareData), prepares (just once per connection session) or retrieves from the local cache the INSERT statement (prepareSqlCreate), checks the field lengths (validateFieldLength) using the meta data column size information, and maps the client data from the J2EEValueObject [2] to the table columns before executing the INSERT SQL statement.
```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Vector;

class J2EEDataAccessObjectJdbc extends J2EEDataAccessObject {
    ...
    public J2EEValueObject daoCreate(J2EEValueObject value) throws J2EEDataAccessException {
        J2EEValueObject res = new J2EEValueObject();
        J2EEConnectionJDBC co = null;
        try {
            // Prepare environment/locale specific data for the target table.
            prepareData(value, getTableName());
            // Pooled connection for the target data source.
            co = getConnection();
            Connection jco = co.getJdbcConnection();
            // Build (or retrieve from the statement cache) the INSERT statement.
            String sql = prepareSqlCreate(value, getTableName());
            PreparedStatement pstmt = jco.prepareStatement(sql);
            // Column meta data drives validation and value-to-column mapping.
            J2EEResultSet columnsInfo = co.getColsInfo(getTableName());
            Vector columns = columnsInfo.getRows();
            validateFieldLength(columnsInfo, value);
            Hashtable hashKeys = co.getPrimaryKeysInHashtable(getTableName());
            int i = 0;
            for (Enumeration e = columns.elements(); e.hasMoreElements(); i++) {
                // Each row of the meta data result set describes one column.
                J2EEValueObject row = (J2EEValueObject) e.nextElement();
                String tableColumn = co.getColumnName(row);
                // ... bind the value for this column and execute the statement
                .......
            }
        } catch (SQLException e) {
            throw new J2EEDataAccessException(e);
        }
        return res;
    }
}
```
9 Applicability
The DDAO can be used when:
- Accessing CICS, JMS, Web Services, CORBA services and other data sources. The DDAO defines a common class for each of these data source types, which inherits from J2EEDataAccessObject and handles the connection and transaction logic for the specific data source. By inheriting the same base class as other DDAO classes, it is possible to switch data sources when they change, and the application programmer is freed from concerns about connection pooling and transaction logic.
- Building an application for multiple deployment options (pure Java application, applet, or an application server component within a Servlet or EJB container).
- Using Eclipse or other IDE tools to develop data access logic without the need to have a container configured, in a pure JDK environment.
- Building a complex enterprise-size application that accesses legacy, RDBMS, and other data sources, implementing a single logical unit of work combined from many different data source types.
10 Variants
The DDAO method assumes one or more DDAOs per business entity object. If one BO needs to access more than one data source destination, one needs to create as many DDAOs and configure the J2EEDataAccessObjectFactory to handle this situation. Multiple DDAOs are kept within a BO as a list and accessed accordingly when needed by the BO. When CICS programs are called from a DDAO, COBOL copy books are used to produce the meta data used by the DDAO to handle calls to the CICS programs. Similarly, when accessing iSeries programs from a DDAO, C include header files are used to generate the meta data used by the iSeries DDAO to access the C programs.
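One way to realize the "list of DDAOs within a BO" variant is to key the DAOs by destination. This is our illustration, not the paper's code; it reuses the illustrative BankAccountDao interface, and the J2EEDataAccessException constructor taking an exception is assumed from the Figure 9 excerpt.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative variant: one BO holding several DAOs, keyed by destination.
public class MultiSourceBO {

    private final Map<String, BankAccountDao> daosByDestination = new HashMap<>();

    public void attach(String destination, BankAccountDao dao) {
        daosByDestination.put(destination, dao);
    }

    public J2EEValueObject createAt(String destination, J2EEValueObject value)
            throws J2EEDataAccessException {
        BankAccountDao dao = daosByDestination.get(destination);
        if (dao == null) {
            throw new J2EEDataAccessException(
                new IllegalArgumentException("No DAO for destination " + destination));
        }
        return dao.daoCreate(value);
    }
}
```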
11 Consequences
The DDAO is a very lightweight persistence solution which eases the development of data-driven applications by abstracting the low-level details involved in data source communication (loading drivers, managing connections, managing transactions, etc.) as well as providing higher-level capabilities (data type conversion management, support for static and dynamic queries or program calls, mapping attributes to columns or other data source types, etc.). The DDAO includes a code generation tool which helps to generate a specific DDAO by using data source meta data (JDBC meta data, PDML, COBOL copy books, etc.).
13 Related patterns
Figure 11. shows the message flow between the objects at the data access layer.

Transaction management, while using the DDAO, is done up front at the start of the business logic layer, using the current thread identification as a technique for separating client transaction contexts. There is no need to transfer a connection or transaction object from the business logic layer down to the data access layer as a parameter. All connections used by the dynamic data access objects are kept within the transaction object for the current client invocation. At the end of the logical unit of work, the transaction, which may include JDBC, JMS, CICS, etc., is closed by issuing a commit or rollback command outside of the DDAO implementation.
It is possible to use the transaction commands directly inside the DDAO implementation, but it rarely makes sense. CICS, JMS and other data source connection and transaction handling is the subject of a future paper by the authors. The DDAO supports JTA, EJB, or its own flat transaction service. The DDAO's own transaction service does not support chained or nested transaction types. When using the DDAO transaction service, it is up to the user to decide whether some part of the logical unit of work needs to issue a transaction commit or rollback command while inside an existing transaction; in that case, the existing transaction needs to be closed before a new transaction is opened. The DDAO uses a Value List Handler [4] to return subsets of the result to the client as requested.
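A minimal sketch of such a thread-scoped transaction context, covering only the JDBC case (the paper's context also spans JMS and CICS). The class and method names are ours, under the assumption that each DDAO enlists the connections it uses and that commit/rollback is issued once, outside the DDAOs, at the end of the client's logical unit of work.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Illustrative thread-scoped transaction context: the current thread
// identifies the client's logical unit of work.
public final class TransactionContext {

    private static final ThreadLocal<List<Connection>> ENLISTED =
            ThreadLocal.withInitial(ArrayList::new);

    // Called by each DDAO when it obtains a connection.
    public static void enlist(Connection con) {
        ENLISTED.get().add(con);
    }

    // Called once at the end of the logical unit of work, outside any DDAO.
    public static void commitAll() throws SQLException {
        try {
            for (Connection con : ENLISTED.get()) {
                con.commit(); // a failure here leaves later connections uncommitted
            }
        } finally {
            ENLISTED.remove();
        }
    }

    public static void rollbackAll() throws SQLException {
        try {
            for (Connection con : ENLISTED.get()) {
                con.rollback();
            }
        } finally {
            ENLISTED.remove();
        }
    }
}
```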
References
TMSS Parsing Test
Equipment Organizational Maintenance Manual Set
MIL-M-83495A Schematic Diagram DTD
MAXIMA Corporation
DISA Contract #DCA100-93-D-0065
Quick Short Test Report
20 May 1994
Prepared By
Air Force CALS Test Bed
Wright-Patterson AFB, OH 45433
AFCTB Contact
Gary Lammers
(513) 427-2295
AFCTN Contact
Mel Lammers
(513) 427-2295
DISCLAIMER
This document was prepared as an account of the work sponsored by the Air Force. Neither the United States Government, the Air Force, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, nor represents that its use would not infringe on privately owned rights. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the Air Force. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the Air Force, and shall not be used for advertising or product endorsement purposes.
Available to the public from the
National Technical Information Service
U.S. Department of Commerce
5285 Port Royal Road
Springfield, VA 22161
This report and those involved in its preparation do not endorse any product, process, or company stated herein. Use of these means by anyone does not imply certification by the Air Force CALS Test Network (AFCTN).
Air Force CALS Test Bed
Notification of Test Results
20 May 1994
This notice documents the results of an Air Force CALS Test Bed (AFCTB) Quick Short Test Report (QSTR) evaluation of data submitted by:
MAXIMA Corporation
Identified as follows:
Title: Equip. Organizational Maint. Manual Set
MIL-M-83495A Schematic Diagram DTD
Parsing Test
Program: DISA
Program Office: DISA
Contract No.: DCA100-93-D-0065
QSTR No.: AFCTB-ID 94-057
Received on the following media: 3.5" Diskette
The results of the QSTR evaluation are as follows:
MIL-STD-1840A Media Format: N/A
MIL-D-28000A IGES: N/A
MIL-M-28001B SGML: Pass
MIL-R-28002A Raster: N/A
MIL-D-28003 CGM: N/A
Formal results with associated disclaimer are documented and available from the AFCTB.
Air Force CALS Test Bed
HQ ESC/AV-2P
4027 Colonel Glenn Highway, Suite 300
Dayton, OH 45431-1672
Phone: 513-257-3085 FAX: 513-257-5881
## Contents
1. Introduction
1.1. Background
1.2. Purpose
2. Test Parameters
3. 1840A Analysis
3.1. External Packaging
4. SGML Analysis
4.1. Exoterica Validator
4.2. Exoterica XGML Normalizer
4.3. Public Domain sgmls
4.4. Sema Mark-it
5. Conclusions and Recommendations
6. Appendix A - Detailed SGML Analysis
6.1. Exoterica Validator
6.2. Exoterica XGMLNormalizer Parser
6.3. Public Domain sgmls Log
6.4. Sema Mark-it Log
1. Introduction
1.1 Background
The Department of Defense (DoD) Air Force Continuous Acquisition and Life-cycle Support (CALS) Test Network (AFCTN) is conducting tests of the military standard for the Automated Interchange of Technical Information, MIL-STD-1840A, and its companion suite of military specifications. The AFCTN is a DoD sponsored confederation of voluntary participants from industry and government managed by the Electronic Systems Center (ESC).
The primary objective of the AFCTN is to evaluate the effectiveness of the CALS standards for technical data interchange and to demonstrate the technical capabilities and operational suitability of those standards. Two general categories of tests are performed to evaluate the standards: formal and informal.
Formal tests are large and comprehensive; they follow a written test plan, require specific authorization from the DoD, and may take months to prepare, execute, and report.
Informal tests are quick and short tests used by the AFCTN technical staff to broaden the testing base. They include representative samples of the many systems and applications used by AFCTN participants. They also allow the AFCTN staff to gain feedback on the many industry and government interpretations of the standards, to increase the base of participation in the CALS initiative, and to respond to the many requests for help that come from participants. Participants take part voluntarily and benefit by receiving an evaluation of their latest implementation (interpretation) of the standards, interacting with the AFCTN technical staff, gaining experience using the standards, and developing increased confidence in them. The results of informal tests are reported in Quick Short Test Reports (QSTRs) that briefly summarize the standard(s) tested, the hardware and software used, the nature of the test, and the results.
1.2 Purpose
The purpose of this informal test is to analyze Air Force Document Type Definitions (DTDs) for Standard Generalized Markup Language (SGML) syntax, using several commercial and public domain SGML parsers, prior to placing them in the Defense Information Systems Agency (DISA) Asset Source for Software Engineering Technology (ASSET) repository.
2. Test Parameters
Test Plan: AFCTB 94-057
Date of Evaluation: 20 May 1994
Evaluator: George Elwood
Air Force CALS Test Bed
DET 2 HQ ESC/AV-2P
4027 Colonel Glenn Hwy
Suite 300
Dayton OH 45431-1672
Data Originator: Kay Hill
MAXIMA Corporation
2372 Lakeview Drive
Beavercreek OH 45431
(513) 427-5888
Data Description: Technical Manual Test 1 DTD
Data Source System:
Text/SGML
SOFTWARE
Evaluation Tools Used:
MIL-M-28001 (SGML)
PC 486/50
Exoterica XGMLNormalizer v1.2e3.2
Exoterica Validator v2.2 ex1
McAfee & McAdam Sema Mark-it v2.3
Public Domain sgmls v1.1
Standards Tested: MIL-M-28001B
3. 1840A Analysis
3.1 External Packaging
The 3.5" diskette was hand delivered to the Air Force CALS Test Bed (AFCTB) not enclosed in any container.
The files received by the AFCTB were not MIL-STD-1840A files; they were not named per the standard conventions, as the stated purpose of the evaluation was the basic data structure.
4. SGML Analysis
The AFCTB has several parsers available for evaluating submitted DTD and text files. These tools are not used to generate a pass/fail grade but to report how commercially available software handles the files. These products are used in the development of technical publications and are good indicators of usability. The use of these products is not an endorsement nor an indication of CALS capability. All operations were performed using the default settings unless specified in the report.
The goal was to configure the DTD under analysis as it would normally be used. However, for this analysis it was necessary to modify the DTD by adding the "<!DOCTYPE doc [" statement at the start and the companion "]>" at the end of the DTD under evaluation, as stated in the comments of the DTD. If the parser did not support a formal file pointing to the location of the ISO character sets, this was added as well.
The DTD under evaluation, MIL-M-83495A, Appendix G, Schematic Diagram DTD, called for a companion MIL-M-38784C DTD. MIL-M-38784C, Amendment three DTD, delivered on the same diskette as the DTD to be evaluated, was used. This DTD was used without change except to insert the location of the ISO character set files.
MIL-M-83495A Schematic Diagram DTD: D495AG0.ORG --> D784CB0.A3
The following notes apply to the parsing analysis as indicated in the ensuing paragraphs.
Note 1. Entity parameters defined more than once is a valid SGML construct. The warnings in this analysis do not indicate any problems.
Note 2. The occurrences of warnings "references defined, but not used," appeared because portions of the called DTD, i.e., MIL-M38784C, were not used. They were not used because the entity definitions of the DTD under analysis altered the execution path in the called DTD. The warnings in this analysis do not indicate any problems.
4.1 Exoterica Validator
The DTD file was evaluated using the Exoterica Validator exl parser. The basic DTD was modified by placing the concrete syntax file and "<!DOCTYPE doc [" at the start, and the "]>" at the end. The parser reported 54 errors, which related to the missing document instance and are not considered errors for this analysis.
The parser reported 27 total warnings of two types. The first type of warning was a parameter entity declared more than once. See Note 1 above.
<!-- **Warning** in "i:\dtd\784c.dtd" (entity "$m38784c"), line 37, used in "\xgml\9457.dtd", line 124: A parameter entity name has been declared more than once. The entity is "$shortitleuse". -->
The second type warning related to elements defined but not used in any content model. See Note 2 above.
<!-- **Warning** in "\xgml\9457.dtd", line 215: An element is not allowed in the document instance because it does not appear in any accessible content model or it is completely excluded. The element is "ADDRESS". -->
4.2 Exoterica XGML Normalizer
The DTD file was parsed using the Exoterica XGMLNormalizer parser. The concrete syntax and "<!DOCTYPE doc [" were added to the start, and "]>" to the end of the file. No errors or warnings were issued by this utility.
4.3 Public Domain sgmls
The DTD file was evaluated using the Public Domain sgmls parser version 1.1. The file was modified by adding "<!DOCTYPE doc [" at the start and "]>" at the end of the file, as stated in the comments in the DTD. The D784CB0.A3 file was used as the pointed-to file. Four warnings were reported by the parser, all of which were duplicate definitions. See Note 1 above.
sgmls: In file included at \ws\9457.dtd, line 52:
Warning at i:\dtd\784c.dtd, line 37 in declaration parameter 4:
Duplicate specification occurred for "%shortitleuse"; duplicate ignored.
4.4 Sema Mark-it
The DTD file was evaluated using McAfee & McAdam's Sema Mark-it v2.3 parser. The file was modified by adding the concrete syntax and "<!DOCTYPE doc [" at the start, and "]>" at the end of the file. The baseline D784CB0.A3 file was used.
5. Conclusions and Recommendations
The file D495AG0.ORG, MIL-M-83495A, Appendix G, conforms to the SGML syntax of ISO Standard 8879, as required by the CALS MIL-M-28001B specification.
The errors, warnings, and comments reported by the parsers used in this evaluation do not indicate any syntactical variances, nor indicate any problem that would invalidate the effective application of this DTD.
6. Appendix A - Detailed SGML Analysis
6.1 Exoterica Validator
<!-- **Warning** in "i:\dt\784c.dtd" (entity "%m38784c"), line 37,
used in "\xgm\9457.dtd", line 124:
A parameter entity name has been declared more than once.
The entity is "%shortitleuse".
<!--ENTITY % shortitleuse "IGNORE" >
^^^^^^^^^^^^^^^-->
<!-- **Warning** in "i:\dt\784c.dtd" (entity "%m38784c"), line 45,
used in "\xgm\9457.dtd", line 124:
A parameter entity name has been declared more than once.
The entity is "%shortitlet".
<!--ENTITY % shortitlet " " >
^^^^^^^^^^^-->
<!-- **Warning** in "i:\dt\784c.dtd" (entity "%m38784c"), line 107,
used in "\xgm\9457.dtd", line 124:
A parameter entity name has been declared more than once.
The entity is "%fnt".
<!--ENTITY % fnt "(idinfo, warnpage?, chginssht?, lep, verstat?,
^^^^-->
<!-- **Warning** in "i:\dt\784c.dtd" (entity "%m38784c"), line 165,
used in "\xgm\9457.dtd", line 124:
A parameter entity name has been declared more than once.
The entity is "%bodyele".
<!--ENTITY % bodyele "(chapter, chapter+, ddchapter?)" >
^^^^^^^^^-->
<!-- **Warning** in "\xgm\9457.dtd", line 215:
An element is not allowed in the document instance because it does not
appear in any accessible content model or it is completely excluded.
The element is "ADDRESS".
<<<<< PART OF MESSAGE TEXT REMOVED HERE >>>>>
The element is "CHAPTER".
The element is "CHGINSSHT".
The element is "CHGLIST".
The element is "CHGREC".
The element is "DDEXCHAP".
The element is "DDBDESC".
The element is "DDINDEX".
The element is "DDINTRO".
The element is "DDLIST".
The element is "DDSHEET".
The element is "DOC".
The element is "DOCPART".
The element is "ILLUSLIST".
The element is "INSERTPG".
The element is "INTRO".
The element is "PGNO".
The element is "PREFACE".
The element is "REMOVEPG".
The element is "SECTION".
The element is "SHORTTITLE".
The element is "VOLUME".
The element is "WARNPAGE".
<<<<< ERRORS FROM THIS POINT ON RELATE TO NO DOCUMENT INSTANCE AND ARE NOT CONSIDERED FOR THIS REPORT >>>>>
<!-- **Error** in "\xgml\9457.dtd", line 216:
The document instance must consist of at least one tag or data character.
The following element can start: "DOCSD". -->
<<<<< PART OF LOG FILE REMOVED HERE >>>>>
<!-- **Error** in "\xgml\9457.dtd", line 216:
An end tag that has been declared inomissible ("-") must not be omitted.
The element is "DOCSD". -->
<!-- Capacity points/limits:
TOTALCAP =114919/200000
ENTCAP =12832/200000
ENTCHCAP =11437/70000
ELEMCAP =5408/70000
GRPCAP =31264/70000
EXGRPCAP =768/70000
EXNMACAP =1312/70000
ATTCAP =33664/200000
ATTCHCAP =887/70000
AVGRPCAP =17088/70000
NOTCAP =96/70000
NOTCHCAP =163/70000
IDCAP =0/70000
IDREFCAP =0/70000
MAFCAP =0/70000
LKSETCAP =0/70000
LKNMACAP =0/70000
-->
54 errors and 27 warnings reported. -->
6.2 Exoterica XGMLNormalizer Parser
No reported errors or warnings.
6.3 Public Domain sgmls Log
sgmls: In file included at \ws\9457.dtd, line 52:
Warning at i:\dt\784c.dtd, line 37 in declaration parameter 4:
Duplicate specification occurred for "%shortitleuse"; duplicate ignored
sgmls: In file included at \ws\9457.dtd, line 52:
Warning at i:\dt\784c.dtd, line 45 in declaration parameter 4:
Duplicate specification occurred for "%shortitle"; duplicate ignored
sgmls: In file included at \ws\9457.dtd, line 52:
Warning at i:\dt\784c.dtd, line 109 in declaration parameter 4:
Duplicate specification occurred for "%frmt"; duplicate ignored
sgmls: In file included at \ws\9457.dtd, line 52:
Warning at i:\dt\784c.dtd, line 165 in declaration parameter 4:
Duplicate specification occurred for "%bodyele"; duplicate ignored
6.4 Sema Mark-it Log
ATTcnt --100-- 48
ATTsPLEN --960-- 0
BseqLEN --960-- 0
DTAGLEN --16-- 0
DTEMPLATE --16-- 0
ENTLVL --16-- 2
GRPCNT --32-- 22
GRPGTCNT --96-- 27
GRFLVL --16-- 5
LITLEN --3048-- 730
NAMELEN --32-- 13
NORMSEP --2-- 0
PILEN --240-- 0
TAGLEN --960-- 0
TAGLVL --24-- 0
MSDLVL --24-- 2
TOTALCAP --200000-- 122138
ENTCAP --200000-- 16064
ENTCHCAP --70000-- 10595
ELEMCAP --70000-- 5408
GRPCAP --70000-- 36064
EXGRFCAP --70000-- 800
EXNMCP --70000-- 1312
ATTCAP --200000-- 33664
ATTCHCAP --70000-- 887
AVGRFCAP --70000-- 17088
NOTCAP --70000-- 96
NOTCHCAP --70000-- 160
IDCAP --70000-- 0
IDREFCAP --70000-- 0
MPCAP --70000-- 0
LKSETCAP --70000-- 0
LKINMCAP --70000-- 0
IMPROVING THE ENERGY EFFICIENCY AND THE PERFORMANCE OF AN ANDROID 7 CORE APPLICATION
Author: Olivier PHILIPPO, GREENSPECTOR
Date: Jan 10th, 2017
EXECUTIVE SUMMARY
The GREENSPECTOR team has been asked by a customer to help in optimizing an Android core application. The final goal was to reduce the energy consumption of the app, or to improve its performance, or both.
The application audited was the AOSP 7.0 core application “System UI” running on a Nexus smartphone. The project lasted 2 weeks in November 2016.
We first conducted a set of measures on the smartphone, using GREENSPECTOR metering features and a dedicated GREENSPECTOR energy probe. This showed that System UI had a substantial impact on the device, especially through Status Bar and Recent Apps features. Indeed, the energy consumption impact was measured to be between 2.2 and 2.9 times higher than the reference scenario.
During the audit, several issues were identified whose correction could potentially help decrease this impact:
- A high number of triggered events, which create unnecessary treatments and redraws. These treatments impact the platform resources even when in idle mode.
- A high frequency (and thus a high impact) of the animation and of the movement-tracking feature. The animation is tuned to a performance level higher than the user can perceive.
- A heavy layout, which causes a lot of costly treatments and redraws, especially during animations.
In the second part of the audit, we modified the source code of System UI to apply some of these changes. We were able to obtain significant gains:
- Removal of unnecessary triggerings of redraws.
- Reduction from 250 ms to 150 ms of CPU treatments when showing the Status Bar.
- Reduction of the number of calls to several methods (up to 100 calls during sliding actions).
- Reduction of energy consumed during Show/Hide Status bar: - 28 μAh/s (- 9%).
The goal of this audit, which was to improve the application as per energy efficiency and performance standards, has been reached in a short time frame.
We spent 3 man-days on this code refactoring task, including some initial time necessary to understand the code. The overall audit duration was 9 days for 2 consultants. This is very positive, since we estimate that the gains could be even greater with a better knowledge of the source code and some more time to apply the corrections to the application.
We demonstrated with this case study that, provided you use the right method – which involves energy consumption measurement - you may try and optimize any application, be it a part of the Android core. Our approach based on software eco-design principles allowed us to identify areas of progress in a short time frame. The implementation of the key recommendations will permit not only a reduction of the energy consumption but also an increase of the hardware lifespan.
1. AUDIT DESCRIPTION
1.1. System description
The chosen platform was:
- Device: a Nexus Smartphone (Model: confidential)
- OS: Android 7.0 (Build AOSP on Angler - NRD90M)
1.2. Application
The application under test was a core Android application:
- System UI
- Version: from AOSP repository (1.0.3)
1.3. Use cases
The chosen use cases were features frequently used by the users:
- Open the status bar in minimal mode (use case name: ShowMiniStatusbar)
- Open the status bar entirely (ShowAllStatusBar)
- Hide the status bar (HideStatusBar)
- Show / Hide all the status bar (ShowHideStatusBar)
- Open Recent Apps (OpenRecentApps)
- Clear All Recent Apps (ClearAllRecentApps)
- Show/ Hide Recent Apps (ShowHideRecentApps)
Another use case was used to measure the platform consumption in idle mode:
- Idle Mode (Reference)
All these test cases were automated using UIAutomator.
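As an illustration, a scripted Show/Hide use case might look like the sketch below. It uses today's androidx UiAutomator API and an illustrative class name; the audit's actual scripts are not reproduced in this document.

```java
import android.os.SystemClock;
import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.UiDevice;

// Illustrative sketch of a ShowHideStatusBar use case scripted with UiAutomator.
public class ShowHideStatusBarTest {
    public void run() {
        UiDevice device = UiDevice.getInstance(
                InstrumentationRegistry.getInstrumentation());
        // Loop long enough (> 1 minute) for the energy probe to produce
        // a usable measure on this device.
        for (int i = 0; i < 30; i++) {
            device.openNotification();   // show the status bar
            device.waitForIdle();
            device.pressBack();          // hide it again
            SystemClock.sleep(1000);     // pace the scenario
        }
    }
}
```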
2. METHODOLOGY & TEST BENCH SETTING
2.1. Methodology
General description of the methodology
For this audit, we used a classical top-down approach. We first launched a set of measures to identify the most consuming features, then went deeper into the analysis where there were "hotspots" of resource consumption. An optimization phase was then conducted, followed by an assessment of each improvement made.
In the end, once the hotspots had been corrected, the developers could switch to correcting their source code with respect to a set of eco-design rules.
Features measurement
The energy measurement allowed us to identify which features consume the most energy. Without that identification, we could not have focused the correction effort, and we might have worked on parts which have little or no impact on energy consumption.
We focused on the following tests because they were designed for energy measurement (duration of tests > 1 minute):
- Show / Hide all the status Bar (ShowHideStatusBar)
- Show/ Hide Recent App (ShowHideRecentApps)
- Idle Mode (Reference)
Hotspot profiling
When a “hotspot” had been detected (which means that a highly consuming test had been identified), we then used classical profiling tools to get a better understanding about the underlying behaviour. In parallel we used GREENSPECTOR Code Analysis feature, to determine if some important code eco-design rules could be infringed and thus participate in causing the hotspot.
Action plan, optimization and measurement
When the auditing part was done, we set an action plan which aimed at reducing the energy consumption. The plan focused first on improving the “hotspots”, starting by the hotspot with the highest Impact ratio.
After each correction, we performed another profiling test, in order to check if the hotspot had been corrected or if it was still present. When a hotspot was suppressed, the next hotspot in the list became the next priority.
Please note that, given the very short time frame of this audit, the hotspot optimizations were applied as “quick and dirty modifications”, even if not fully functional, in order to see if such a modification was interesting. “Cleaner” modifications could easily be performed with the same principles, given some more development time.
We applied the modifications as increments, which allowed us to check the gain of each improvement.
Source code Analysis
After the removal of the main hotspots, we focused on the correction of the source code, using the code eco-design rules for Android.
2.2. Testbench description
GREENSPECTOR tools
The test bench was composed of the GREENSPECTOR server installed on our customer’s premises, and the Android probe developed for this Nexus device and this Android version. These tools are available to all customers of GREENSPECTOR.
Illustration 4: Greenspector dashboard
Testing environment
We used the following tools:
- Greenspector Server: to conduct the audit, gather and analyze the data.
- Greenspector Android Meter API: to link the smartphone probe to the test case run.
- Android tools: Traceview, Systrace and Layout Hierarchy.
- Git: to work on the refactoring steps.
The testing protocol was the following:
- Charge the smartphone between 95% and 100% (to have the same energy behavior)
- Reboot the smartphone (to put it in a stable state)
- Unplug the USB cable or any other energy supply
- Run all the tests (always in the same order)
- Re-run the protocol x times to have stable measures.
3. AUDIT
3.1. Feature Measurement and identification of hotspots
We launched test runs for each of the functional cases that we had selected. As mentioned, prior to running the functional test cases, we ran a Reference (or “idle”) test case to establish the reference consumption for our platform.
The initial version of the application was measured with the methodology explained previously. For the energy we got the following results:
The impacts of Show/Hide Status bar and Show Recent App are significant: their consumption ratios, as compared to the Reference test, are respectively 2.4 and 1.9.
For shorter use cases, the test durations were not long enough to get accurate energy measures on this Nexus device, which communicates its energy status only every 30 seconds. However, we were able to launch these tests and measure another key metric, which is the CPU consumption.
The relative impact of each feature in terms of CPU consumption was the following:
This first and simple step has allowed us:
- To compare the functional cases with respect to the consumption of the platform when idle (which is much more relevant than to try and assess absolute figures);
- To compare the functional cases between them (which makes it possible to prioritize the rest of the work).
3.2. Profiling
Optimization work is not an exhaustive, fully planned process: it is a constant search for balance between the hoped-for gains and the workload needed to obtain them.
In our case, the search for the big rocks had already begun: thanks to the measures carried out, we were able to target the most consuming test cases.
Hence, when we started deeper profiling with expert tools (Android Systrace, Android Traceview, HierarchyViewer…) we already knew where and how to use them. These tools being very accurate on narrow points and their understanding being rather arduous, the foremost step saved us a lot of time.
The energy measurement showed a hotspot on Show/Hide Status bar, so we began the profiling with this feature. The Systrace tool gave us a list of methods with the time spent. We analyzed and filtered this list to obtain the methods of System UI:
Illustration 6: CPU measure in Greenspector
"Hence, we used this good old 80/20 rule, or what we like to call "looking for the big rocks": if your road is blocked by a rock, you don’t have to mind the sand in your shoes for the moment.
Illustration 7: extraction of Traceview information
The cross analysis of this list and a peek at the code allowed us to conclude that several methods were called many times, and that 2 types of treatments were responsible for this behavior: refreshing the tiles (icon and text of the status bar), and movement tracking/animations. The refreshing of tiles is done each time an event occurs. Therefore, it generates some false triggerings and thus a lot of treatments and redraws:
Between user actions (show mini, show all and hide) we see 4 peaks which correspond to the triggering of the tile redraws. We do not know here whether they are really needed, but we saw in the code and in the profiling that a lot of events with no real impact on the display were firing these redraws.
The cost of one peak is not negligible because of the treatments and redraws it implies:
Moreover, these treatments also appear during the sliding of the status bar, and not only during idle time.
For the animation, 2 big impacts were identified: the movement treatments (algorithms in threads) and the redraws. The redraw has a cost because of the size of the layout, which is big. The Hierarchy Viewer tool of Android allowed us to analyze the layout:
There are 12 levels, which is a lot, and all tiles are complex:
This layout has an impact on energy consumption, because lots of treatments are needed to update and draw the layer. In Systrace, we see that in the timing:
The frame rate is 60 fps, i.e. one frame every 16 ms (1000 ms / 60 ≈ 16.7 ms). The update and redraw of the tiles take more than 16 ms, which triggers warnings in Systrace. For the quick bar, the layout has a lower impact:
3.3. Action Plan
**Code analysis (detailed analysis)**
The action plan followed the methodology by prioritizing the improvements with respect to their expected gains. We took action on the code by tackling first the most consuming methods:
**Code**

[Illustration: list of the most consuming code locations, pairing each with a recommendation; it includes the layout XML files, com.android.systemui TouchAnimator, the QSTile.State copies, the Tile and CellularTile handleUpdateState methods, the NotificationStackScrollLayout getters, and SignalClusterView.apply (where compressing the icon would optimize performance).]
**Best practices**
- Optimize the layout (>10 level). Usage of relative layout will increase several method performance.
- Optimize inner class access
- Reduce call number (>1000)
- Don’t use first in interpolation
- Optimize inner class access
- Don’t use Objects.equals. Overhead of call and test to null (return a == b || (a != null && a.equals(b)))
- Don’t make copy if variable don’t change
- Reduce call number (>10). Call especially by handleRefreshState.
Illustration 14: Detail Action plan on code

This action plan can be summarized as follows:

1 - Simplify the layout

The layout is complex (several levels, usage of linear layouts...). A lot of treatments happen during every layout measure and redraw, especially during animations. Simplifying the layout would allow an important reduction of the energy consumed.

2 - Reduce treatments and redraws triggered by event message reception

Lots of events are fired during the opening and closing of the status bar: wifi status modification, radio... These events are not gathered, so they create a lot of unnecessary treatment. Moreover, the treatments fired by these events are heavy (update and redraw even if there is no change of state). It is necessary to reduce the number of calls, by reducing the frequency of treatment, by making the event firing smarter, and so on.

3 - Reduce method calls and redraws triggered by MotionEvent events

Animations fire a lot of events and treatments. The number of events is too high and yields a level of animation performance beyond what the user can perceive. Decreasing the number of events allows fewer treatments and redraws. Also, the frame rate is set at 60 FPS, which is too high for System UI and can be decreased.
4 - Analyze the impact of BatteryMeterDrawable by removing it
The battery meter is a heavy object and its generation is called several times. Removing it will let us understand its energy cost and decide whether it needs optimizing.
5 - Optimize the redraw (Global redraw and poor caching of lazy update)
After the optimization of the number of calls (cf. actions 2 and 4), the remaining calls can be optimized. Indeed, the items are cached but the update is not optimized.
6 - Analyze a potential Bug
During the test runs, we detected a potential bug: the energy and the memory consumption kept increasing during the test period. This is a potential memory/energy leak, to be investigated.
7 - Optimize source code as per eco-design code rules
After optimizing the main hotspots, focus can switch to improving the code with respect to code eco-design rules. These eco-design rules are those included in the GREENSPECTOR code scan tool for Java/Android language.
4. REFACTORING
4.1. Action 1 – Layout optimization
After discussion with the customer team, this best practice, although it promised a big energy improvement, was too difficult to apply (it would require redesigning the layout and would impact several parts of the code), so we decided not to implement it. However, it can still be applied later to improve the energy efficiency, or in new projects.
4.2. Action 2 - Reduction of the number of refresh events
Modification
In the original version, several problems with tile updates were detected. In fact, after every opening of the StatusBar, every tile was redrawn at least once even if its value had not changed. Then, the WifiTile was redrawn once more, the CellularTile was redrawn three times, the BluetoothTile was redrawn once and finally the BatteryTile was redrawn three times at every battery level update.
As explained in the methodology, we suppressed several refreshes; this made the application not fully functional, but it allowed us to confirm the reduction in the number of method calls.
To fix this problem, we looked at the code of each tile that is updated too many times, like BatteryTile for example. In the original code, the method onBatteryLevelChanged was called many times, even if the battery level was the same as for the last call; no such check was done in this method, so every call implied a redraw of the BatteryTile. We therefore added a condition at the first line of this method to check whether the data had really changed and, if not, to stop the treatment there, because a redraw would be useless.
For the other tiles, the problem was very similar. For example, in the CellularTile we added a condition to stop the method setNoSims if the tile already knows whether or not there is a SIM.
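As a minimal sketch of this guard pattern (the field and callback names are illustrative, not the exact AOSP signatures):

```java
// Hypothetical sketch of the early-return guard added to the battery tile.
// Field and method names are illustrative, not the exact AOSP signatures.
public class BatteryTile {
    private int lastLevel = -1;
    private boolean lastPluggedIn = false;

    public void onBatteryLevelChanged(int level, boolean pluggedIn) {
        // Guard: skip the whole treatment (state copy, refresh, redraw)
        // when nothing the tile displays has actually changed.
        if (level == lastLevel && pluggedIn == lastPluggedIn) {
            return;
        }
        lastLevel = level;
        lastPluggedIn = pluggedIn;
        refreshState(); // triggers handleUpdateState() and a redraw
    }

    private void refreshState() { /* ... */ }
}
```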
We called the version resulting from these changes “version UpdateReduce”.
**Profiling**
The profiling with Systrace gave us the number of slices and the time taken to show the Quick Bar:
For the UpdateReduce version, we have:
For the full Show/Hide, the original version gives:
And the UpdateReduce version:
The results are as follows:
<table>
<thead>
<tr>
<th>Test Name</th>
<th>Metrics</th>
<th>Original</th>
<th>Optimized</th>
<th>Gain</th>
</tr>
</thead>
<tbody>
<tr>
<td>ShowMinStatusBar</td>
<td>Number of Slice</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>com.android.systemUI timing (ms)</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ShowHideStatusBar</td>
<td>Number of Slice</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>com.android.systemUI timing (ms)</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
com.android.systemUI was not the first consumer anymore; the first consumer was now the RenderThread, which manages the rendering of the status bar during the animation. One other interesting indirect result was the suppression of the false triggering of refreshes (redraws and treatments even when there is no visual modification):
Illustration 15: Original Version - False triggering in idle
Between user actions (show mini, show all and hide), remember that we could see 4 peaks which corresponded to the triggering of the tile refreshes.
Illustration 16: ReduceUpdate Version - No False triggering in idle
In the UpdateReduce version, there is no such peak anymore. If a real update of the tiles is needed, refresh will be done (and a peak will happen). We can confirm this with Systrace, on the methods which have an excluded time greater than 0.5 ms:
Illustration 17: Original Version - Top consuming methods
In yellow, we can see the method linked with the refresh event.
<table>
<thead>
<tr>
<th>Method</th>
<th>Included Time (ns)</th>
<th>Excluded Time (ns)</th>
<th>Calls</th>
</tr>
</thead>
<tbody>
<tr>
<td>com.android.systemui.qs.TouchAnimator$FloatKeyframeSet.interpolate</td>
<td>6868</td>
<td>18437</td>
<td>1375</td>
</tr>
<tr>
<td>com.android.systemui.qs.TouchAnimator.setPosition</td>
<td>5181</td>
<td>221621</td>
<td>349</td>
</tr>
<tr>
<td>com.android.systemui.qs.TouchAnimator.setTimeValue</td>
<td>4461</td>
<td>184838</td>
<td>1375</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.NotificationStackScrollLayout.getNotGoneChildCount</td>
<td>2113</td>
<td>3697</td>
<td>80</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.expandableNotificationRow.get intrinsicHeight</td>
<td>1235</td>
<td>3282</td>
<td>127</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.expandableNotificationRow.isExpanded</td>
<td>1101</td>
<td>27320</td>
<td>33</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.signal.ClusterView.apply</td>
<td>853</td>
<td>30650</td>
<td>10</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.NotificationStackScrollLayout.updateViewShadows</td>
<td>853</td>
<td>4141</td>
<td>15</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.StackScrollAlgorithm.initAlgorithmState</td>
<td>809</td>
<td>3245</td>
<td>15</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.StackScrollAlgorithm.updatePositionsForState</td>
<td>783</td>
<td>4827</td>
<td>15</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.StackScrollState.applyState</td>
<td>733</td>
<td>15383</td>
<td>65</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.phone.QuickStatusBarHeader.setsetPosition</td>
<td>711</td>
<td>73522</td>
<td>83</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.NotificationStackScrollLayout.updateContentHeight</td>
<td>701</td>
<td>2831</td>
<td>16</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.StackScrollState.getViewStateForView</td>
<td>679</td>
<td>8285</td>
<td>200</td>
</tr>
<tr>
<td>com.android.systemui.qs.QSAnimator.onRelativePositionInt</td>
<td>652</td>
<td>980</td>
<td>95</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.NotificationStackScrollLayout.updateBackgroundBounds</td>
<td>644</td>
<td>41</td>
<td>16</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.NotificationStackScrollLayout.getLayoutMinHeight</td>
<td>606</td>
<td>3645</td>
<td>73</td>
</tr>
<tr>
<td>com.android.systemui.qs.QSAnimator.onsetPosition</td>
<td>588</td>
<td>187069</td>
<td>34</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.StackScrollState.apply</td>
<td>581</td>
<td>18959</td>
<td>15</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.StackScrollState.resetViewState</td>
<td>565</td>
<td>4039</td>
<td>65</td>
</tr>
<tr>
<td>com.android.systemui.qs.QSAnimator.updateAnimators</td>
<td>561</td>
<td>2005</td>
<td>1</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.policy.WiFiSignalController.notifyListeners</td>
<td>555</td>
<td>6830</td>
<td>4</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.NotificationStackScrollLayout.getFirstChildNotGone</td>
<td>529</td>
<td>900</td>
<td>73</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.stack.StackScrollAlgorithm.updateClipping</td>
<td>526</td>
<td>2213</td>
<td>16</td>
</tr>
<tr>
<td>com.android.systemui.statusbar.phone.NotificationPanelView.getHeaderTranslation</td>
<td>513</td>
<td>2876</td>
<td>33</td>
</tr>
</tbody>
</table>
Illustration 18: UpdateReduce Version - Top consuming methods
There are no more methods related to event handling among the top consumers. We drastically reduced the number of calls!
<table>
<thead>
<tr>
<th>Method</th>
<th>Number of Calls</th>
<th>Number of Calls</th>
</tr>
</thead>
<tbody>
<tr>
<td>com.android.systemui.qs.QS_tileState.copyTo</td>
<td>29</td>
<td>3</td>
</tr>
<tr>
<td>com.android.systemui.BatteryMeterDrawable.<init></td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>com.android.systemui.qs.tiles.CellularTile.handleUpdateState</td>
<td>12</td>
<td>3</td>
</tr>
<tr>
<td>com.android.systemui.BatteryMeterDrawable.draw</td>
<td>7</td>
<td>4</td>
</tr>
<tr>
<td>com.android.systemui.qs.QS_tile$h.handleMessage</td>
<td>34</td>
<td>12</td>
</tr>
</tbody>
</table>
Illustration 19: Number of calls of refresh methods
Measurement
The measurements are the following for the ShowHideStatusBar:
Illustration 20: Comparison with original for ShowHideStatusBar
(screenshot from GREENSPECTOR “Evolution” tab)
The gain in energy consumption is -10.75 μAh/s (-3.56 %). This is good, but we were expecting better results judging from the profiling. One explanation is that the ShowHideStatusBar energy test had been extended to include an idle period, which lowers the mean consumption during the test. More tests could be conducted on other Android devices with more accurate energy probes, this flavor of the Nexus showing its limitations here.
When we checked the gain of all the tests, we had:
<table>
<thead>
<tr>
<th>Platform CPU</th>
<th>11.35 %</th>
<th>11.17 %</th>
<th>-0.18 %</th>
<th>-1.61 %</th>
</tr>
</thead>
<tbody>
<tr>
<td>Process Data</td>
<td>0 B</td>
<td>0 B</td>
<td>0 B</td>
<td>0 %</td>
</tr>
<tr>
<td>Process Memory</td>
<td>120.74 MB</td>
<td>118.38 MB</td>
<td>-2.36 MB</td>
<td>-1.96 %</td>
</tr>
</tbody>
</table>
Illustration 21: Global gain of all test for CPU and Memory
We had some gains in CPU and memory: we decreased the RAM consumption by more than 2 MB, and it is interesting to note that we also slightly decreased the pressure on the CPU.
4.3. Action 3 – Animation and redraw optimization
Modification
To reduce the number of animation treatments, one simple modification was to reduce the amount of input events. For that, in dispatchTouchEvent in com.android.systemui.statusbar.phone.StatusBarWindowView, we dropped one event out of 3. The result is not visible to the user. If we wanted to keep the same performance (and not lose any events), the same gain (and even more) could be obtained by optimizing the treatment of animations and the layout.
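A sketch of that decimation is shown below; the real change lives in AOSP's StatusBarWindowView, so the class shown here and its counter field are illustrative only:

```java
import android.content.Context;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.widget.FrameLayout;

// Illustrative sketch of the event decimation described above; the real class
// extends the AOSP StatusBarWindowView, and the counter name is made up here.
public class DecimatingWindowView extends FrameLayout {
    private int moveEventCount = 0;

    public DecimatingWindowView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    public boolean dispatchTouchEvent(MotionEvent ev) {
        // Drop one MOVE event out of three: the animation still tracks the
        // finger closely, but a third of the movement treatments and redraws
        // disappear, with no visible difference for the user.
        if (ev.getActionMasked() == MotionEvent.ACTION_MOVE
                && (++moveEventCount % 3) == 0) {
            return true; // swallow the event without any treatment
        }
        return super.dispatchTouchEvent(ev);
    }
}
```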
We added the modification to the modifications of Action 2 (incremental modification).
Profiling
The Systrace analysis of the optimized version gives the following metrics for Showing the status bar:
<table>
<thead>
<tr>
<th>Name</th>
<th>Wall Duration</th>
<th>Occurrences</th>
</tr>
</thead>
<tbody>
<tr>
<td>android_systemui</td>
<td>264,650 ms</td>
<td>330</td>
</tr>
<tr>
<td>surfaceflinger</td>
<td>157,982 ms</td>
<td>150</td>
</tr>
<tr>
<td>RenderThread</td>
<td>157,793 ms</td>
<td>250</td>
</tr>
</tbody>
</table>
The metrics for the original version were:
The result:
The global gain for the test ShowHide Status bar is the following:
We got a gain of -28 μAh/s (-9.3%)!
We also reduced the pressure on memory management, as shown by the 2 graphs below. On the original version, the garbage collector runs every 1 minute and 20 seconds. On the optimized version, it runs every 2 minutes and 30 seconds. There are fewer objects to destroy, so the GC is not called as often as before.
4.4. Action 4 - BatteryMeterDrawable optimization
After action 2, we had reduced and suppressed a lot of calls. The BatteryMeterDrawable object was still heavy, but there were no more calls to it on Show/Hide of the status bar. This action was therefore deemed not relevant given the short duration of the audit.
4.5. Action 5 – Optimize the redraw (global redraw and poor caching of lazy update)
During the audit, we did not have enough time to work on this action. However, decreasing the triggering of refreshes had made this action less important (although still necessary).
4.6. Action 6 – Analysis of the energy leak bug
We identified two potential bugs:
- There is a memory leak. Even if we see that the garbage collector does its job, the memory is continuously increasing from test run to test run. The energy of all tests also continuously increases. We had identified this during the measurement step and had adapted our measurement protocol not to suffer from it. Left alone, the idle consumption goes from 85 μAh/s to 150 μAh/s!
Illustration 26: Evolution of reference energy consumption for original version (showing a leak)
- When the status bar is expanded and the user locks the screen, it seems that the listener which updates the tiles stays registered. This creates unnecessary treatments and redraws which happen even though the screen is off.
4.7. Action 7 – Improve the code as per code eco-design rules
During the audit, we lacked the time to conduct this action. We agreed with the customer team that it would be done by its development teams. Indeed, the use of GREENSPECTOR tools (including the Eclipse plug-in and Android Studio plug-in) allows any developer to easily scan their code and apply the "green" rules.
**Note:** The scanning had shown that no "high priority" green rules had been infringed. Green rules pay off with a mid- and long-term vision: like maintainability and other best practices which improve code quality, they will improve the energy consumption of the software as coding goes along.
5. CONCLUSIONS
5.1. Performance is not enough. Watch your efficiency.
Mobile hardware has become more and more powerful. The smartphone used during this audit boasted as many as 8 CPUs and one GPU. Indeed, the application uses all the available CPUs and there is no performance problem. This illustrates Wirth's Law: "Software is getting slower more rapidly than hardware becomes faster." ([https://en.wikipedia.org/wiki/Wirth's_law](https://en.wikipedia.org/wiki/Wirth's_law)).
Hardware provides more and more power and resources to the software, hence the software uses all the resources. The only limitations for the software are defined by two factors: the user's perception of performance (aka speed), and the available hardware resources.
Getting back to this audit, we saw that the performance level of the SystemUI app was quite good. But this was done at the cost of using all the available resources without limitation, leading to an app consuming way more battery than it could, or should. We showed that the same performance level may be achieved with a lower consumption of energy, thus granting the user a longer battery life.
One of our usual proposals to reduce the consumption of resources is to limit it with a budget. We therefore introduce a third limitation factor, which allows better control of the behavior of the software. This concept is already applied in the performance domain with the RAIL model. But the current performance models need to be improved. For example, there is this reflection on the RAIL model proposed by Paul Irish and Paul Lewis: add a B (for battery) and an M (for memory), turning it into BLAIMR or PRIMAL. Just as we have a performance budget, we need an energy consumption budget. Set your own target, like "this software should not double the discharge rate of the battery", or "this software should not increase the discharge rate of the battery by more than 10% when in idle mode…", and so on.
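To make the idea concrete, here is a minimal sketch of such a budget expressed as an automated check; measureIdleDischargeRate() is a placeholder for whatever probe or metering API is available, not a real library call.

```java
// Hypothetical sketch of an energy-consumption budget check in a test.
// measureIdleDischargeRate() stands in for whatever probe or metering
// service you use; it is not a real library call.
public class EnergyBudgetTest {
    // Budget: idle discharge rate must not grow more than 10% over the reference.
    private static final double MAX_IDLE_OVERHEAD = 1.10;

    public static void main(String[] args) {
        double referenceRate = measureIdleDischargeRate(false); // app disabled
        double appRate = measureIdleDischargeRate(true);        // app enabled

        if (appRate > referenceRate * MAX_IDLE_OVERHEAD) {
            throw new AssertionError(String.format(
                "Idle energy budget exceeded: %.1f uAh/s vs. reference %.1f uAh/s",
                appRate, referenceRate));
        }
    }

    private static double measureIdleDischargeRate(boolean appEnabled) {
        // Placeholder: return the measured discharge rate in uAh/s.
        return appEnabled ? 90.0 : 85.0;
    }
}
```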
Timing or speed performance is not the only yardstick of software efficiency. Chasing it alone leads to over-consuming software and empty batteries. Performance models need to be improved to integrate energy and resource consumption.
5.2. Measure, Measure, Measure
Our goal was to reduce the power consumption of SystemUI, an Android core application. We have succeeded in a very short time frame, and above all we have shown that it was possible to go way further.
We managed to do it because we had both:
- A good method: look for the big stones, proceed by elimination, and above all MEA-SU-RE. It’s by measurement that knowledge arrives, and by measurement that progress is evaluated. Once the big stones are identified, you spend your time and your expertise much more efficiently.
- A good tool: since you have to measure, let it be easy to do, and let the findings be relevant. We have shown that GREENSPECTOR's API offered nice versatility for in-house Android developers, and that GREENSPECTOR's interface allowed us to easily follow the findings and the progress.
Still some work to do
The SystemUI app can still be improved in many ways. Event programming is overused and has some drawbacks: no clear management of the impact of event triggering, redraws happening way too often… The performance of the UI is also too high: maintaining 60 FPS is over-quality.
We managed to improve the energy consumption with 2 quick actions. The memory consumption was improved as well. But the impact on the system is still too high: an impact of less than 2 times the reference consumption would be more acceptable (see? this is an energy consumption budget!). Improvements can be continued in order to reach this goal…
Building Highly Available Database Applications for Apache Derby
Emmanuel Cecchet
Principal architect - Emic Networks
Chief architect – ObjectWeb consortium
Motivations
- Database tier should be
- scalable
- highly available
- without modifying the client application
- database vendor independent
- on commodity hardware
Scaling the database tier – Master-slave replication
- Cons
- failover time/data loss on master failure
- read inconsistencies
- scalability
Scaling the database tier – Atomic broadcast
- Cons
- atomic broadcast scalability
- no client side load balancing
- heavy modifications of the database engine
Scaling the database tier – SMP
- Cons
- Cost
- Scalability limit
Well-known hardware + database vendors here
Scaling the database tier – Shared disks
• Cons
– still expensive hardware
– availability
Outline
- RAIDb
- C-JDBC
- Derby and C-JDBC
- Scalability
- High availability
RAIDb concept
- Redundant Array of Inexpensive Databases
- RAIDb controller
- gives the view of a single database to the client
- balance the load on the database backends
- RAIDb levels offers various tradeoff of performance and fault tolerance
RAIDb levels
- RAIDb-0
- partitioning
- no duplication and no fault tolerance
- at least 2 nodes
RAIDb levels
- RAIDb-1
- mirroring
- performance bounded by write broadcast
- at least 2 nodes
RAIDb levels
- RAIDb-2
- partial replication
- at least 2 copies of each table for fault tolerance
- at least 3 nodes
RAIDb levels composition
• RAIDb-1-0
– no limit to the composition depth
Outline
• RAIDb
• C-JDBC
• Derby and C-JDBC
• Scalability
• High availability
C-JDBC overview
- Middleware implementing RAIDb
- 100% Java implementation
- open source (LGPL)
- Two components
- generic JDBC driver (C-JDBC driver)
- C-JDBC Controller
- Read-one, Write all approach
- provides eager (strong) consistency
- Supports heterogeneous databases
Using C-JDBC as an open source driver for Derby
![Diagram: the application server runs the C-JDBC JDBC driver in its JVM and talks to a C-JDBC controller running embedded Derby in another JVM]
Inside the C-JDBC Controller
Each virtual database inside the C-JDBC Controller contains an Authentication Manager and a Request Manager (recovery log, scheduler, query result cache, load balancer), plus a Connection Manager and a native JDBC driver (here, the Derby driver) per database backend; the controller also hosts database dump management and a checkpointing service. Client applications (Servlet, EJB, ...) connect through the C-JDBC driver, and a JMX administration console reaches the controller over HTTP or RMI.
Virtual Database
- gives the view of a single database
- establishes the mapping between the database name used by the application and the backend specific settings
- backends can be added and removed dynamically
- configured using an XML configuration file
Authentication Manager
- Matches the real login/password used by the application with the backend-specific login/password
- Administrator login to manage the virtual database
Scheduler
- Manages concurrency control
- Specific implementations for RAIDb 0, 1 and 2
- Pass-through
- Optimistic and pessimistic transaction level
- uses the database schema that is automatically fetched from backends
Request cache
- 3 optional caches
- tunable sizes
- parsing cache
- parse request skeleton only once
- INSERT INTO t VALUES (?,?,?,?)
- metadata cache
- column metadata
- fields of a request
- result cache
- caches results from SQL requests
- tunable consistency
- fine grain invalidations
- optimizations for findByPk requests
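This parse-once behavior maps directly onto plain JDBC prepared statements. The sketch below (standard JDBC, not C-JDBC-specific code, with a made-up table t and placeholder connection details) shows a hundred executions all sharing the single skeleton INSERT INTO t VALUES (?,?,?,?):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ParsingCacheDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.objectweb.cjdbc.driver.Driver");
        Connection c = DriverManager.getConnection(
                "jdbc:cjdbc://host/db", "login", "password");
        // Same skeleton for every execution: parsed once, reused from the cache.
        PreparedStatement ps =
                c.prepareStatement("INSERT INTO t VALUES (?,?,?,?)");
        for (int i = 0; i < 100; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name-" + i);
            ps.setDouble(3, i * 1.5);
            ps.setBoolean(4, i % 2 == 0);
            ps.executeUpdate();
        }
        ps.close();
        c.close();
    }
}
```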
Load balancer 1/2
- **RAIDb-0**
- query directed to the backend having the needed tables
- **RAIDb-1**
- read executed by current thread
- write executed in parallel by a dedicated thread per backend
- result returned if one, majority or all commit
- if one node fails but others succeed, failing node is disabled
- **RAIDb-2**
- same as RAIDb-1 except that writes are sent only to nodes owning the updated table
Load balancer 2/2
- Static load balancing policies
- Round-Robin (RR)
- Weighted Round-Robin (WRR)
- Least Pending Requests First (LPRF)
- Request sent to the node that has the shortest pending request queue
- Efficient even if backends do not have homogeneous performance
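A minimal sketch of the LPRF idea follows; the classes are illustrative stand-ins, not C-JDBC's actual load balancer types:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of Least Pending Requests First; not C-JDBC's classes.
class Backend {
    final String name;
    final AtomicInteger pending = new AtomicInteger();
    Backend(String name) { this.name = name; }
}

class LeastPendingRequestsFirst {
    Backend choose(List<Backend> backends) {
        Backend best = backends.get(0);
        for (Backend b : backends) {
            if (b.pending.get() < best.pending.get()) {
                best = b;
            }
        }
        best.pending.incrementAndGet(); // request enqueued on the chosen node
        return best;                    // caller decrements when the query completes
    }
}
```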
Connection Manager
- C-JDBC driver provides transparent connection pooling
- Connection pooling for a backend
- no pooling
- blocking pool
- non-blocking pool
- dynamic pool
- Connection pools defined on a per login basis
- resource management per login
- dedicated connections for admin
Recovery Log
- Checkpoints are associated with database dumps
- Record all updates and transaction markers since a checkpoint
- Used to resynchronize a database from a checkpoint
- JDBCRecoveryLog
- store log information in a database
- can be re-injected in a C-JDBC cluster for fault tolerance
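To make the resynchronization step concrete, here is an illustrative sketch of a log replay; the RECOVERY table and its id/sqlStmt columns follow the RecoveryLogTable configuration shown later, while the class and method names are hypothetical:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical sketch: replay logged updates onto a restored backend,
// starting from the log id recorded at the checkpoint.
public class RecoveryReplay {
    static void replayFrom(long checkpointId, Connection logDb, Connection backend)
            throws Exception {
        try (Statement logStmt = logDb.createStatement();
             ResultSet rs = logStmt.executeQuery(
                     "SELECT sqlStmt FROM RECOVERY WHERE id > " + checkpointId
                     + " ORDER BY id");
             Statement replay = backend.createStatement()) {
            while (rs.next()) {
                replay.executeUpdate(rs.getString("sqlStmt")); // re-apply update
            }
        }
    }
}
```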
Functional overview (read)
```
connect myDB (login, password)
execute SELECT * FROM t
```
Functional overview (write)
execute INSERT INTO t ...
Failures
execute **INSERT INTO t ...**
- No 2 phase-commit
- parallel transactions
- failed nodes are automatically disabled
Outline
- RAIDb
- C-JDBC
- Derby and C-JDBC
- Scalability
- High availability
Highly available web site
- Apache clustering
- L4 switch, RR-DNS, One-IP techniques, LVS, …
- Tomcat clustering
- mod_jk (T4), mod_proxy/mod_rewrite (T5), session replication
- Database clustering
- C-JDBC
Result cache
- Cache contains a list of SQL→ResultSet
- Policy defined by queryPattern→Policy
- 3 policies
- EagerCaching: variable granularities for invalidations
- RelaxedCaching: invalidations based on timeout
- NoCaching: never cached
<table>
<thead>
<tr>
<th>RUBiS bidding mix with 450 clients</th>
<th>No cache</th>
<th>Coherent cache</th>
<th>Relaxed cache</th>
</tr>
</thead>
<tbody>
<tr>
<td>Throughput (rq/min)</td>
<td>3892</td>
<td>4184</td>
<td>4215</td>
</tr>
<tr>
<td>Avg response time</td>
<td>801 ms</td>
<td>284 ms</td>
<td>134 ms</td>
</tr>
<tr>
<td>Database CPU load</td>
<td>100%</td>
<td>85%</td>
<td>20%</td>
</tr>
<tr>
<td>C-JDBC CPU load</td>
<td>-</td>
<td>15%</td>
<td>7%</td>
</tr>
</tbody>
</table>
Configuring C-JDBC as a Derby driver (1/3)
- **copy c-jdbc-driver.jar** in client application classpath
```
Class.forName("org.objectweb.cjdbc.driver.Driver");
Connection c = DriverManager.getConnection("jdbc:cjdbc://host/db", "login", "password");
```
- **copy derby.jar** in `$CJDBC_HOME/drivers`
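From the application's point of view nothing else changes; a complete, minimal client against the virtual database could look like this (standard JDBC calls only, with placeholder host, database and credentials):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CjdbcClientDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.objectweb.cjdbc.driver.Driver");
        Connection c = DriverManager.getConnection(
                "jdbc:cjdbc://host/db", "login", "password");
        try (Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT * FROM t")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // first column of each row
            }
        }
        c.close();
    }
}
```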
![Diagram: client application using the C-JDBC driver (org.objectweb.cjdbc.driver.Driver, jdbc:cjdbc://host/db) against a C-JDBC controller in SingleDB configuration, which reaches embedded Derby through jdbc:derby:path;create=true]
Configuring C-JDBC as a Derby driver (2/3)
```xml
<?xml version="1.0" encoding="UTF8"?>
<!DOCTYPE C-JDBC PUBLIC "-//ObjectWeb//DTD C-JDBC 1.1//EN" ...>
<C-JDBC>
<VirtualDatabase name="xpetstore">
<AuthenticationManager>
<Admin> <User username="admin" password=""/> </Admin>
<VirtualUsers> <VirtualLogin vLogin="user" vPassword="x"/> </VirtualUsers>
</AuthenticationManager>
<DatabaseBackend name="derby"
driver="org.apache.derby.jdbc.EmbeddedDriver"
url="jdbc:derby:c:/xpetstore;create=true"
connectionTestStatement="values 1">}
<ConnectionManager vLogin="user" rLogin="APP" rPassword="APP">
<VariablePoolConnectionManager initPoolSize="1" maxPoolSize="50"/>
</ConnectionManager>
</DatabaseBackend>
</VirtualDatabase>
</C-JDBC>
```
Configuring C-JDBC as a Derby driver (3/3)
```xml
<RequestManager>
<RequestScheduler>
<SingleDBScheduler level="passThrough"/>
</RequestScheduler>
<RequestCache>
<MetadataCache/>
<ParsingCache/>
<ResultCache granularity="table"/>
</RequestCache>
<LoadBalancer>
<SingleDB/>
</LoadBalancer>
</RequestManager>
</VirtualDatabase>
</C-JDBC>
```
Highly available web site
- Multiple databases
- choosing RAIDb level
- recovery log for
- adding nodes dynamically
- recovering from failures
Derby clustering with C-JDBC
The client application uses the C-JDBC driver (org.objectweb.cjdbc.driver.Driver) with the URL jdbc:cjdbc://host1/db to reach a C-JDBC controller in RAIDb configuration. That controller uses two backends:
- a Derby Network Server, reached through the DB2 Universal Driver for Derby (com.ibm.db2.jcc.DB2Driver) with the URL jdbc:derby:net://host2:1527/db;create=true;retrieveMessagesFromServerOnGetMessage=true;
- a second C-JDBC controller in SingleDB configuration, reached through the C-JDBC driver with the URL jdbc:cjdbc://host3/db, which runs an embedded Derby (jdbc:derby:path;create=true).
Configuring C-JDBC with Derby Network server
• Virtual database configuration file
```xml
<DatabaseBackend name="derby1"
driver="com.ibm.db2.jcc.DB2Driver"
url="jdbc:derby:net://localhost:1527/xpetstore;create=true;retrieveMessagesFromServerOnGetMessage=true;"
connectionTestStatement="values 1"> <ConnectionManager .../> </DatabaseBackend>
```
Configuring C-JDBC with Derby/C-JDBC
- Virtual database configuration file
```xml
<DatabaseBackend name="derby2"
driver="org.objectweb.cjdbc.driver.Driver"
url="jdbc:cjdbc://host/xpetstore"
connectionTestStatement="values 1"> <ConnectionManager .../> </DatabaseBackend>
```
Configuring C-JDBC Clustering with Derby (1/2)
```xml
<RequestManager>
<RequestScheduler>
<RAIDb-1Scheduler level="passThrough"/>
</RequestScheduler>
<RequestCache>
<MetadataCache/>
<ParsingCache/>
<ResultCache granularity="table"/>
</RequestCache>
<LoadBalancer>
<RAIDb-1>
<RAIDb-1-LeastPendingRequestFirst/>
</RAIDb-1>
</LoadBalancer>
```
Configuring C-JDBC Clustering with Derby (2/2)
```xml
<RecoveryLog>
<JDBCRecoveryLog
driver="com.ibm.db2.jcc.DB2Driver"
create=true;retrieveMessagesFromServerOnGetMessage=true;"
login="APP" password="APP">
<RecoveryLogTable tableName="RECOVERY"
idColumnType="BIGINT NOT NULL" sqlColumnName="sqlStmt"
sqlColumnType="VARCHAR(8192) NOT NULL"
extraStatementDefinition=", PRIMARY KEY (id)"/>
<CheckpointTable tableName="CHECKPOINT"/>
<BackendTable tableName="BACKENDTABLE"/>
<DumpaTable tableName="DUMPTABLE"/>
</JDBCRecoveryLog>
</RecoveryLog>
</RequestManager>
</VirtualDatabase>
```
Controller replication
<VirtualDatabase name="myDB">
<Distribution/>
...
</VirtualDatabase>
Outline
- RAIDb
- C-JDBC
- Derby and C-JDBC
- Scalability
- High availability
C-JDBC vertical scalability
- allows nested RAIDb levels
- allows tree architecture for scalable write broadcast
- necessary with large number of backends
- C-JDBC driver re-injected in C-JDBC controller
C-JDBC vertical scalability
- RAIDb-1-1 with C-JDBC
- no limit to composition depth
![Diagram: a top-level RAIDb-1 C-JDBC controller whose backends are reached through the C-JDBC driver, each backend being another RAIDb-1 controller that talks to its Derby instances through the Derby JDBC driver]
Vertical scalability
- Addresses JVM scalability issues
- Distributing large number of connections on many backends
TPC-W benchmark *(Amazon.com)*
- Nearly linear speedups with the shopping mix
Outline
• RAIDb
• C-JDBC
• Derby and C-JDBC
• Scalability
• High availability
Powered by C-JDBC
Controller replication
- Prevent the controller from being a single point of failure
- Group communication for controller synchronization
- C-JDBC driver supports multiple controllers with automatic failover
```
jdbc:c-jdbc://node1:25322,node2:12345/myDB
```
Controller replication
jdbc:cjdbc://node1,node2/myDB
Total order reliable multicast
Mixing horizontal & vertical scalability
![Diagram: two full-replication C-JDBC controllers over DB 1-4 using the native JDBC driver, each also using the C-JDBC driver to reach a third full-replication controller over DB 5-7]
Building initial checkpoint
- Dump initial Derby database using any tools (tar, zip, …)
- Initial checkpoint inserted in RecoveryLog
Logging
• Backend is enabled
• All database updates are logged (SQL statement, user, transaction, …)
Adding new backends 1/3
- Add new backends while system online
- Restore dump corresponding to initial checkpoint
Adding new backends 2/3
- Replay updates from the log
![Diagram of adding new backends 2/3]
Adding new backends 3/3
- Enable backends when done
Making new checkpoints (1/3)
- Disable one backend to have a coherent snapshot
- Mark the new checkpoint entry in the log
- Dump with tar/zip
Making new checkpoints (2/3)
- Replay missing updates from log
![Diagram showing C-JDBC Controller, Recovery Log, Derby JDBC driver, and JDBC Recovery Log with dump for initial checkpoint and dump for last checkpoint.]
Making new checkpoints (3/3)
- Re-enable backend when done
Handling failures
- A node fails!
- Automatically disabled but administrator fix needed
Recovery 1/3
- Restore latest dump
Recovery 2/3
- Replay missing updates from log
Recovery 3/3
- Re-enable backend when done
Demo – xPetstore/Derby
- open source implementation of Petstore
- servlet version
- C-JDBC used as a driver for Derby remote access
C-JDBC today
• Web site
– ~200,000 hits/month
– ~44,000 downloads
• Community
– 27 committers both industrial & academics
– c-jdbc@objectweb.org: ~300 subscribers, 200-300 msgs/month
– translations in Japanese, Italian, German, Chinese, Turkish, French
• RPM on JPackage.org
Current limitations
- JDBC only
- Distributed joins
- Out parameters for stored procedures
- Some JDBC 3.0 extensions
- XA support through XAPool only
- network partition/reconciliation not supported
Conclusion
• RAIDb
– RAID-like scheme for databases
• C-JDBC
– open source middleware for database replication
– performance scalability
– high availability
• Derby & C-JDBC
– open source driver for Derby
– high availability solution for Derby
Q&A
Thanks to all users and contributors ...
http://c-jdbc.objectweb.org
Bonus slides
HORIZONTAL SCALABILITY
Horizontal scalability
- JGroups for controller synchronization
- Groups messages for writes only
![Diagram: clients connecting to a single controller over DB1-DB4, then to two replicated controllers (Controller1, Controller2) over the same backends]
Horizontal scalability
Centralized write approach issues
- Issues with transactions assigned to connections
Horizontal scalability
- General case for a write query
```plaintext
DistributedVirtualDatabase@controller1
  execWriteRequest(AbstractWriteRequest)
  -> Scheduler (distributed scheduling handler)
  -> write query handler: send the request to all controllers,
     wait for the scheduler replies
  -> Load balancer (distributed load balancer handler) executes the write
     and notifies its scheduler of completion
  -> notify the other controllers of completion (success or not)
  -> return the result

DistributedVirtualDatabase@controller2, @controller3
  DistributedRequestManager receives the request,
  schedules it (distributed scheduling handler),
  executes it through its load balancer (distributed load balancer handler),
  and notifies its scheduler of completion
```
Horizontal scalability
Solution: No backend sharing
- 1 multicast + n unicast [+ 1 multicast]
Horizontal scalability
- Issues with JGroups
- resources needed by a channel
- instability of throughput with UDP
- performance scalability
- TCP better than UDP but
- unable to disable reliability on top of TCP
- unable to disable garbage collection
- ordering implementation is sub-optimal
- Need for a new group communication layer optimized for cluster
Horizontal scalability
- JGroups performance on UDP/FastEthernet
![Graph showing performance results with different node counts and network loads.]
Budget High Availability
- High availability infrastructure “on a budget”
- Typical eCommerce setup
- http://www.budget-ha.com
OpenUSS: University Support System
- eLearning
- High availability
- Portability
- Linux, HP-UX, Windows
- InterBase, Firebird, PostgreSQL, HypersonicSQL
- http://openuss.sourceforge.net
Flood alert system
- Independent nodes synchronized with C-JDBC
- VPN for security issues
- http://floodalert.org
J2EE benchmarking
- Large scale J2EE clusters
- http://jmob.objectweb.org
Browsing mix performance
![Graph showing throughput in requests per minute vs. number of nodes. The graph compares different configurations: Single DB, RAIDb-0, RAIDb-1 RR, RAIDb-1 LPRF, RAIDb-2 RR, RAIDb-2 LPRF. Each configuration is represented by a different marker or line style. The x-axis represents the number of nodes, ranging from 0 to 6. The y-axis represents throughput in requests per minute, ranging from 0 to 900.]
Shopping mix performance
![Graph: throughput in requests per minute vs. number of nodes for Single DB, RAIDb-0, RAIDb-1 RR, RAIDb-1 LPRF, RAIDb-2 RR, RAIDb-2 LPRF]
Ordering mix performance
![Graph: throughput in requests per minute vs. number of nodes for Single DB, RAIDb-0, RAIDb-1 RR, RAIDb-1 LPRF, RAIDb-2 RR, RAIDb-2 LPRF]
|
olmocr_science_pdfs
|
2024-11-25
|
2024-11-25
|
b779668b3ad64af4fd9e22e26c259c2c375d2673
|
A Safety-Centric Change Management Framework by Tailoring Agile and V-Model Processes
Abdallah M. Salameh, Ph.D. Candidate; University of Salford, Manchester, UK*
Omar T. Jaradat, Ph.D. Candidate; Mälardalen University; Västerås, Sweden*
Keywords: safety case, contracts, impact analysis, change management, agile software development, agile tailoring, V-model, XP, Kanban
Abstract
Safety critical systems are evolutionary and subject to preventive, perfective, corrective or adaptive changes during their lifecycle. Changes to any part of those systems can undermine the confidence in safety since changes can refute articulated claims about safety or challenge the supporting evidence on which this confidence relies. Changes to the software components are no exception. In order to maintain the confidence in the safety performance, developers must update their system and its safety case. Agile methodologies are known to embrace changes to software where agilists strive to manage changes, not to prevent them. In this paper, we introduce a novel framework in which we tailor a hybrid process of agile software development and the traditional V-model. The tailored process aims to facilitate the accommodation of non-structural changes to the software parts of safety critical systems. We illustrate our framework in the context of ISO 26262 safety standard.
Introduction
Many safety critical systems are subject to a compulsory or advisory certification process, which often necessitates building the systems in compliance with domain-specific safety standards (Jaradat & Bate, 2017). Safety standards are becoming the main guide for the development and maintenance of the hardware and software parts of safety critical systems. Safety standards also form the basis for the approval and certification of those systems (Denney et al., 2015). Software systems, in general, are subject to different types of changes (e.g., preventive, perfective, corrective or adaptive) during the different stages of their life-cycle. In order to maintain the confidence in safety after accommodating a change, developers are required to update the safety case, which in turn requires identifying, re-analysing, and re-checking the impacted parts of the system and generating a new valid set of evidence (Jaradat & Bate, 2017). Despite the clear recommendations by different safety standards to adequately maintain and review the systems and their safety cases, the standards offer little or no advice on how such operations can be carried out (T. Kelly & J. McDermid, 1999). There is an increasing need for globally accepted methods and techniques to enable easier change accommodation in safety critical systems without incurring disproportionate cost compared to the size of the change. However, since broader re-verification and re-validation require more effort and time, it is important for any proposal that aims to facilitate system changes to delimit the impact of those changes.
*The author's names are listed in alphabetical order
Safety standards in many safety critical system domains adopt the traditional V-model as a development process for building the systems. Despite the effectiveness of the validation and verification that the V-model provides, in addition to other advantages (e.g., it is easy to estimate costs, create timelines, and stick to deadlines), the model has a well-known drawback when it comes to handling system changes. This is particularly true for changes to software systems and their requirements. Following the V-model implies that changes to software components require revisiting the system requirements and all later stages to perform a broad and costly impact analysis. Hence, accepting software changes while using a V-model based process is not a trivial task.
Unlike the series of isolated phases in the V-model, agile methods depend on iterative and incremental development of software to reduce cost and accelerate time to market, in addition to a focus on producing more maintainable code (Salameh, 2011; Tarwani & Chug, 2016). Software developers who follow agile methods break down their projects into manageable fragments, which enables rapid and responsive handling of software changes. The agile way of working minimises the shortcomings of traditional sequential methods and improves the software development process in a more cost-efficient way (Tarwani & Chug, 2016). The alignment of the development process with a dynamic environment is a critical motivation for adopting Agile Software Development (ASD) (Cao et al., 2010). Test Driven Development (TDD) is an important agile process that brings many benefits, such as reducing the potential consequences of software defects. TDD protects the system from future failures proactively, which leads to an acceleration of the maintenance process (Knippers, 2011).
The work in this paper does not seek to conduct a comparative study between agile methods and the V-model. The main contribution of this paper is to propose XP-Kan-Safe, a novel maintenance framework that facilitates the accommodation of non-structural software changes in safety critical systems by utilising the strengths of both agile methods and the V-model. More concretely, we reconcile the known, effective validation and verification process of the V-model with the known, effective practices and TDD process of agile methods. We exploit the usage of safety contracts (Bate et al., 2003) as: 1) stitches that connect the V-model, Extreme Programming (XP) and Kanban into our tailored process, and 2) means to enable a tri-directional impact analysis process. Our hypothesis is that ASD can resolve some observed maintenance challenges of the V-model while maintaining the software parts of systems.
Background and Motivation
Safety Cases and Safety Arguments
A safety case (also known as an assurance or safety assurance case) is: "A structured argument, supported by a body of evidence that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given operating environment" (00-56 Standard, 2015). A safety case shall comprise both safety evidence (e.g., safety analyses, software inspections, or functional tests) and a safety argument explaining that evidence (Jaradat et al., 2014). Safety cases might contain an implicit safety argument, but some safety standards require an explicit argument that is usually expressed in terms of a defined hierarchy of safety claims and sub-claims supported by a body of evidence (00-56 Standard, 2015). There are several ways to represent safety arguments (e.g., textual, tabular, graphical). In this paper, we use the Goal Structuring Notation (GSN) (GSN Standard, 2011), which provides a graphical means of communicating (1) safety argument elements: claims (goals), argument logic (strategies), assumptions, context, and evidence (solutions), and (2) the relationships between these elements (Jaradat et al., 2015a). Figure 1 shows the main notations of the GSN.
Figure 1—The main notations of the Goal Structuring Notation (GSN)
**Maintenance of Safety Critical Systems and Their Safety Cases**
Change requests should be assessed before decision makers decide whether or not to accept them. The assessment should reveal whether the change can cause unreasonable risks, and what it would cost to implement the change. Hence, system developers should understand the change and the potential risks it might carry before they identify the impacted parts. Misunderstanding the change might lead developers to skip those parts of the system that depend on the changed assumptions. The developers also need to understand the dependencies between the system parts in order to identify the affected parts correctly. For example, the effect of a change can propagate to other parts of the system (creating a ripple effect) and cause unforeseen violations of the acceptable safety limits. If the impact of a change is not clear, developers might be conservative and perform wider analyses and verification (i.e., check more elements than strictly necessary), which exacerbates the cost problem of safety cases. It is also necessary for the developers to describe how the change affects the system parts in order to correctly estimate the cost of responding to it. Otherwise, the response to a change might generate unplanned further changes to which the system must again respond, and this requires more cost than originally estimated.
**ISO 26262 Safety Standard**
ISO 26262 (ISO 26262:2011, 2011) regulates the automotive domain and is intended to be applied to safety-related systems that include one or more electrical and/or electronic systems. The following points summarise the safety requirements decomposition given in the ISO 26262 guidelines:
1. After identifying hazards, the standard recommends formulating Safety Goals (SGs) to eliminate or mitigate them. The standard defines a safety goal as a top-level safety requirement resulting from the hazard analysis and risk assessment. Safety goals are not expressed in terms of technological solutions, but in terms of functional objectives.
2. Identification of SGs leads to the functional safety concept. The objective of the functional safety concept is to derive the Functional Safety Requirements (FSRs) from the SGs, and to allocate them to the preliminary architectural elements. At least one FSR shall be specified for each SG. Derivation of FSRs can be supported by safety analyses (e.g., Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis, Hazard and Operability Study (HAZOP)) in order to develop a complete set of effective functional safety requirements.
3. The functional concept leads to the technical safety concept. The first objective of the latter is to specify the Technical Safety Requirements (TSRs) and their allocation to system elements. The second objective is to verify that the TSRs comply with the functional safety requirements. TSRs are used to derive Software Safety Requirements (SSRs).
**Safety Contracts**
Contract-based design (Benvenuti et al., 2008) is an approach in which the design process is seen as a successive assembly of components, where a component's behaviour is represented in terms of assumptions about its environment and guarantees about its behaviour. Hence, contracts are intended to describe functional and behavioural properties for each design component in the form of assumptions and guarantees. A contract is said to be a safety contract if it guarantees a property that is traceable to a hazard. Using contracts in the development of safety critical systems is not a novel idea; many works utilise contracts for building, reusing or maintaining safety critical systems (e.g., (Bate et al., 2003; Jaradat et al., 2015a; Jaradat et al., 2015b)). The cost of maintaining, reusing and changing software components is lessened when using contracts, as developers may rework software components with knowledge of the constraints placed upon them (Bate et al., 2003). In this paper, we use contracts to support the maintainability of safety critical systems. We also suggest including additional information in safety contracts in order to enable effective traceability.
**Agile Software Development (ASD)**
Compared to traditional software engineering approaches, ASD targets complex systems and product development with dynamic, non-deterministic and non-linear characteristics. ASD methods (e.g., XP, Kanban, Scrum) evolve through collaboration between self-organising and cross-functional teams by sharing the same philosophy and utilising the appropriate practices for their contexts.
Each agile method has its own set of features (e.g., practices, terminologies, and tactics) and those features should reflect ASD values and principles. However, agile methods vary in the strategies they adopt to reflect those values and principles. For example, Kanban is known for its rapid response to software requirement changes since it allows the team to instantly postpone some change requests in order to start with other emergent requests. Scrum might do the same, but not after the completion of a sprint planning meeting and team commitment. XP teams are amenable to change within their iterations as long as the team has not started work on the particular feature that needs to be exchanged with the new feature. There is no standard recommendation as to how an agile method should implement its features (Campanelli & Parreiras, 2015).
Organisations typically adapt software development methodologies to be in line with their needs and contexts, covering the full spectrum of the software development life-cycle (Heeager & Rose, 2015; Salameh, 2011). In fact, there is no single agile method that can be adopted for any arbitrary context or that efficiently covers all phases in the development life-cycle. Hence, organisations might not adopt an entire agile method; rather, they combine different processes from different agile methods based on their needs and contexts.
**Agile Tailoring**
The process in which an agile method is adapted to a specific project situation, in a responsive way that accommodates the encountered challenges and covers the intended interplay between contexts dynamically, is called agile tailoring. There are two main approaches to tailoring agile methods: contingency factors and method engineering theory (Campanelli & Parreiras, 2015). The first approach handles tailoring by keeping multiple methods on standby in an organisation (e.g., the Crystal family (Abrahamsson et al., 2003)). The selection of a standby method is based on project size and criticality, as well as the development context, such as uncertainty level, impact and structure. The second approach is based on meta-method processes and proposes the creation of a new method, to be applied in specific contexts, from existing method fragments (a fragment represents a set of practices) (Campanelli & Parreiras, 2015). Despite the flexibility of this approach, it introduces challenges such as how to control the fragments or how to assemble the method for a context-specific situation by bringing in the appropriate fragments and integrating them into one framework (Campanelli & Parreiras, 2015). In this paper, we tailor our framework using the method engineering approach.
**The Kanban Method**
Kanban is based on lean principles: it tries to remove waste from the production process by embracing rules that limit Work In Progress (WIP) and by measuring the time to finish tasks (Campanelli & Parreiras, 2015). Kanban does not prescribe a specific set of roles or process steps; rather, it encourages its users to start from the existing context by understanding and emphasising the customers’ needs (Ahmad et al., 2013). Kanban is deemed an approach to process change for organisations, providing sufficient visibility and understanding of the workflow and its progress. Kanban is all about visual signs (aka Kanban cards) which represent individual work items accompanied by their critical information. Those cards move across a board (aka Kanban board). The board is partitioned into vertical lanes which are typically titled according to the names of the development life-cycle phases (e.g., Analysis, Development, Testing). These lanes can be partitioned further to specify the current state of each phase (To Do, Doing and Done). The location of a card on the board indicates the progress of the work and its current state. Kanban shows the assigned work for each team member, communicates priorities and highlights bottlenecks via cycle or lead time and the cumulative flow diagram (Ahmad et al., 2013; Campanelli & Parreiras, 2015).
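To make the board mechanics concrete, here is a minimal, illustrative sketch of a Kanban board with phase lanes and a WIP limit; the `Card` and `Board` classes and the particular `wip_limit` value are our own assumptions, not prescribed by Kanban itself:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    title: str
    info: str = ""   # critical information accompanying the work item

@dataclass
class Board:
    # lane name -> {"To Do": [...], "Doing": [...], "Done": [...]}
    lanes: dict = field(default_factory=lambda: {
        phase: {"To Do": [], "Doing": [], "Done": []}
        for phase in ("Analysis", "Development", "Testing")
    })
    wip_limit: int = 3   # assumed per-lane limit on work in progress

    def move(self, card: Card, lane: str, state: str) -> None:
        # Enforce the WIP limit before pulling a card into "Doing".
        if state == "Doing" and len(self.lanes[lane]["Doing"]) >= self.wip_limit:
            raise RuntimeError(f"WIP limit reached in {lane}")
        for states in self.lanes.values():          # remove the card wherever it is
            for cards in states.values():
                if card in cards:
                    cards.remove(card)
        self.lanes[lane][state].append(card)

board = Board()
change = Card("Change request #1", "non-structural software change")
board.move(change, "Analysis", "Doing")   # the card is now WIP in Analysis
```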
**The XP Method**
The XP method intends to improve software quality and responsiveness to changing customer requirements. XP is considered a lightweight agile method that focuses on cost savings, unit tests before and alongside coding activities, frequent full system integration and frequent releases (Campanelli & Parreiras, 2015). XP comprises six phases: exploration, planning, iterations to release, productionising, maintenance and death (Salameh, 2011). Only the exploration, planning and iterations-to-release phases are involved in our tailored framework.
During the exploration phase, the customers describe the features they wish to have in the first release of their system by writing each of them on a story card (Abrahamsson et al., 2017). Our tailored framework is designed to deal with changes to a system that has already been built with the V-model. Hence, in our case the features are considered as changes to the software system. More clearly, safety engineers (who represent the customers) write change requests on story cards and discuss them with the team manager. During the planning phase, the story cards should be prioritised, an agreement on the first small release should be made, and the time span required to implement the story cards should be estimated (Abrahamsson et al., 2017). In the iteration and release planning phase, each release should be incremented by exactly one iteration. The development team should break the requested features (i.e., the requested changes in our case) down into several small releases. The customer selects the stories that should be implemented in a specific iteration. The XP planning game is a close interaction between the customer and the development team, in which the latter estimates the effort needed to implement the stories.
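As a small, purely illustrative sketch of this bookkeeping, the following models story cards with priorities and effort estimates and picks cards for an iteration within a capacity budget; all class names, field names and sample stories are our own assumptions:

```python
from dataclasses import dataclass

@dataclass
class StoryCard:
    change_request: str
    priority: int       # lower number = more urgent, set by the safety engineers
    estimate: float     # effort estimated by the development team (person-days)

def plan_iteration(cards, capacity):
    # Select the highest-priority cards that fit into the iteration capacity.
    chosen, used = [], 0.0
    for card in sorted(cards, key=lambda c: c.priority):
        if used + card.estimate <= capacity:
            chosen.append(card)
            used += card.estimate
    return chosen

backlog = [StoryCard("Adjust torque limit check", 1, 3.0),
           StoryCard("Refine sensor timeout", 2, 2.5),
           StoryCard("Rework logging format", 3, 4.0)]
print([c.change_request for c in plan_iteration(backlog, capacity=6.0)])
# -> ['Adjust torque limit check', 'Refine sensor timeout']
```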
**A Maintenance Framework to Facilitate Change Management**
In this section, we build upon the background section to propose a new framework which aims to streamline the change management process for non-structural software changes in safety critical systems. The framework is referred to as XP-Kan-Safe and it comprises two main processes: the Preliminary Process and the Change Management Process.
Figure 2 provides a conceptual model of the framework. The conceptual model encompasses three phases: 1) an Analysis phase covering the derivation of safety contracts, 2) a Planning phase covering the planning game, and 3) an Implementation phase covering TDD and other XP practices. The grey background of the model represents the Kanban board.
After completing the safety analysis on the system level, safety contracts should be derived to guarantee the resultant SGs. A safety contract that guarantees a SG is referred to as “SG contract”. The assumptions of a SG contract should capture the FSRs that fulfil the guaranteed SG. Furthermore, a contract should be derived to guarantee every assumed FSR in SG contracts after completing the safety analysis on the safety function level. A safety contract that guarantees a FSR is referred to as “FSR contract”. The assumptions of a FSR contract should capture the TSRs that implement the guaranteed FSR. Finally, a contract should be derived to guarantee every assumed TSR in FSR contracts; such contracts are referred to as “TSR contracts”. The assumptions of a TSR contract should capture the SSRs that implement the guaranteed TSR after completing the safety analysis on the software components level.
Failure Modes and Effects Analysis (FMEA) is recommended by many safety standards (including ISO 26262) as a safety analysis tool to identify potential failure modes. We enable the derivation of safety contracts from FMEAs by adding an extra column to the FMEA table, in which safety analysts, together with requirement engineers, cite their derived contracts. FMEA can be deficient when it comes to investigating multiple failures. Hence, safety analysts might use different tools, such as Fault Tree Analysis (FTA), to search for the effects of multiple failures. Our preliminary process takes this into account and manages the derivation of safety contracts from both FMEAs and FTAs. Figure 3 shows the connection between FTA and FMEA, in addition to an example of a derived safety contract.
A guarantee in a contract and its related assumptions are the main elements of the contract, and they help in understanding the relationships and dependencies among the safety requirements. However, they might not be enough for analysts to identify the artefacts and GSN safety argument elements impacted by changes, because they do not provide information as to how the different parts are related to each other. For instance, identifying an impacted TSR will not directly lead to the impacted test cases and the items of evidence which need to be replaced. In order to enhance the traceability between the requirements (i.e., guarantees), other related artefacts and GSN elements, safety contracts should be enriched with additional information. To this end, system developers should include additional information in the derived contracts as follows:
1. Elements in the system architecture: all derived safety requirements should be allocated to elements of the system architecture. However, since the changes we are after in this work are non-structural, we assume that the changes have no effect on the system architecture.
2. Test cases: potential failure modes for which a safety requirement is derived should be considered as testing criteria during the verification phase to ensure the prevention of those failures. Including a reference to test cases in safety contracts enables direct traceability between safety analyses (i.e., FMEA and FTA), safety requirements (i.e., guarantees) and test cases. This traceability enables a top-down change impact analysis from the safety analysis down to the test cases. This top-down analysis represents the first direction of the tri-directional impact analysis process in our maintenance framework. While documenting the safety contracts, the reference of test cases might not be available as the test cases
themselves might not be built yet. System developers are required to revisit each contract and add the corresponding test case references whenever they are made available. Furthermore, given that the test cases are available and complete, system developers can annotate them with the contracts’ references. The annotations in the source code of the test cases are important to establish a traceability that enables a bottom-up impact analysis from the test cases up to the safety analysis. This bottom-up analysis represents the second direction of the tri-directional impact analysis process in our maintenance framework.
3. Elements of safety arguments: each safety contract should contain a reference to the related goals, contexts or items of evidence from safety arguments. Whenever GSN references are made available, system developers are required to revisit each contract and add the corresponding GSN reference to it. Including a reference to GSN elements in safety contracts enables direct traceability between a system and its safety case. This traceability enables a bi-directional impact analysis from the system to its safety case and vice versa. More clearly, an affected guarantee can lead to an affected GSN element. Since the safety case presents the logic of how different artefacts are related, impact analysts might use it to highlight the change impact in the related system. The bi-directional change impact analysis represents the third direction of the tri-directional impact analysis process in our framework.
Figure 3 highlights the suggested traceability information and connects it to specimen artefacts and a GSN element.
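To illustrate what such an enriched contract might look like in practice, the following is a minimal sketch; the `SafetyContract` class and all field values and identifiers (e.g., `TC-17`, `G1.2`) are hypothetical, chosen only to mirror the traceability information listed above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SafetyContract:
    contract_id: str
    guarantee: str                      # the SG, FSR or TSR this contract guarantees
    assumptions: List[str] = field(default_factory=list)    # lower-level requirements
    arch_elements: List[str] = field(default_factory=list)  # allocated architecture elements
    test_cases: List[str] = field(default_factory=list)     # references to verifying test cases
    gsn_refs: List[str] = field(default_factory=list)       # related GSN goals/contexts/evidence

# The SG -> FSR -> TSR chain produced by the preliminary process:
sg_contract = SafetyContract("C-SG1", guarantee="SG1",
                             assumptions=["FSR1"], gsn_refs=["G1"])
fsr_contract = SafetyContract("C-FSR1", guarantee="FSR1",
                              assumptions=["TSR1"], test_cases=["TC-17"], gsn_refs=["G1.2"])
tsr_contract = SafetyContract("C-TSR1", guarantee="TSR1",
                              assumptions=["SSR1"], test_cases=["TC-18"])
```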
The Change Management Process
In this section, we describe the second process of XP-Kan-Safe. This process and its activities are the result of tailoring ASD and the V-model. The main objective of this tailored process is to guide whoever is involved in the change management activities, from the arrival of a change until the generation of a new test results report. Figure 4 presents the flow of these activities. The activities of the Change Management Process are described as follows:
Figure 4—The change management process of XP-Kan-Safe framework
**Activity 1: Understand the change and its impact in the system and its safety case.** Once a change request is placed, *Activity 1* should be followed, in which the safety engineers should understand the nature of the change and determine its potential effects in the system and its safety case. In order to initiate the Kanban management process, safety engineers should create a card that describes the change request in more technical terms and visualise it as WIP in the Analysis phase. The outcome of this activity should provide plausible data about the impacted parts of the system and its safety case.
**Activity 2: Identify the impacted contracts.** In this activity, all safety contracts related to the change should be identified. The benefit of applying the first process of XP-Kan-Safe (i.e., the Preliminary Process) is most fully realised in this activity, since using safety contracts helps to provide a systematic impact analysis through the utilisation of the tri-directional impact analysis. The identified contracts should be listed on the Kanban card.
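As an illustration of how the contract links can drive this identification, here is a hedged sketch of the top-down direction of the tri-directional impact analysis; the dictionary layout and all identifiers are assumptions, not part of the framework's definition:

```python
# From a changed requirement, follow the assumption links of the contracts that
# guarantee it, collecting every suspect contract along the way (top-down).
def impacted_contracts(changed_req, contracts):
    # contracts: list of dicts {"id", "guarantee", "assumptions", "test_cases"}
    by_guarantee = {c["guarantee"]: c for c in contracts}
    suspects, frontier, seen = [], [changed_req], set()
    while frontier:
        req = frontier.pop()
        if req in seen or req not in by_guarantee:
            continue
        seen.add(req)
        c = by_guarantee[req]
        suspects.append(c)
        frontier.extend(c["assumptions"])   # descend one requirement level
    return suspects

contracts = [
    {"id": "C-FSR1", "guarantee": "FSR1", "assumptions": ["TSR1"], "test_cases": ["TC-17"]},
    {"id": "C-TSR1", "guarantee": "TSR1", "assumptions": ["SSR1"], "test_cases": ["TC-18"]},
]
print([c["id"] for c in impacted_contracts("FSR1", contracts)])
# -> ['C-FSR1', 'C-TSR1']
```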
**Activity 3: Terminate: Forward the change request to the related team.** If no safety contract is identified as suspect in *Activity 2*, the change request has nothing to do with the functional safety of the system (no safety requirements are affected). In this specific case, the change request should be forwarded to the relevant team and no further continuation of the change management process is needed.
**Activity 4: Investigate the impacted contracts to estimate the required size of work.** There is no perfect impact analysis that can determine the effects of a change in the system and its artefacts at first glance. That is, it is unlikely that the team will find out precisely what might get impacted merely by looking at the documented requirement and without iterating the impact analysis process. Hence, further investigation should be conducted to gain sufficient confidence in the perceived impact of a change. To this end, this activity should be followed to investigate the impacted contracts further. During this activity, a preliminary meeting should be held in which the safety engineers, who represent the on-site customer with respect to XP, together with the development team, determine whether more impacted contracts can be identified. Any additionally identified safety contracts should be added to the Kanban card. Safety contracts should support the collaboration between safety engineers and the development team in delimiting the impacted parts of the system through the tri-directional impact analysis process. It is worth mentioning that any need to modify an existing contract or derive a new one will necessitate the application of this activity.
**Activity 5: Derive new contracts or modify existing contracts.** Since changes might introduce other changes, a change might lead to modifying or deriving other contracts (i.e., requirements) that were not thought of earlier in *Activity 2*. In this activity, safety engineers and system developers derive new contracts or modify the existing ones to capture the newly introduced requirements or to update the already captured requirements, respectively. An initial cost of the change accommodation and its timeframe are two among several factors upon which the approval decision is made. The involvement of the development team in Activities 4 and 5 should cover the estimation of the initial amount of work and the time needed to complete it. Safety engineers and system developers should agree on: 1) what should be changed or added (i.e., the size of the work) and 2) the acceptance of the accompanying potential risk to safety functions. Subsequently, they should submit their agreement to the management, which can either decline or accept the change request. Submission of the agreement concludes the Analysis phase, which means that the Kanban card should move on the board from (Analysis → Doing) to (Analysis → Done).
**Activity 6: Document the change and its rejection.** If the change request is rejected by the management, the performed investigation and the management decision should be documented (ISO 26262:2011, 2011). The rejection implies that the Kanban card should be closed.
**Activity 7: Plan the implementation of the change.** If the change request is approved by the management, the Kanban card becomes available for development. The planning method adopted in our change management process complies with XP. This implies that the implementation of the change request is initiated by the planning game. The input to the planning game is the estimated work and the impacted safety contracts. The output is a set of tasks estimated at a finer grain than those estimated earlier in Activity 4.
**Activity 8: Implementation by TDD.** In this activity, the implementation of the change is carried out using TDD. For those contracts that are subject to modification, system developers should find the related test cases (using the parameters that refer to them in the contracts) and modify them accordingly. Since modifying a contract might require creating new test cases, system developers should cite the newly added test cases in the corresponding contracts and vice versa. This is particularly important to support bi-directional traceability between the test cases and the contracts, and it also serves as preparation for future changes. Citing newly added test cases in the contracts applies to the contracts derived during the impact analysis process (after the Preliminary Process) as well as in Activity 5. Moreover, after implementing the production code required to satisfy the derived test cases, other already existing test cases might be impacted by the newly added code. If the solution is to modify or add new requirements, system developers should inform the safety engineers about the suggested changes to the requirements. In this case, the changes suggested by the system developers should be declared as unexpected changes. Afterwards, safety engineers and system developers should arrange an on-the-fly meeting to investigate the discovered unexpected changes (Activity 4). The meeting should reveal 1) whether or not the suggested changes might introduce unreasonable risks (i.e., their criticality level) and 2) the size of work required to cope with the suggested changes. The size of work is defined, in this context, by its influence on the earlier planning game (Activity 7), so that big work means a modification of the release planning is required. If the suggested changes are non-critical, system developers should implement them or forward them to the relevant team. If the suggested changes are critical, one of two possible actions should be performed:

1. If the size of work is small, developers should do the fixes on-the-fly and cite the related test cases in the contracts and vice versa.
2. If the size of work is big, developers should either follow the exchange strategy of XP to re-prioritise the tasks within the current iteration of the planned release, or plan the tasks for the next release.
**Activity 9: Run all related tests.** In this activity, system developers should first utilise continuous integration, in accordance with XP, to avoid delays caused by integration problems. Subsequently, a continuous testing process should be initiated to obtain immediate feedback on any possible violation of the safety countermeasures that prevent unreasonable risks associated with a software release. The scope of testing should be extended from a bottom-up assessment (from test cases to safety requirements) to the validation of safety goals. In case of any violation of safety requirements after running the continuous testing, system developers should follow Activity 8.
**Activity 10: Generate new versions of the modified test reports.** This activity should be followed once the continuous testing has completed successfully. New reports of the test results should be generated to replace the out-of-date reports in the safety case. It is important to update the references to these reports in the safety contracts of the system and in its safety case.
Discussion and Conclusion
Maintaining safety critical systems in the face of changes is a challenging process because of: 1) the lack of awareness of a change's effects and the ripple of these effects through the system, 2) the lack of documentation of dependencies among the artefacts generated during the development process, and 3) the lack of traceability between a system and its safety case. Following the V-model to accommodate system changes can be very strict. This might be justifiable for structural system changes, since many parts get impacted and there is no precise clue about the size of work needed to maintain the system; for non-structural software changes, it might not be. ASD can provide promising methods for maintaining software under change. For example, XP puts great emphasis on the technical aspects (e.g., TDD, continuous integration and code refactoring), and Kanban brings visibility to the workflow and improves the communication and collaboration among the stakeholders. Using ASD for maintaining safety critical systems is promising, but it still needs to comply with the current safety standards. In this paper, we introduced XP-Kan-Safe as a novel framework in which we tailor a hybrid process of ASD and the traditional V-model. The tailored process exploits safety contracts to connect ASD and the V-model and to enable a tri-directional impact analysis process. Future work will focus on a more in-depth case study to validate both the feasibility and the efficacy of the process, as well as on fully automating its application.
Acknowledgment
This work has been partially supported by the Swedish Foundation for Strategic Research (SSF) (through SYNOPSIS and FiC Projects) and the EU-ECSEL (through SafeCOP project). Both authors contributed equally to the paper and their names are listed in alphabetical order.
References
Biographies
Abdallah M. Salameh, Ph.D. candidate, School of Computing, Science & Engineering, University of Salford, Manchester, UK, Tel: +46 721844015, e-mail – a.salameh@edu.salford.ac.uk.
Abdallah Salameh is a senior developer at Bambora Group AB - Sweden and a Ph.D. candidate in the School of Computing, Science and Engineering at the University of Salford, U.K. His research interests include agile software development, where the main focus is on tailoring the processes in large-scale software intensive organisations.
Omar T. Jaradat, Ph.D. candidate, School of Innovation, Design and Engineering, Mälardalen University, Högskoleplan 1, SE-72123, Västerås, Sweden, Tel: +46 21101369, Fax: +46 21101460 e-mail – omar.jaradat@mdh.se.
Omar Jaradat is a Ph.D. candidate in the Innovation, Design and Engineering department at Mälardalen University. His research interests include safety argumentation for safety critical systems, where the main focus is on maintenance of safety-critical systems and safety cases.
Scheduling on a Ring with Unit Capacity Links
Perry Fizzano & Clifford Stein
Department of Computer Science
Dartmouth College
Hanover, New Hampshire 03755
Abstract
We consider the problem of scheduling unit-sized jobs on a ring of processors with the objective of minimizing the completion time of the last job. Unlike much previous work we place restrictions on the capacity of the network links connecting processors. We give a polynomial time centralized algorithm that produces optimal length schedules. We also give a simple distributed 2-approximation algorithm.
1 Preliminaries
We consider the problem of scheduling unit-sized jobs on a network of processors arranged in a ring. An instance, $I$, of network scheduling can be described by $I = (G, J)$ where $G = (V, E)$ is an undirected graph representing the network and $J$ is the set of jobs to be processed. Using scheduling nomenclature, we say there are $m$ processors (or machines), labeled $p_1, p_2, \ldots, p_m$, and $n$ jobs. Each vertex in $V$ corresponds to a processor and each edge corresponds to a network link (notice this means there are $m$ nodes in the graph). Each edge has an associated capacity which restricts the amount of data transmitted across it in a single time step. In this paper we assume that the graph, $G$, is a ring, i.e. a network in which processor $p_i$ is connected to $p_{i+1}$ and $p_{i-1}$. (We assume throughout the paper that all addition on processor indices is done mod $m$, i.e. processor $p_{m+i}$ is identical to $p_i$.) The jobs in $J$ are indistinguishable and all of unit size; thus an instance can be described by just the number of jobs on each processor at a given time. We denote by $j_i$ the number of jobs currently on processor $p_i$.
The network model just described allows a machine to process a job on the same step that it passes a job. This model is the same as that in many previous papers [1, 3, 4, 5, 6] and is supported by current technology [2]. Additional restrictions that we place on the model are that it takes unit time to traverse a network link and that the capacity of each link is one job per time unit. This implies that a processor can pass one job to each neighbor in one time step, but it cannot pass two jobs to one neighbor.
Furthermore, we consider two network environments: centralized and distributed. A centralized environment is one in which global information is available about the number of jobs currently residing on each machine. Conversely, a distributed environment has no source of global knowledge: each processor only knows its own state, and any information about other processors must be gathered explicitly.
The layout of the remainder of the paper is as follows. In Section 2 we give an algorithm for scheduling in a centralized setting and in Section 3 we prove it produces optimal length schedules. In Section 4 we give the timing analysis of the algorithm and develop a polynomial time solution to the problem. We present an algorithm for scheduling in a distributed environment in Section 5 and we make some concluding remarks in Section 6.
2 The Centralized Algorithm
2.1 Outline
We begin by presenting an algorithm for scheduling in a centralized environment. The algorithm we present is actually a decision procedure. Given a deadline, $d$, the algorithm answers whether or not there is a schedule of length $d$ or less. We turn this into an optimization procedure by binary searching. The space over which we binary search is bounded below by $\lceil n/m \rceil$ and bounded above by the maximum number of jobs starting on any one machine.
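As a sketch of this optimization wrapper, assuming a function `ring_scheduler(d, jobs)` that implements the decision procedure of Figure 1 (not shown here), the binary search might look as follows:

```python
import math

def shortest_schedule(jobs, ring_scheduler):
    # jobs[i] = number of unit jobs initially on processor p_i
    lo = math.ceil(sum(jobs) / len(jobs))   # every machine processes <= 1 job/step
    hi = max(jobs)                          # worst case: a machine keeps all its jobs
    while lo < hi:
        mid = (lo + hi) // 2
        if ring_scheduler(mid, jobs):       # "yes, a schedule of length mid exists"
            hi = mid
        else:
            lo = mid + 1
    return lo
```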
For the algorithm, think of each time step as two halves. In the first half each machine processes a job if it has one, and in the second half each machine may transmit a single job to each of its neighbors. Decisions about whether to pass a job or not are made by comparing the number of jobs on each machine to the time remaining until the deadline $d$. Machines that have too many jobs to finish by the deadline are labeled surplus machines; those that could take more jobs and still finish by the deadline are labeled deficit machines; and those that will finish exactly at the deadline with their current number of jobs are labeled on-target machines. Once we determine which machines are surplus, deficit and on-target, we compute a way to send the maximum number of jobs away from surplus machines to deficit machines. This process is repeated until either all of the jobs are processed or the deadline $d$ is reached.
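A small sketch of the labeling step, under the convention above that a machine holding $j$ jobs with $d - t$ steps remaining is surplus if $j > d - t$, deficit if $j < d - t$, and on-target otherwise:

```python
def label_machines(jobs, d, t):
    target = d - t   # jobs a machine can still finish by the deadline
    return ["surplus" if j > target else
            "deficit" if j < target else
            "on-target" for j in jobs]

print(label_machines([5, 1, 3, 0], d=5, t=2))
# -> ['surplus', 'deficit', 'on-target', 'deficit']
```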
2.2 Details of One Step of the Algorithm
The first thing we need to do is determine which nodes are surplus, deficit and on-target. Given the deadline $d$ and the current time $t$, the value that is considered on-target is $d - t$. Given the on-target value we can easily determine the labels of all the machines. The procedure ESTABLISH ROUTES then greedily computes the maximum number of routes between nodes with surplus and nodes with deficit. A route is defined as a set of edges that connects a deficit node with a surplus node. A greedily created route that starts at a surplus node stops at the closest deficit node in the clockwise direction. Similarly, a greedily established route that starts at a deficit node stops at the closest clockwise surplus node. Jobs are then sent according to the results of ESTABLISH ROUTES. The only non-trivial part of the decision procedure described in Figure 1 is the procedure ESTABLISH ROUTES, which we proceed to explain here.
The first step of the procedure ESTABLISH ROUTES is to form a new graph, $G'$, which differs slightly from the original graph, $G$. To form $G'$ we contract on-target nodes and duplicate nodes with surplus or deficit greater than one. Formally, the contraction of a node $v$, originally connected to nodes $w$ and $x$, amounts to removing $v$ from the graph and connecting $w$ and $x$ directly. The duplication of a node $v$, originally connected to nodes $w$ and $x$, means that we replace $v$ by two nodes $v_1$ and $v_2$. Node $v_1$ is connected to $w$, node $v_2$ is connected to $x$ and finally $v_1$ is connected to $v_2$. Notice that contractions and duplications preserve the ring structure of the graph.
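The following sketch illustrates one possible encoding of this transformation; we represent the ring as a clockwise list of (label, amount) pairs, where amount is the node's units of surplus or deficit (0 for on-target nodes). The representation is ours, not the paper's:

```python
def build_g_prime(nodes):
    # nodes: clockwise list of (label, amount) pairs describing the ring G
    g_prime = []
    for label, amount in nodes:
        if label == "on-target":
            continue                        # contraction: neighbours join directly
        g_prime.extend([label] * amount)    # duplication into adjacent unit nodes
    return g_prime                          # still a ring, read cyclically

ring = [("surplus", 2), ("on-target", 0), ("deficit", 1), ("deficit", 2)]
print(build_g_prime(ring))
# -> ['surplus', 'surplus', 'deficit', 'deficit', 'deficit']
```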
The next step is to pick a starting point. If there are two adjacent surplus nodes we pick one of them otherwise any surplus node can serve as the starting point. Once we have a starting point we walk around the ring once in a clockwise direction and make routes in a greedy manner. Figure 2 details the procedure Establish Routes.
One final observation is that the routes we have established in the graph $G'$ correspond directly to routes in $G$ even though we have contracted out on-target nodes. Consider a route (or any piece of a route) in $G'$ that goes from vertex $v$ to vertex $w$ such that $v$ and $w$ are not directly connected in $G$. Instead they are separated by on-target nodes $u_1, u_2, \ldots, u_k$. This route corresponds to a route in $G$ where $v$ passes a job to $u_1$, $u_1$ passes to $u_2$ and so on until $u_k$ passes a job to $w$. Notice that this chain of passing effectively lets us pass a job further than just one link in a time step.
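A minimal sketch of this translation, under the assumption that a route's endpoints are given as indices into the original ring and that every intermediate node simply relays the job clockwise:

```python
def expand_route(labels, src, dst):
    # labels: clockwise list of node labels in the original ring G;
    # src, dst: endpoint indices of a route in G (dst clockwise of src)
    n = len(labels)
    hops, i = [], src
    while i != dst:
        hops.append((i, (i + 1) % n))   # one job crosses this link this step
        i = (i + 1) % n
    return hops

labels = ["surplus", "on-target", "on-target", "deficit"]
print(expand_route(labels, 0, 3))
# -> [(0, 1), (1, 2), (2, 3)]: nodes 1 and 2 relay the job onward
```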
3 Correctness of the Algorithm
First we need to prove that the subroutine ESTABLISH ROUTES is producing the maximum number of routes between surplus and deficit machines. Let $\mathcal{S}$ be the number of surplus nodes in $G'$ and let $\mathcal{D}$ be the number of deficit nodes in the graph $G'$. We will use the following simple lemma.
**Lemma 1** The maximum number of routes between surplus and deficit nodes in $G'$ is no more than the minimum of $\mathcal{S}$ and $\mathcal{D}$.
**Proof:** Without loss of generality assume that $\mathcal{S}$ is smaller. The best we could do is have every surplus node on a different route since each route must consist of at least one surplus node and one deficit node. Hence the maximum number of routes we could establish is $\mathcal{S}$. $\square$
RING SCHEDULER (d)
*d* is the length of the schedule to be checked.
for *i* = 1 to *d*
- for each processor *p_k* with a job, *j_k = j_k - 1*
- label machines as deficit, surplus or on-target (the target is *d - i* on step *i*)
- call ESTABLISH ROUTES
- send job(s) as specified by ESTABLISH ROUTES
if every machine has zero jobs left then
answer "yes there is a schedule of length *d*"
else
answer "no schedule of length *d*"
Figure 1: The decision procedure
ESTABLISH ROUTES
- create the new graph, *G'*, by contracting on-target nodes and duplicating nodes with a surplus or deficit of two or more
- if there are two adjacent surplus nodes then
*start* = the clock-wise node
- else
*start* = any surplus node
- while all the nodes have not been considered
- establish a clock-wise route from *start* to the first node which completes a route, call it *v'*.
- *start* = the first node clock-wise of *v'*
- translate the routes established in *G'* to routes in *G*
Figure 2: The procedure ESTABLISH ROUTES
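To make the greedy walk concrete, here is a hedged sketch of ESTABLISH ROUTES on $G'$, represented as a clockwise list of labels; we read each route as running from its starting node to the first node of the opposite label, and the helper assumes at least one surplus node exists:

```python
def establish_routes(g_prime):
    # g_prime: clockwise list of "surplus"/"deficit" labels (at least one surplus)
    n = len(g_prime)
    # start at the clockwise member of an adjacent surplus pair if one exists,
    # otherwise at any surplus node
    start = next(((i + 1) % n for i in range(n)
                  if g_prime[i] == g_prime[(i + 1) % n] == "surplus"),
                 g_prime.index("surplus"))
    routes, i = [], 0
    while i < n:                               # a single pass around the ring
        a = (start + i) % n                    # a new route begins here
        j = i + 1
        while j < n and g_prime[(start + j) % n] == g_prime[a]:
            j += 1                             # walk over same-label nodes
        if j == n:
            break                              # leftover nodes complete no route
        routes.append((a, (start + j) % n))    # opposite label completes the route
        i = j + 1
    return routes

print(establish_routes(["surplus", "surplus", "deficit", "deficit", "deficit"]))
# -> [(1, 2), (3, 0)]
```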
Note that we could get fewer than $\mathcal{S}$ routes if we could not arrange all the surplus nodes to be on different routes (e.g., if there are three surplus nodes in a row).
Next we justify the starting point that the algorithm chooses to establish the maximum number of routes.
**Lemma 2** The first route that Establish Routes makes is in some optimal solution.
**Proof:** There are two cases to consider. The first case holds if there are adjacent surplus nodes and the second case holds otherwise.
If there are adjacent surplus nodes, $s_1$ and $s_2$, such that $s_1$ is clockwise of $s_2$ then we claim that you can start at $s_1$ and walk clockwise around the ring to produce the maximum number of routes.
Let $OPT = (b_1, b_2, \ldots, b_k)$ be any maximum set of routes. (Notice that the ordering of the routes in $OPT$ is not relevant; we are just trying to achieve a solution of maximum cardinality.) We show how to convert this solution into a solution of the same cardinality which contains a route whose starting point corresponds to the starting point of the first route our algorithm would establish. There are four cases to consider.
(i) $s_1$ is on a route that does not contain $s_2$ in $OPT$.
This is what our algorithm does.
(ii) $s_2$ starts a route which goes through $s_1$ in $OPT$.
We can drop $s_2$ from the route. Now we have a set of routes so that the first node of one route starts at $s_1$ and proceeds clockwise.
(iii) Neither of $s_1$ and $s_2$ are on a route in $OPT$.
Look at the first route in the optimal set that is clockwise from $s_1$. If the node on this route which is closest to $s_1$ is a surplus node, then attach $s_1$ to the front of this route. If the node closest to $s_1$ is a deficit node, $v$, then replace the original route to $v$ with a route from $s_1$ to $v$. Now we have a solution of the same cardinality whose first route is one that our algorithm produces.
(iv) $s_1$ starts a route which goes through $s_2$ in $OPT$.
First, take $s_1$ off of this route. Then perform the same trick as in case (iii) with $s_1$.
The second part of the proof handles the case when there are no adjacent surplus nodes in $G'$. In this case we know that there are at least as many deficit nodes as surplus nodes. By Lemma 1 we know that the maximum number of routes we can get is bounded by $\mathcal{S}$. Since there are deficit nodes between every pair of surplus nodes, we can produce $\mathcal{S}$ routes by creating, for each surplus node in turn, a clockwise route to the first deficit node encountered. □
The previous lemma says that the first route that our algorithm forms is compatible with an optimal solution. This will be used as the basis for an inductive proof that shows that Establish Routes forms a maximum set of routes while using this initial route.
**Lemma 3** Establish Routes produces the maximum number of routes between surplus and deficit nodes in the graph $G'$.
**Proof:** Let $A = (a_1, a_2, \ldots, a_l)$ be the set of routes, in clockwise order, produced by our greedy method. By Lemma 2 we know that the first route, $a_1$, is compatible with some optimal solution. Let $OPT = (b_1, b_2, \ldots, b_k)$ be such a solution. Now we claim that after selecting $a_1$ as our initial route we have reduced the problem to a smaller instance of the same problem.
Say the route $a_1$ consists of vertices $v_1, v_2, \ldots, v_x$. The new problem includes the vertices $(v_{x+1}, \ldots, v_m)$, and the solution $OPT - b_1$ must be an optimal solution to this smaller problem. For if it were not, we could obtain a larger set of routes for the original problem by concatenating $a_1$ to the optimal set of routes for the smaller instance. By induction, the greedy choice for every route produces an optimal set of routes. □
Up to this point we have shown that Establish Routes produces the maximum number of routes between surplus and deficit nodes. This implies that we are removing as many jobs as possible from surplus machines on each time step. However, there is one other aspect of the algorithm for which we have not accounted. On each step every machine that has a job processes it. Call this the Greedy Processing Rule. The next lemma shows that this rule does not inhibit the production of optimal length schedules.
**Lemma 4** There exist optimal length schedules that use the Greedy Processing Rule.
**Proof:** Assume we are given an optimal schedule, $S$, of length $d$. Assume that step $i$ is the first step where the schedule $S$ does not use the Greedy Processing Rule.
The claim is that we can replace the \( i^{th} \) step of \( S \) with a step of our schedule, \( S' \), which does use the Greedy Processing Rule. The only problem that could develop is that with a different \( i^{th} \) step \( S' \) might not be able to do the exact same routing as in some subsequent step of \( S \). But notice that this is only a problem when some processor, \( p \), processes its last job sooner in our schedule than it did in \( S \). This may affect a future routing step because some other processor, \( q \), may be on the receiving end of a route which goes through \( p \). Now \( q \) will not be able to receive a job because \( p \) will not be able to pass on any jobs towards \( q \). However, processing this job on \( p \) instead of \( q \) does not lengthen the schedule because after this step every processor has no more jobs than it did in \( S \). Hence, the length of the schedule from here on can be no greater in \( S' \) than in \( S \). \( \Box \)
**Theorem 1** Ring Scheduler will correctly determine if a schedule of length \( d \) exists.
**Proof:** If there is no schedule of length \( d \) then there will be no routing and processing scheme which could achieve it. Ring Scheduler will not erroneously find a schedule of length \( d \) because it does not process more than one job per time unit, nor does it send more than one job across a link in any time step.
If there is a schedule of length \( d \) then we claim that Ring Scheduler will find it. By Lemma 4 we know that the Greedy Processing Rule allows us to produce an optimal length schedule. By Lemma 3 we are sending as many jobs as possible away from surplus machines on each time step. We claim that this greedy approach is optimal. The reason is that it doesn’t matter what order the surplus machines get rid of work because they all must get rid of all their surplus by the deadline in order for the schedule to complete by time \( d \). Imagine some surplus processor holding onto a job in order to pass more jobs on a subsequent time step. Since that processor is holding one extra job the most number of extra routes that could be established at a later time is one. Thus, more routes are not created overall by holding onto jobs. \( \Box \)
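Before moving on, it may help to see the whole procedure in one place. The following Python sketch mirrors the structure of Ring Scheduler under the simplifying assumptions noted in the comments; all names are ours, and the paper's Establish Routes is more general (for instance, a machine far in surplus can be the end of two routes):

```python
def classify(jobs, steps_left):
    """Label each machine relative to the remaining time: more jobs than
    remaining steps means it cannot finish alone (surplus); fewer means
    it will idle before the deadline (deficit)."""
    return ["surplus" if j > steps_left else
            "deficit" if j < steps_left else
            "on-target" for j in jobs]

def establish_routes(jobs, steps_left):
    """Greedy clockwise pairing (simplified: each surplus node sources at
    most one route, and a route never passes another surplus node).
    On-target nodes only forward jobs, so a route is summarized here by
    its two endpoints."""
    m = len(jobs)
    state = classify(jobs, steps_left)
    routes, taken = [], [False] * m
    for i in range(m):
        if state[i] != "surplus":
            continue
        k = (i + 1) % m
        while k != i:
            if state[k] == "deficit" and not taken[k]:
                routes.append((i, k))
                taken[k] = True
                break
            if state[k] == "surplus":
                break  # leave the nodes beyond for the next surplus node
            k = (k + 1) % m
    return routes

def feasible(jobs, d):
    """Can every job be processed within d steps?  Each step, one job moves
    along every route (only the endpoints change, since intermediate nodes
    forward what they receive) and every non-idle machine processes one
    job -- the Greedy Processing Rule."""
    jobs = list(jobs)
    for step in range(d):
        for src, dst in establish_routes(jobs, d - step):
            jobs[src] -= 1
            jobs[dst] += 1
        jobs = [max(j - 1, 0) for j in jobs]
    return all(j == 0 for j in jobs)

def ring_scheduler(jobs):
    """Binary search for the smallest feasible deadline (assumes the
    monotonicity that the paper's analysis establishes)."""
    lo, hi = 1, max(jobs)  # hi: every machine just processes its own pile
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(jobs, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```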
### 3.1 Different Processor Speeds
Until now, all processors could process one job per time unit. We can modify the problem so that some processors can process more jobs than one per time unit. Let \( s_i \) denote the speed of processor \( p_i \). The speed of a processor is defined as the number of jobs it can process in one unit of time. If we know the speed of each processor then we can determine if a processor is a surplus, deficit or on-target node by calculating whether it can process its remaining jobs by the deadline \( d \).
This modification does not change the basic structure of the problem. No node will receive a job that it can’t process by the deadline and the most number of jobs are being sent away from surplus nodes on each time step.
We can also modify the space over which we perform binary search. Let \( S \) denote the sum of the speeds of all the processors. A lower bound on the schedule length can be expressed as \( \lceil \frac{2n}{S} \rceil \). An upper bound on the schedule length is the maximum time any processor would take to finish with no jobs getting passed.
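As a small illustration of these bounds (the helper name and argument conventions are ours; `jobs[i]` and `speeds[i]` describe processor \( p_i \)):

```python
from math import ceil

def search_bounds(jobs, speeds):
    """Binary-search interval for the schedule length with per-processor
    speeds (Section 3.1): the lower bound from the text is ceil(2n/S),
    and the upper bound is the slowest finish time with no passing."""
    n = sum(jobs)        # total number of jobs
    S = sum(speeds)      # total processing speed
    lower = ceil(2 * n / S)
    upper = max(ceil(j / s) for j, s in zip(jobs, speeds))
    return lower, upper
```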
### 4 Time Complexity
#### 4.1 A Simple Analysis
The running time of Establish Routes is linear in the number of nodes of the graph \( G' \) (recall that this is within a constant factor of the number of machines, \( m \)).
To check a schedule of length \( d \) we need to run Establish Routes at most \( d \) times. However, the choice of which \( d \) to check is the result of a binary search. The interval over which we search is bounded from below by \( \lceil \frac{2n}{m} \rceil \) and from above by \( n \), which is an upper bound on the maximum number of jobs that start on any one machine.

Thus there can be \( O(\log n) \) invocations of Establish Routes. This gives Ring Scheduler a running time of \( O(dm \log n) \). In the worst case \( d \) is \( O(n) \), which results in a pseudo-polynomial-time algorithm, because the input for this problem can be specified in \( O(m \log m + m \log n) \) space: the jobs are indistinguishable, so only the number of jobs on each machine is necessary to describe the instance.
Previous results of Deng et al. [3] give results for general network structures and general capacities of the network links. However, their solutions are not polynomial either but pseudo-polynomial, because one term of the running time is the number of jobs, \( n \), which as we just said is not bounded by a polynomial in the input size.
Now we analyze a slight modification of our algorithm and show that it runs in polynomial time.
#### 4.2 A Better Analysis
To obtain a faster running time we can take advantage of the fact that no machine ever changes from an on-target machine to a surplus or a deficit machine. Machines that are surplus or deficit will approach a value that is on-target but once they are on-target they never change. Our algorithm enforces this by contracting out on-target nodes from the graph on which we run ESTABLISH ROUTES.
So we can classify each machine in one of the following five states: two or more in surplus, one in surplus, on-target, one in deficit, two or more in deficit. However, since a machine cannot go from any surplus state to any deficit state, and it monotonically approaches the on-target value, it can take on at most three states: either the first three or the last three. Hence, the network as a whole will have at most $3m$ different configurations over the course of the algorithm. We can speed up the algorithm by not running ESTABLISH ROUTES as often. If after an iteration of the algorithm no machines have changed state then the same set of routes will suffice for the next iteration. Therefore, we only need to run ESTABLISH ROUTES $O(m)$ times instead of $O(d)$ times.
To make this procedure realizable we need to be able to compute the time that the current set of routes must change so that a machine does not go from a surplus state to a deficit state or vice versa. This entails recognizing one of two situations. The first is if a machine is two or more in surplus (or deficit) and it is on the end of two routes then as soon as it becomes only one in surplus (or deficit) or on-target we must run ESTABLISH ROUTES again. The second is if a machine is on the end of only one route it will change state when it reaches the on-target value and we must re-run ESTABLISH ROUTES at this point.
Let $t$ be the value that is considered on-target, let $T$ be the current time, and let $j_i$ denote the number of jobs on processor $p_i$. Let $P_1$ be the set of processors that are on the end of exactly one route in the current set of routes, and let $P_2$ be the set of processors that are on the end of two routes. A processor in $P_1$ approaches the on-target value by one job per time step, and a processor in $P_2$ by two jobs per time step, so we can compute the next time that the routes must change as:

$$T + \min\left(\min_{p_i \in P_1} |j_i - t|,\ \min_{p_i \in P_2} \left\lceil \frac{|j_i - t|}{2} \right\rceil\right).$$
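A small helper expressing this computation (our own sketch and naming, not the paper's pseudocode; `one_route` and `two_route` hold the indices of the processors in $P_1$ and $P_2$):

```python
from math import ceil

def next_route_change(T, t, jobs, one_route, two_route):
    """Earliest time the current set of routes must be recomputed.

    A processor at the end of one route closes its gap to the on-target
    value t by one job per step; at the end of two routes, by two per step."""
    gaps = [abs(jobs[i] - t) for i in one_route]
    gaps += [ceil(abs(jobs[i] - t) / 2) for i in two_route]
    return T + min(gaps)
```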
**Theorem 2** Ring Scheduler runs in $O(m^2 \log n)$ time.

**Proof:** The above discussion shows that ESTABLISH ROUTES only needs to be called $O(m)$ times. Given this bound on the number of calls to ESTABLISH ROUTES, we can bound the total running time of the algorithm by $O(m^2 \log n)$, which gives us a polynomial-time algorithm. □
### 5 A Distributed Scheduler
Thus far all of our results have depended on some sort of global knowledge. We needed to know exactly how many jobs each machine had on each time step in order to determine if the machine was a surplus machine or a deficit machine. In a distributed setting this information must be obtained by passing messages around the network. We are assuming that a message can be sent, as well as a job, along the network links each time step. The message is just an integer representing the number of jobs on a given machine so we are not really abusing the limited capacity of the network links.
The basic idea of the algorithm is for each processor to know the state of its neighbors at the previous time step, and then pass a job to either or both neighbors if that neighbor is in danger of being idle on the next time step. The details appear in Figure 3, where we use $j_i$ to denote the number of unprocessed jobs on processor $p_i$. Note that in this description of the algorithm two messages can be sent over a link in one step; it is not hard to reduce this to one.
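The sketch below simulates one synchronous step of this rule (our own naming; Figure 3's exact message format is not reproduced, and the thresholds follow the lemmas in this section). Decisions use the previous step's counts, mirroring the one-step information delay:

```python
def distributed_step(jobs):
    """One synchronous step of the local balancing rule (sketch).
    jobs[i] is the pile on processor i; neighbors are i-1 and i+1 on the ring."""
    m = len(jobs)
    seen = list(jobs)        # last step's counts: what the neighbors know
    received = [0] * m       # jobs arriving this step
    for i in range(m):
        if seen[i] <= 3:
            continue         # only well-stocked processors pass jobs
        for nb in ((i - 1) % m, (i + 1) % m):
            if seen[nb] <= 1:            # neighbor may idle next step
                jobs[i] -= 1
                received[nb] += 1
    # Every machine that has a job processes one, then takes delivery.
    return [max(jobs[i] - 1, 0) + received[i] for i in range(m)]
```

Iterating `jobs = distributed_step(jobs)` until every pile is empty yields the schedule whose length is analyzed below.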
We wish to prove that this algorithm produces schedules of length close to optimal. We will let $I$ denote an instance of the scheduling problem, and $OPT(I)$ the length of the shortest possible schedule for $I$. If algorithm $A$ always yields a schedule of length no more than $\rho OPT(I) + O(1)$ we call $A$ a $\rho$-approximation algorithm. We first show a lower bound on any scheduling algorithm, even one with global knowledge.
**Lemma 5** If the optimal schedule is of length $d$ then no consecutive group of $k$ processors can start with more than $(k+2)d$ total jobs.
**Proof:** Suppose some consecutive group of \(k\) processors starts with more than \((k + 2)d\) total jobs. The best that could be done is to distribute the work evenly among the \(k\) processors, process \(kd\) jobs within the group, and send two jobs out of the region on every time step, handling at most \((k + 2)d\) jobs in \(d\) units of time. Some job would therefore remain unfinished, a contradiction of the optimal schedule length being \(d\). □
This lemma exposes a significant restriction on the way work can be distributed among the processors. For example, no pair of adjacent processors, at time 0, can contain more than \(4d\) jobs. There are also significant restrictions on the conditions under which jobs can be passed.
**Lemma 6** Given a processor \(p_i\), let \(t\) be the earliest time that \(j_i \leq 1\). Then
(a) \(p_i\) receives no jobs before time \(t\).

(b) After time \(t\), \(j_i \leq 3\).

(c) Let \(t' > t\) be the first time that \(j_j \leq 3\), where \(p_j\) is a neighbor of \(p_i\). For any \(k\), \(1 \leq k \leq t' - t\), \(p_j\) passes a job to \(p_i\) in at least half of the time steps between \(t\) and \(t + k\), inclusive.
**Proof:** Part \(a\) is clear from the description of the algorithm. For part \(b\), we observe that due to the time delay between the actual state of a processor and the state that its neighbor is aware of, processor \(p_i\) can receive jobs for two consecutive time steps. When it first receives jobs it must have zero jobs, thus after receiving jobs it has at most two jobs. During the next step, it will process one job, and receive up to two jobs, thus having at most three jobs at the end of the step. However, if it had two at the beginning of the previous step it will receive no more jobs on the next step, and will receive no more jobs until it has processed all of its jobs and has zero remaining. Hence there is no way for the number of jobs to rise above three.
A slightly more careful look at the proof of part \(b\) will suffice to establish part \(c\). At time \(t\), if \(p_i\) has one job, then at times \(t + 1\) and \(t + 2\) it will receive a job from each neighbor that has more than three unprocessed jobs. This is due to the one unit time delay between the actual state of a processor and the state that the processor’s neighbor is aware of. It then takes \(p_i\) at most two steps to process received jobs until it returns to having one job; the cycle continues until \(p_i\)’s neighbors run out of work to pass. So in at least two out of every four steps, passing occurs; furthermore, the passing occurs in the first two steps after \(p_i\) becomes idle. This establishes the claim. □
We now show that the algorithm in Figure 3 is a 2-approximation algorithm.
**Lemma 7** Let \(S'\) be the schedule in which no processor ever passes a job and let \(S\) be the schedule produced by the algorithm in Figure 3. Then \(S\) is no longer than \(S'\).
**Proof:** Let \(m(t)\) be the maximum number of jobs on any processor at time \(t\). In schedule \(S'\) it is always the case that \(m(t + 1) = m(t) - 1\). We will show that in schedule \(S\), \(m(t + 1) \leq m(t) - 1\), thus proving the lemma.
We observe that in \(S\) the only processors that pass jobs have more than three jobs, and that the processor that had the maximum number of jobs \(m(t)\) at time \(t\) has at most \(m(t) - 1\) jobs at time \(t + 1\). Thus, when \(m(t) > 3\), the processors with \(m(t)\) jobs decrease by at least one, and by part (b) of Lemma 6, no processor's load increases above 3, so \(m(t)\) decreases. When \(m(t) \leq 3\), no passing occurs. Therefore at each step of \(S\), \(m(t)\) decreases by at least 1, which implies that the length of \(S\) is at most \(m(0)\), which is the length of \(S'\). \hfill \Box
Now using the previous three lemmas we show that the capacitated ring scheduling algorithm gives schedules of length within a factor of two of optimal.
**Theorem 3** Let $d$ be the length of the optimal schedule. Then the capacitated ring scheduling algorithm produces a schedule of length no more than $2d + 2$.

**Proof:** There are two cases to consider.
Case 1: No processor starts with more than $2d$ work.
By Lemma 7 we know that the schedule length does not increase by passing jobs, thus the maximum schedule length for this case is $2d$.
Case 2: Some processor starts with more than $2d$ work.
Let processor $p_i$ be a processor that starts with more than $2d$ work; that is, $p_i$ has $2d + x$ jobs (for some $0 < x \leq d$). Its neighbors, $p_{i-1}$ and $p_{i+1}$, start with at most $2d - x'$ jobs (for some $x' > x$, by Lemma 5). Assume $p_{i+1}$ starts with $2d - x'$ work, and that this is no less than what $p_{i-1}$ starts with. At time no later than $2d - x'$, $p_{i+1}$ becomes idle. At this point it may receive work from both its neighbors. By part (c) of Lemma 6 we know that during at least half the time steps in the interval from time $2d - x'$ through the time when $j_i$ first goes below 3, $p_i$ will pass jobs to $p_{i+1}$.
At time $2d - x'$, $p_i$ has at most $x + x'$ work. Assume for simplicity that $p_i$ has passed no jobs to its neighbors at any time up to $2d - x'$. It will now begin to pass jobs to its neighbors until it has only three jobs left. In $\lceil \frac{x + x'}{2} \rceil$ time it can pass $\lceil \frac{x + x'}{2} \rceil$ work to each neighbor and process $\lceil \frac{x + x'}{2} \rceil$ jobs. It will then spend three units of time processing the final three jobs. So $p_i$ completes all its jobs in time no more than $2d - x' + \lceil \frac{x + x'}{2} \rceil + 3$, which is less than or equal to $2d + 2$ since $x' > x$. \hfill \Box
We note that a more careful analysis (on a slightly modified algorithm) which goes through a number of cases for the last three steps can be used to show a bound of exactly $2d$.
### 6 Conclusions
We have given a simple and efficient centralized scheduling algorithm to produce optimal length schedules on a ring of processors when the bandwidth of the network links is limited to one job per time unit. This is much faster than the best known algorithm for this instance [3] and, in addition, is the first polynomial-time solution to the problem. This approach has led us to design a simple distributed algorithm for the same network structure that produces schedules within a factor of two of optimal.
IANA Charset Registration Procedures
draft-iana-charset-reg-procedure-01
Abstract
Multipurpose Internet Mail Extensions (MIME) ([RFC-2045], [RFC-2046], [RFC-2047], [RFC-2231]) and various other Internet protocols are capable of using many different charsets. This in turn means that the ability to label different charsets is essential.
This document obsoletes the IANA Charset Registration Procedures originally defined in [RFC2978]. Specifically, this document completely revises the registration procedures and the charset registries. The charset registry is now divided into three parts with separate registration procedures for each.
Note: The charset registration procedure exists solely to associate a specific name or names with a given charset and to give an indication of whether or not a given charset can be used in MIME text objects. In particular, the general applicability and appropriateness of a given registered charset to a particular application is a protocol issue, not a registration issue, and is not dealt with by this registration procedure.
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on October 26, 2015.
Table of Contents
1. Definitions and Notation
   1.1. Requirements Notation
   1.2. Character
   1.3. Charset
   1.4. Coded Character Set
   1.5. Character Encoding Scheme
2. Charset Registration Requirements
   2.1. Required Characteristics
   2.2. New Charsets
   2.3. Naming Requirements
   2.4. Functionality Requirement
   2.5. Usage and Implementation Requirements
   2.6. Publication Requirements
   2.7. MIBenum Requirements
3. The Charset Registry
   3.1. The Recommended charset registry
   3.2. The Widely-used Open Standard charset registry
      3.2.1. Submitting "Widely-used Open Standard" charset Proposals to the IETF Community
      3.2.2. IANA Charset Registration Template
      3.2.3. Charset Reviewer
      3.2.4. IANA Registration of "Widely-used Open Standard" charsets
   3.3. The Other charset subregistry
4. IANA Considerations
   4.1. Publication of Registered Charset List
1. Definitions and Notation
The following sections define terms used in this document.
1.1. Requirements Notation
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
1.2. Character
A member of a set of elements used for the organization, control, or representation of data.
1.3. Charset
The term "charset" (referred to as a "character set" in previous versions of this document) is used here to refer to a method of converting a sequence of octets into a sequence of characters. This conversion may also optionally produce additional control information such as directionality indicators.
Note that unconditional and unambiguous conversion in the other direction is not required, in that not all characters may be representable by a given charset and a charset may provide more than one sequence of octets to represent a particular sequence of characters.
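For instance, the same octet sequence decodes to different character sequences under different charsets; the snippet below uses Python's codec machinery purely to illustrate the definition, not as part of the registration procedure:

```python
octets = b"\xc3\xa9"
print(octets.decode("utf-8"))       # 'é'  -- one character
print(octets.decode("iso-8859-1"))  # 'Ã©' -- two characters
```

Only the octets-to-characters direction must be fully specified; as noted above, the reverse direction need be neither total nor unique.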
This definition is intended to allow charsets to be defined in a variety of different ways, from simple single-table mappings such as US-ASCII [RFC0020] to complex table switching methods such as those that use ISO 2022's [ISO-2022] techniques. However, the definition associated with a charset name must fully specify the mapping to be performed. In particular, use of external profiling information to determine the exact mapping is not permitted.
HISTORICAL NOTE: The term "character set" was originally used in MIME to describe such straightforward schemes as US-ASCII and ISO-8859-1 [ISO-8859] which consist of a small set of characters and a simple one-to-one mapping from single octets to single characters. Multi-octet character encoding schemes and switching techniques make the situation much more complex. As such, the definition of this term was revised to emphasize both the conversion aspect of the process, and the term itself has been changed to "charset" to emphasize that it is not, after all, just a set of characters. A discussion of these issues as well as specification of standard terminology for use in the IETF appears in [RFC2130].
1.4. Coded Character Set
A Coded Character Set (CCS) is a one-to-one mapping from a set of abstract characters to a set of integers. Examples of coded character sets are ISO 10646 [ISO-10646], US-ASCII [RFC0020], and the ISO-8859 series [ISO-8859].
1.5. Character Encoding Scheme
A Character Encoding Scheme (CES) is a mapping from a Coded Character Set or several coded character sets to a set of octet sequences. A given CES is sometimes associated with a single CCS; for example, UTF-8 [RFC3629] applies only to ISO 10646.
2. Charset Registration Requirements
Registered charsets are expected to conform to a number of requirements as described below.
2.1. Required Characteristics
Registered charsets MUST conform to the definition of a "charset" given above. In addition, charsets intended for use in MIME content types under the "text" top-level media type MUST conform to the restrictions on that type described in [RFC2045]. All registered charsets MUST note whether or not they are suitable for use in MIME text.
All charsets which are constructed as a composition of one or more CCS's and a CES MUST either include the CCS's and CES they are based on in their registration or else cite a definition of their CCS's and CES that appears elsewhere.
All registered charsets MUST be specified in a stable, openly available specification. Registration of charsets whose specifications aren't stable and openly available is forbidden.
2.2. New Charsets
This registration mechanism is not intended to be a vehicle for the design and definition of entirely new charsets. This is due to the fact that the registration process does NOT contain adequate review mechanisms for such undertakings.
As such, only charsets defined by other processes and standards bodies, or specific profiles or combinations of such charsets, are eligible for registration.
2.3. Naming Requirements
One or more names MUST be assigned to all registered charsets. Multiple names for the same charset are permitted, but if multiple names are assigned a single primary name for the charset MUST be identified. All other names are considered to be aliases for the primary name and use of the primary name is preferred over use of any of the aliases.
Each assigned name MUST uniquely identify a single charset. All charset names MUST be suitable for use as the value of a MIME content type charset parameter and hence MUST conform to MIME parameter value syntax (see Section 5.1 of RFC 2045). This applies even if the specific charset being registered is not suitable for use with the "text" media type. All charsets MUST be assigned a name that provides a display string for the associated "MIBenum" value defined below. These "MIBenum" values are defined by and used in the Printer MIB [RFC1759]. [[RFC 1759 was obsoleted by RFC 3805 and MIBenum is no longer there. Should we point to http://www.iana.org/assignments/ianacharset-mib instead?]] Such names MUST begin with the letters "cs" and MUST contain no more than 40 characters (including the "cs" prefix) chosen from the printable subset of US-ASCII. Only one name beginning with "cs" may be assigned to a single charset. If no name of this form is explicitly defined, IANA will assign an alias consisting of "cs" prepended to the primary charset name.
Finally, charsets being registered for use with the "text" media type MUST have a primary name that conforms to the more restrictive syntax of the charset field in MIME encoded-words [RFC2047] [RFC2231] and MIME extended parameter values [RFC2231]. A combined ABNF [RFC5234] definition for such names is as follows:
```abnf
mime-charset       = 1*mime-charset-chars
mime-charset-chars = ALPHA / DIGIT /
                     "!" / "#" / "$" / "%" / "&" /
                     "+" / "-" / "^" / "_" / "`" /
                     "{" / "}" / "~"
ALPHA              = %x41-5A / %x61-7A   ; A-Z / a-z, case insensitive
DIGIT              = %x30-39             ; 0-9
```
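A quick illustrative check of these naming rules (our own sketch, not part of the registration procedure; the regular expression mirrors the ABNF above and the "cs" alias rules of this section):

```python
import re

# Letters, digits, and the punctuation permitted by the mime-charset ABNF.
MIME_CHARSET = re.compile(r"[A-Za-z0-9!#$%&+\-^_`{}~]+\Z")

def valid_mime_charset(name: str) -> bool:
    return bool(MIME_CHARSET.match(name))

def valid_cs_alias(name: str) -> bool:
    # "cs" prefix, at most 40 characters total, printable US-ASCII
    # (taking "printable" to exclude the space character).
    return (name.startswith("cs") and len(name) <= 40
            and all(0x21 <= ord(ch) <= 0x7E for ch in name))

print(valid_mime_charset("UTF-8"))   # True
print(valid_mime_charset("UTF 8"))   # False (space not allowed)
print(valid_cs_alias("csUTF8"))      # True
```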
2.4. Functionality Requirement
Charsets MUST function as actual charsets: Registration of things that are better thought of as a transfer encoding, as a media type [RFC2046], or as a collection of separate entities of another type, is not allowed. For example, although HTML could theoretically be thought of as a charset, it is really better thought of as a media type and as such it cannot be registered as a charset.
2.5. Usage and Implementation Requirements
Use of a large number of charsets in a given protocol may hamper interoperability. However, the use of a large number of undocumented and/or unlabeled charsets hampers interoperability even more.
A charset should therefore be registered ONLY if it adds significant functionality that is valuable to a large community, OR if it documents existing practice in a large community. Note that charsets registered for the second reason should be explicitly marked as being of limited or specialized use and should only be used in Internet messages with prior bilateral agreement.
2.6. Publication Requirements
Charset registrations MAY be published in RFCs; however, RFC publication is not required to register a new charset.
The registration of a charset does not imply endorsement, approval, or recommendation by the IANA, IESG, or IETF, or even certification that the specification is adequate. It is expected that applicability statements for particular applications will be published from time to time that recommend implementation of, and support for, charsets that have proven particularly useful in those contexts.
Charset registrations SHOULD include a specification of mapping from the charset into ISO 10646 (Unicode) [Unicode7.0] if specification of such a mapping is feasible.
2.7. MIBenum Requirements
Each registered charset MUST also be assigned a unique enumerated integer value. These "MIBenum" values are defined by and used in the Printer MIB [RFC1759].
A MIBenum value for each charset will be assigned by IANA at the time of registration. MIBenum values are not assigned by the person registering the charset.
3. The Charset Registry
The following procedure has been implemented by the IANA for review and approval of new charsets. In [RFC2978] an Expert Review process was used to add new charsets into the registry. This document changes that model by creating a new charset registry with three new subregistries. For each of the new registries, the registration procedures and initial registrations are provided.
3.1. The Recommended charset registry
The first sub-registry of the full charset registry is the "recommended" charset registry.
New registrations in the "recommended" charset registry require "Standards Action" as defined by [RFC5226]. Specifically, the charset MUST have a standards track RFC that defines the charset itself and MUST ALSO have a standards track RFC recommending its use.
In the RFC that defines the charset, the document MUST have a single recommended MIME charset label following the "mime-charset" syntax defined in Section 2.3. It MUST also state whether it is suitable for MIME text and have a reference to a formal specification or translation table to Unicode [Unicode7.0].
There is one, initial entry in the Recommended charset registry: UTF-8 [RFC3629].
3.2. The Widely-used Open Standard charset registry
The second sub-registry of the full charset registry is the "Widely-used Open Standard" charset registry.
New registrations in the "Widely-used Open Standard" charset registry require "Expert Review" as defined by [RFC5226]. In Section 3.2.2 of this document a template is provided that allows proposals for new charsets in this subregistry.
In the template that describes the charset, the template MUST provide a single recommended MIME charset label following the "mime-charset" syntax defined in Section 2.3. It MUST ALSO state whether it is suitable for MIME text and have a reference to a formal specification or translation table to Unicode.
The following charsets are to be moved from the historic charset registry into the new "Widely-used Open Standard" subregistry: [INSERT A LIST OF CHARSET NAMES HERE. [[GUIDANCE IS REQUIRED FOR THIS ENTRY]]]
3.2.1. Submitting "Widely-used Open Standard" charset Proposals to the IETF Community

Send the proposed "Widely-used Open Standard" charset proposal to the "ietf-charsets@iana.org" mailing list. (Information about joining this list is available on the IANA Website, http://www.iana.org.) This mailing list has been established for the sole purpose of reviewing proposed charset registrations. Proposed charsets are not formally registered and must not be used; the "x-" prefix specified in [RFC2045] can be used until registration is complete.
The posting of a charset to the list initiates a two week public review process.
The intent of the public posting is to solicit comments and feedback on the definition of the charset and the name chosen for it.
3.2.2. IANA Charset Registration Template
To: ietf-charsets@iana.org
Subject: Registration of new charset [names]
Charset name:
(All names must be suitable for use as the value of a MIME Content-Type parameter, see Section 5.1 of RFC 2045.)
Charset aliases:
(All aliases must also be suitable for use as the value of a MIME content-type parameter.)
Suitability for use in MIME text:
Published specification(s):
(A specification for the charset MUST be openly available that accurately describes what is being registered. If a charset is defined as a composition of one or more CCS's and a CES then these definitions MUST either be included or referenced.)
ISO 10646 equivalency table:
(A URI to a specification of how to translate from this charset to ISO 10646 and vice versa SHOULD be provided.)
Additional information:
3.2.3. Charset Reviewer
When the two week period has passed and the registration proposer is convinced that consensus has been achieved, the registration application should be submitted to IANA and the charset reviewer. The charset reviewer, who is appointed by the IETF Applications Area Director(s), either approves the request for registration or rejects it. Rejection may occur because of significant objections raised on the list or objections raised externally. If the charset reviewer considers the registration sufficiently important and controversial, a last call for comments may be issued to the full IETF. The charset reviewer may also recommend standards track processing (before or after registration) when that appears appropriate and the level of specification of the charset is adequate.
The charset reviewer must reach a decision and post it to the ietf-charsets mailing list within two weeks. Decisions made by the reviewer may be appealed to the IESG.
3.2.4. IANA Registration of "Widely-used Open Standard" charsets
Provided that the charset registration has either passed review or has been successfully appealed to the IESG, the IANA will register the charset, assign a MIBenum value and make its registration available to the community.
3.3. The Other charset subregistry
The third subregistry is for all other charsets. Registration of charsets in the "other" charset subregistry is done on a "First Come, First Served" basis as defined by [RFC5226].
4. IANA Considerations
This document requests that IANA completely revise the existing charset registry. The new registry should be divided into three subregistries. These subregistries are: "Recommended charsets", "Widely-used Open Standard charsets" and "Other charsets".
The registration procedure for the "Recommended charset" subregistry is Standards Action. IANA is directed to move the following entries from the [RFC2978] legacy registry to this subregistry: UTF-8 [RFC3629].
The registration procedure for the "Widely-used Open Standard charset" subregistry is Expert Review. IANA is directed to move the following entries from the [RFC2978] legacy registry to this subregistry: INSERT A LIST OF CHARSET NAMES HERE. [[GUIDANCE IS REQUIRED FOR THIS ENTRY]]
The registration procedure for the "Other charset" subregistry is First Come First Served. IANA is directed to move the following entries from the [RFC2978] legacy registry to this subregistry: INSERT A LIST OF CHARSET NAMES HERE. [[GUIDANCE IS REQUIRED FOR THIS ENTRY]]
In all cases the registration template specified in Section 3.2.2 must be used.
4.1. Publication of Registered Charset List
This document directs IANA to create a new XML-based registry for charset registrations. This registry will be divided into three subregistries as specified in Section 3 of this document.
New charset registrations will be published in the new, XML-based registry. The proposed charset will use the approval process appropriate for the intended, designated subregistry.
Legacy charset registrations will be converted to the new XML registry. The instructions for converting the legacy registrations into entries in the new subregistries are documented in Section 4 of this document.
HISTORICAL NOTE: Previously, charset registrations were posted in the anonymous FTP file "ftp://ftp.isi.edu/in-notes/iana/assignments/character-sets" and all registered charsets were listed in the periodically issued "Assigned Numbers" RFC.
5. Security Considerations
The conversion of this IANA registry - and the changes made to the registration procedures for the new subregistries - introduces no known security considerations. Security issues that relate to charsets are dealt with in the RFCs that describe the protocols that use those charsets.
6. Acknowledgements
This document is a revision of RFC 2978 by Ned Freed and Jon Postel and is largely based on their original text.
7. References
7.1. Normative References
[RFC2231] Freed, N. and K. Moore, "MIME Parameter Value and Encoded Word Extensions: Character Sets, Languages, and Continuations", RFC 2231, November 1997.
7.2. Informative References
[ISO-8859] International Organization for Standardization, "Information technology -- 8-bit single-byte coded graphic character sets", ISO/IEC 8859.
Appendix A. Changes Since RFC 2978
Created 3 new subregistries with different IANA registration procedures instead of a single existing one.
Updated references, split them into Normative and Informative. Erratum 357.
Disallow single quotes in charset names (as per RFC 2231). Erratum 1912. Note that vertical bar and backslash characters were prohibited in RFC 2978 (a change from RFC 2278), but the change was never noted in RFC 2978.
Authors' Addresses
Mark Mcfadden
IANA
University of Washington — Computer Science & Engineering
CSE 351, Winter 2022 — Midterm Exam
Wednesday, February 9, 2022 — Friday, February 11, 2022
Name: _____________________________________________________________
UW NetID: ____________________________________________@uw.edu
Instructions
• You have 72 hours to complete the exam, though we expect that it will take you 1–4 hours. Submit your work on Gradescope by Friday, February 11 at 11:59pm. Late submissions will not be accepted.
• You may print this out and (legibly, please!) hand-write your answers, use e-ink on a tablet, or use software such as Adobe Reader to type your answers. Regardless, please use the space provided to help us with grading (except for the last question). Printed exams can be scanned into a PDF using your phone; many such apps are available. Please ensure that the scans are clear and the pages are straight.
• This exam is open note, open book, open Internet. Some questions will require the use of the CSE Linux environment (either attu or the VM).
• Collaboration is permitted, subject to the “Gilligan’s Island Rule.” You may discuss problems with your classmates, and write things down on shared paper or whiteboards, but you may not take any written artifacts from your discussions. You should engage in some kind of unrelated activity for 30 minutes (like watching an episode of Gilligan’s Island) between discussing exam problems and working on the exam, to ensure that you can fully understand and reproduce the solutions yourself.
<table>
<thead>
<tr>
<th>Question</th>
<th>Points</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bits, Bytes, and Nybbles</td>
<td>5</td>
</tr>
<tr>
<td>Stuffed Animal Organization</td>
<td>9</td>
</tr>
<tr>
<td>Pointers and Characters and Numbers, Oh My!</td>
<td>18</td>
</tr>
<tr>
<td>The Careful Design of Pointy Things</td>
<td>8</td>
</tr>
<tr>
<td>Do The Register Shuffle</td>
<td>11</td>
</tr>
<tr>
<td>Stacks Considered hARMful</td>
<td>4</td>
</tr>
<tr>
<td>Don’t Overflow This Stack!</td>
<td>15</td>
</tr>
<tr>
<td>Taking a Step Back</td>
<td>10</td>
</tr>
<tr>
<td>Total:</td>
<td>80</td>
</tr>
</tbody>
</table>
1. (5 points) Bits, Bytes, and Nybbles
Given the 32-bit numeral 0x45480000, interpret it as a...
(a) (1 point) signed int:
(b) (2 points) float:
(c) (2 points) string literal, read from left-to-right (you may use an ASCII table for reference):
2. (9 points) Stuffed Animal Organization
Sam has decided to start collecting Beanie Babies! He needs to figure out a scheme for organizing them, and wants your help. There are two characteristics that he cares about: eye color and number of legs.
Sam has multiple shelves on which he stores his Beanie Babies. Currently, each shelf can hold **up to 12** (but may hold fewer). His goal is to have a list of all his Beanie Babies and their locations (which shelf, and where on the shelf), so that he can easily locate them. He’s come up with two organizational schemes, both of which take up 16 bits of space.
### Proposed Organizational Schemes
1. Store each field (shelf number, position on shelf, eye color, number of legs) separately, with each field taking up 4 bits. Find a Beanie Baby by starting at the topmost shelf and counting shelves from top-to-bottom until he reaches the correct shelf, then counting Beanie Babies from left-to-right.
2. Combine the shelf number and shelf position fields into a new 8-bit field. Find a Beanie Baby by starting from the leftmost Beanie Baby on the topmost shelf, then counting Beanie Babies from left-to-right and from top-to-bottom. (The representation of eye color and number of legs remains the same.)
(a) (3 points) Which scheme can represent more valid positions of Beanie Babies? Justify your answer in 1-2 sentences.
(b) (3 points) Provide one advantage of using scheme 1 instead of scheme 2, given the way that Sam will use this scheme in the real world. Justify your answer in 1-2 sentences.
(c) (3 points) Sam wants to compare Beanie Babies by number of legs. He writes a function `compareLegs`, which takes two arguments, `beanie1` and `beanie2`. It returns 1 if `beanie1` has fewer legs than `beanie2`, and 0 otherwise. But he can't remember how to complete it. Write out the appropriate bit mask to make the function work properly. Assume that the 4 bits storing number of legs are the least significant 4 bits of the data type.
```c
int compareLegs(
unsigned short beanie1,
unsigned short beanie2
) {
return ________________________ < ________________________;
}
```
3. (18 points) Pointers and Characters and Numbers, Oh My!
For this question, refer to the C assignments and memory diagram below, with addresses increasing left-to-right and top-to-bottom. Remember that x86-64 machines are little endian.
```
char *c = 0x19;
short *s = 0xc;
float *f = 0x28;
```
<table>
<thead>
<tr>
<th>Address</th>
<th>+0</th>
<th>+1</th>
<th>+2</th>
<th>+3</th>
<th>+4</th>
<th>+5</th>
<th>+6</th>
<th>+7</th>
</tr>
</thead>
<tbody>
<tr><td>0x00</td><td>1e</td><td>00</td><td>00</td><td>00</td><td>00</td><td>00</td><td>00</td><td>00</td></tr>
<tr><td>0x08</td><td>0e</td><td>11</td><td>11</td><td>11</td><td>11</td><td>11</td><td>11</td><td>11</td></tr>
<tr><td>0x10</td><td>8a</td><td>7c</td><td>6f</td><td>22</td><td>9a</td><td>66</td><td>44</td><td>17</td></tr>
<tr><td>0x18</td><td>33</td><td>77</td><td>6f</td><td>6c</td><td>66</td><td>62</td><td>79</td><td>74</td></tr>
<tr><td>0x20</td><td>65</td><td>73</td><td>00</td><td>44</td><td>00</td><td>00</td><td>88</td><td>c0</td></tr>
<tr><td>0x28</td><td>de</td><td>ad</td><td>be</td><td>ef</td><td>ca</td><td>fe</td><td>f0</td><td>0d</td></tr>
</tbody>
</table>
(a) (15 points) Fill in the C type, hex value, and interpreted value for each of the following C expressions. You should interpret integers in two’s complement, characters as ASCII, and floating-point numbers using the IEEE-754 standard. You may find the floating-point homework useful. For pointers, you can write “pointer” as the interpretation.
<table>
<thead>
<tr>
<th>C Expression</th>
<th>C Type</th>
<th>Hex Value</th>
<th>Interpretation</th>
</tr>
</thead>
<tbody>
<tr>
<td>*s</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>f+2</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>*(c+2)</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>f[-1]</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>*((short *) (c-1))</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
(b) If we treat c as a string literal (i.e., an array of ASCII characters)...
i. (1 point) What is its string value?
```
```
ii. (2 points) How many bytes are taken up by c and the data it points at?
4. (8 points) The Careful Design of Pointy Things
Your intrepid instructor has decided to invent a new programming language, WolfLang, that promises to fix *all* the issues from the languages that preceded it. He decides to include a pointer datatype like C does, but with a few changes in an effort to make it safer.
**Proposed Changes in WolfLang**
1. Pointers can *only* be assigned to the address of a variable that matches their type; they cannot be cast. For example, the following:
```cpp
int x = 3;
float *y = (float *) &x; // invalid!
```
would not be valid in WolfLang because it attempts to cast an `int *` to a `float *`.
2. Pointers cannot be manipulated via arithmetic. For example, the following:
```cpp
int x = 3;
int *xp = &x;
xp = xp + 1; // invalid!
```
would not be valid because it performs pointer arithmetic on `xp`.
What *advantages and disadvantages* do the restrictions in WolfLang’s pointers create compared to C pointers? Give one of each, and discuss what they will mean for programmers using the language.
5. (11 points) Do The Register Shuffle
You come across the following mysterious-looking assembly function. This function takes two arguments. Assume that the C variables for each argument are the same as their register names, i.e., rdi and rsi.
```
mystery:
jmp .L2
.L3:
movb %al, (%rsi)
addq $1, %rdi
addq $1, %rsi
.L2:
movzbl (%rdi), %eax
testb %al, %al
jne .L3
movb $0, (%rsi)
ret
```
(a) (1 point) In the above function, what C variable type is %rdi?
(a) ________________________________
(b) (1 point) What C variable type is %rsi?
(b) ________________________________
(c) (2 points) This function contains a while loop. What is the loop condition?
```
while (*__________ != __________) {
```
(d) (5 points) Fill in the missing parts of the C code that is equivalent to the assembly above:
```
void mystery((answer to a) rdi, (answer to b) rsi) {
while ((answer to c)) {
*rsi = __________;
__________;
__________;
__________;
}
*rsi = __________;
}
```
Hey, there's more to this question! Don't forget to turn the page! 😊
(e) (2 points) On a high level, what does this function accomplish? Explain in 1-2 sentences.
(f) (2 points extra credit) This function is vulnerable to buffer overflow attacks! Briefly explain how.
6. (4 points) Stacks Considered hARMful
Some instruction set architectures, like ARM, provide a register in which a procedure stores its return address before making a call. The return address is only pushed onto the stack when the callee needs to make another call, because the register will be overwritten by the new return address. When this happens, you can think of the return address register like a special-purpose caller-saved register.
Provide one advantage of including this register. Briefly justify your answer.
7. (15 points) Don't Overflow This Stack!
For this problem, we'll examine a strange recursive C function (and corresponding assembly). One of the main goals here is to give you additional experience with using gdb.
In a CSE Linux environment (attu or the CSE VM), execute the following three commands to download the necessary files, and set the correct permissions:
```
wget https://courses.cs.washington.edu/courses/cse351/22wi/files/strange.c
wget https://courses.cs.washington.edu/courses/cse351/22wi/files/strange
chmod +x strange
```
To ensure consistency, please do not recompile strange.
Addresses & Memory Layout
(a) (1 point) Using the print command in gdb, what is the address of the function strange?
(a) ______________________
(b) (1 point) When the compiler created an object file for this code, which table(s) stored the function strange's name?
- Symbol table
- Relocation table
- Symbol and relocation table
(c) (2 points) By looking at the disassembly, what is the highest return address that will be pushed onto the stack by the function strange?
(c) ______________________
(d) (2 points) How many bytes does the code for the function strange take up in memory?
(d) ______________________
Stack Frame Layout
(e) (1 point) How large is the stack frame of strange? Recall that the return address is considered part of the callee's stack frame.
(e) ______________________
(f) (2 points) Which register(s) are pushed onto the stack by strange? Which C variable(s) are stored in these registers after pushing the previous value(s)?
Oh my, this question continues on the next page! ☺
Stack Frame Counts
You can run strange in the terminal with a single command-line argument, which is the argument \( n \) to the strange function. For example, `./strange 3` will run the function with \( n = 3 \). The program will print out the sum of all numbers it computed in recursive calls.
(g) (2 points) Which command-line argument creates the highest number of user-defined stack frames? Start counting from the first call to strange (i.e., don't count main or printf).
- \( 2 \)
- \( 3 \)
- \( 4 \)
- \( 5 \)
(h) (2 points) How many user-defined stack frames are created when executing strange with the command-line argument above? Again, start counting from the first call to strange.
(h) ______________________
(i) (2 points) What is the maximum stack frame depth when executing strange with the command-line argument above? Start counting from the first call to strange.
Note that you can verify this by stopping at the maximum depth, executing the command backtrace, and counting the output lines.
(i) _______________________
8. (10 points) Taking a Step Back
Write 2–4 paragraphs reflecting on the experience of learning the material in this half of the course. This is open-ended; we’re looking for evidence that you took some time to think through how this material (course material, lectures, and labs) personally affected you.
This could be exclusively technical or exclusively socio-technical, but we’d prefer you include elements of both. There isn’t a single correct answer here — we just want you to have some space to reflect on what you’ve accomplished, what felt valuable to you, and what you’d rather do without.
If you’re not sure where to start, we’d recommend following the format below. You aren’t required to use this, but it may be helpful for getting started with the creative process.
We’ve provided space on the next pages for you to hand-write your reflection, if you prefer (legibly, please!). You are not required to fill out all two pages; we’ve intentionally left extra room to accommodate various writing styles. You can also attach typed page(s) to your submission.
We’re expecting this should take you around 30 minute to complete.
Optional Example Format
For each statement, note the degree to which you agree with it; one of Strongly Disagree, Disagree, Somewhat Disagree, Neutral, Somewhat Agree, Agree, Strongly Agree. Then, write a few paragraphs that explain your choice, noting both what might have changed and how you’ve experienced that change.
- From my experiences in this course, my view of low-level programming has changed.
- From my experiences in this course, my understanding of how computation is performed has changed.
- From my experiences in this course, my idea of what it means to be a computer scientist has changed.
- From my experiences in this course, the way that I see myself in computing spaces has changed.
- From my experiences in this course, the way I see myself broadly has changed.
- From my experiences in this course, my career goals, whether for my first job or for my career as a whole, have changed.
Reference Tables
<table>
<thead>
<tr>
<th>Binary</th>
<th>Decimal</th>
<th>Hex</th>
</tr>
</thead>
<tbody>
<tr>
<td>0000</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>0001</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>0010</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>0011</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>0100</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>0101</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>0110</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>0111</td>
<td>7</td>
<td>7</td>
</tr>
<tr>
<td>1000</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<td>1001</td>
<td>9</td>
<td>9</td>
</tr>
<tr>
<td>1010</td>
<td>10</td>
<td>A</td>
</tr>
<tr>
<td>1011</td>
<td>11</td>
<td>B</td>
</tr>
<tr>
<td>1100</td>
<td>12</td>
<td>C</td>
</tr>
<tr>
<td>1101</td>
<td>13</td>
<td>D</td>
</tr>
<tr>
<td>1110</td>
<td>14</td>
<td>E</td>
</tr>
<tr>
<td>1111</td>
<td>15</td>
<td>F</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>$2^0$</th>
<th>$2^1$</th>
<th>$2^2$</th>
<th>$2^3$</th>
<th>$2^4$</th>
<th>$2^5$</th>
<th>$2^6$</th>
<th>$2^7$</th>
<th>$2^8$</th>
<th>$2^9$</th>
<th>$2^{10}$</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>4</td>
<td>8</td>
<td>16</td>
<td>32</td>
<td>64</td>
<td>128</td>
<td>256</td>
<td>512</td>
<td>1024</td>
</tr>
</tbody>
</table>
**IEEE 754 FLOATING-POINT STANDARD**
Value: $\pm 1.\text{Mantissa} \times 2^{\text{Exponent}}$
Bit fields: $(-1)^S \times 1.M \times 2^{(E-\text{bias})}$
where Single Precision Bias = 127, Double Precision Bias = 1023.
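As a worked check of these formulas (the bit pattern below is an arbitrary illustration, not taken from the exam):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Worked example: decode the single-precision pattern 0x40A00000.
   S = 0, E = 0b10000001 = 129, M = 0b0100... = 0.25, so
   value = (-1)^0 * 1.25 * 2^(129 - 127) = 5.0. */
int main(void) {
    uint32_t bits = 0x40A00000u;
    float f;
    memcpy(&f, &bits, sizeof f);   /* reinterpret the bits as a float */
    printf("%f\n", f);             /* prints 5.000000 */
    return 0;
}
```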
**IEEE 754 Symbols**
<table>
<thead>
<tr>
<th>E</th>
<th>M</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>all zeros</td>
<td>all zeros</td>
<td>± 0</td>
</tr>
<tr>
<td>all zeros</td>
<td>non-zero</td>
<td>± denorm num</td>
</tr>
<tr>
<td>1 to MAX-1</td>
<td>anything</td>
<td>± norm num</td>
</tr>
<tr>
<td>all ones</td>
<td>all zeros</td>
<td>± ∞</td>
</tr>
<tr>
<td>all ones</td>
<td>non-zero</td>
<td>NaN</td>
</tr>
</tbody>
</table>
**IEEE Single Precision and Double Precision Formats:**
**Single Precision Format:**
<table>
<thead>
<tr>
<th>Field</th>
<th>$S$</th>
<th>E</th>
<th>M</th>
</tr>
</thead>
<tbody>
<tr>
<td>Width</td>
<td>1 bit</td>
<td>8 bits</td>
<td>23 bits</td>
</tr>
<tr>
<td>Bit positions</td>
<td>31</td>
<td>30–23</td>
<td>22–0</td>
</tr>
</tbody>
</table>
**Double Precision Format:**
<table>
<thead>
<tr>
<th>Field</th>
<th>$S$</th>
<th>E</th>
<th>M</th>
</tr>
</thead>
<tbody>
<tr>
<td>Width</td>
<td>1 bit</td>
<td>11 bits</td>
<td>52 bits</td>
</tr>
<tr>
<td>Bit positions</td>
<td>63</td>
<td>62–52</td>
<td>51–0</td>
</tr>
</tbody>
</table>
**Assembly Instructions**
- `mov a, b` Copy from a to b.
- `movs a, b` Copy from a to b with sign extension. Needs two width specifiers.
- `movz a, b` Copy from a to b with zero extension. Needs two width specifiers.
- `lea a, b` Compute address and store in b.
*Note:* the scaling parameter of memory operands can only be 1, 2, 4, or 8.
- `push src` Push src onto the stack and decrement stack pointer.
- `pop dst` Pop from the stack into dst and increment stack pointer.
- `call <func>` Push return address onto stack and jump to a procedure.
- `ret` Pop return address and jump there.
- `add a, b` Add from a to b and store in b (and sets flags).
- `sub a, b` Subtract a from b (compute b-a) and store in b (and sets flags).
- `imul a, b` Multiply a and b and store in b (and sets flags).
- `and a, b` Bitwise AND of a and b, store in b (and sets flags).
- `sar a, b` Shift value of b right (arithmetic) by a bits, store in b (and sets flags).
- `shr a, b` Shift value of b right (logical) by a bits, store in b (and sets flags).
- `shl a, b` Shift value of b left by a bits, store in b (and sets flags).
- `cmp a, b` Compare b with a (compute b-a and set condition codes based on result).
- `test a, b` Bitwise AND of a and b and set condition codes based on result.
- `jmp <label>` Unconditional jump to address.
- `j* <label>` Conditional jump based on condition codes (more on next page).
- `set* a` Set byte a to 0 or 1 based on condition codes.
### Conditionals
<table>
<thead>
<tr>
<th>Instruction</th>
<th>(op) s, d</th>
<th>test a, b</th>
<th>cmp a, b</th>
</tr>
</thead>
<tbody>
<tr>
<td>je sete “Equal”</td>
<td>d (op) s == 0</td>
<td>b & a == 0</td>
<td>b == a</td>
</tr>
<tr>
<td>jne setne “Not equal”</td>
<td>d (op) s != 0</td>
<td>b & a != 0</td>
<td>b != a</td>
</tr>
<tr>
<td>js sets “Sign” (negative)</td>
<td>d (op) s < 0</td>
<td>b & a < 0</td>
<td>b-a < 0</td>
</tr>
<tr>
<td>jns setns (non-negative)</td>
<td>d (op) s >= 0</td>
<td>b & a >= 0</td>
<td>b-a >= 0</td>
</tr>
<tr>
<td>jg setg “Greater”</td>
<td>d (op) s > 0</td>
<td>b & a > 0</td>
<td>b > a</td>
</tr>
<tr>
<td>jge setge “Greater or equal”</td>
<td>d (op) s >= 0</td>
<td>b & a >= 0</td>
<td>b >= a</td>
</tr>
<tr>
<td>jl setl “Less”</td>
<td>d (op) s < 0</td>
<td>b & a < 0</td>
<td>b < a</td>
</tr>
<tr>
<td>jle setle “Less or equal”</td>
<td>d (op) s <= 0</td>
<td>b & a <= 0</td>
<td>b <= a</td>
</tr>
<tr>
<td>ja seta “Above” (unsigned >)</td>
<td>d (op) s > 0U</td>
<td>b & a > 0U</td>
<td>b > U a</td>
</tr>
<tr>
<td>jb setb “Below” (unsigned <)</td>
<td>d (op) s < 0U</td>
<td>b & a < 0U</td>
<td>b < U a</td>
</tr>
</tbody>
</table>
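To connect the cmp row to C, a signed comparison like the one below typically compiles to a cmp followed by jg or jle (illustrative only; exact output varies with compiler and flags):

```c
/* Illustrative compilation sketch; actual assembly varies.
   With a in %edi and b in %esi, a compiler typically emits:
       cmp %esi, %edi     # compute a - b, set condition codes
       jle .else          # signed "less or equal" -> take else
*/
int max(int a, int b) {
    if (a > b)
        return a;
    return b;
}
```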
### Registers
<table>
<thead>
<tr>
<th>Name</th>
<th>Convention</th>
<th>Name of “virtual” register</th>
</tr>
</thead>
<tbody>
<tr>
<td>%rax</td>
<td>Return value – <strong>Caller saved</strong></td>
<td>%eax %ax %al</td>
</tr>
<tr>
<td>%rbx</td>
<td><strong>Callee saved</strong></td>
<td>%ebx %bx %bl</td>
</tr>
<tr>
<td>%rcx</td>
<td>Argument #4 – <strong>Caller saved</strong></td>
<td>%ecx %cx %cl</td>
</tr>
<tr>
<td>%rdx</td>
<td>Argument #3 – <strong>Caller saved</strong></td>
<td>%edx %dx %dl</td>
</tr>
<tr>
<td>%rsi</td>
<td>Argument #2 – <strong>Caller saved</strong></td>
<td>%esi %si %sil</td>
</tr>
<tr>
<td>%rdi</td>
<td>Argument #1 – <strong>Caller saved</strong></td>
<td>%edi %di %dil</td>
</tr>
<tr>
<td>%rsp</td>
<td><strong>Stack Pointer</strong></td>
<td>%esp %sp %spl</td>
</tr>
<tr>
<td>%rbp</td>
<td><strong>Callee saved</strong></td>
<td>%ebp %bp %bpl</td>
</tr>
<tr>
<td>%r8</td>
<td>Argument #5 – <strong>Caller saved</strong></td>
<td>%r8d %r8w %r8b</td>
</tr>
<tr>
<td>%r9</td>
<td>Argument #6 – <strong>Caller saved</strong></td>
<td>%r9d %r9w %r9b</td>
</tr>
<tr>
<td>%r10</td>
<td><strong>Caller saved</strong></td>
<td>%r10d %r10w %r10b</td>
</tr>
<tr>
<td>%r11</td>
<td><strong>Caller saved</strong></td>
<td>%r11d %r11w %r11b</td>
</tr>
<tr>
<td>%r12</td>
<td><strong>Callee saved</strong></td>
<td>%r12d %r12w %r12b</td>
</tr>
<tr>
<td>%r13</td>
<td><strong>Callee saved</strong></td>
<td>%r13d %r13w %r13b</td>
</tr>
<tr>
<td>%r14</td>
<td><strong>Callee saved</strong></td>
<td>%r14d %r14w %r14b</td>
</tr>
<tr>
<td>%r15</td>
<td><strong>Callee saved</strong></td>
<td>%r15d %r15w %r15b</td>
</tr>
</tbody>
</table>
### Sizes
<table>
<thead>
<tr>
<th>C type</th>
<th>x86-64 suffix</th>
<th>Size (bytes)</th>
</tr>
</thead>
<tbody>
<tr>
<td>char</td>
<td>b</td>
<td>1</td>
</tr>
<tr>
<td>short</td>
<td>w</td>
<td>2</td>
</tr>
<tr>
<td>int</td>
<td>l</td>
<td>4</td>
</tr>
<tr>
<td>long</td>
<td>q</td>
<td>8</td>
</tr>
</tbody>
</table>
Applications of Artificial Intelligence and Machine Learning in Othello
Jack Chen
TJHSST Computer Systems Lab 2009–2010
Abstract
This project explores Artificial Intelligence techniques in the board game Othello. Several Othello-playing programs were implemented and compared. The performance of minimax search algorithms, including alpha-beta, NegaScout and MTD(f), and of other search improvements such as transposition tables, was analyzed. In addition, the use of machine learning to enable AI players to improve play automatically through training was investigated.
1 Introduction
Othello (also known as Reversi) is a two-player board game and abstract strategy game, like chess and checkers. I chose to work with Othello because it is sufficiently complex to allow significant exploration of advanced AI techniques, but has a simple set of rules compared to more complex games like chess. It has a moderate branching factor, larger than checkers and smaller than chess, for example, which makes advanced search techniques important without requiring a great deal of computational power for strong play. Although my AI programs are implemented to play Othello, most of the algorithms, data structures, and techniques I have investigated are designed for abstract strategy games in general instead of Othello specifically, and many machine learning algorithms are widely applicable to problems other than games.
2 Background
The basic goal of an AI player is to consider the possible moves from the current game state, evaluate the position resulting from each move, and choose the one that appears best. One major component of an AI player is the static evaluation function, which heuristically estimates the value of a position without exploring moves. This value indicates which player has the advantage and how large that advantage is. A second major component is the search algorithm, which more accurately evaluates a state by looking ahead at potential moves.
3 Static Evaluation
3.1 Features
For positions at the end of a game, the static evaluation is based solely on the number of pieces each player has, but for earlier positions, other positional features must be considered. The primary goals before the end of the game are mobility, stability, and parity. The major features used in my static evaluation function reflect these three goals. The overall evaluation is a linear combination of the features, that is, it is a weighted sum of the feature values. Features that are good have positive weights, while features that are bad have negative weights, and the magnitude of a feature’s weight reflects its importance.
- Mobility
Mobility is a measure of the number of moves available to each player, both at the current position and in the future (potential mobility). Mobility is important because a player with low mobility is more likely to be forced to make a bad move, such as giving up a corner. The goal is to maximize one’s own mobility and minimize the opponent’s mobility.
- Moves
The number of moves each player can make is a measure of current mobility. The moves differential, the number of moves available to the player minus the number of moves available to the opponent, is one of the features used in my static evaluation function. Positions with higher moves differential are better for that player, so this feature has a positive weight.
- Frontier squares
Frontier squares are empty squares adjacent to a player’s pieces. The number of frontier squares is a measure of potential mobility, because the more frontier squares a player has, the more moves the opponent can potentially make. Having fewer frontier squares than the opponent is good, so the frontier squares differential is weighted negatively.
- Stability
Pieces that are impossible to flip are called stable. These pieces are useful because they contribute directly to the final score.
- Corners
Corners are extremely valuable because corner pieces are immediately stable and can make adjacent pieces stable. They have the largest positive weights of all the features I use.
- X-squares
X-squares are the squares diagonally adjacent to corners. X-squares are highly undesirable when the adjacent corner is unoccupied because they make the corner vulnerable to being taken by the opponent, so they have very negative weight.
- C-squares
C-squares are the squares adjacent to corners and on an edge. C-squares adjacent to an unoccupied corner are somewhat undesirable, like X-squares, but they are much less dangerous. In addition, C-squares can contribute to edge stability, which makes them desirable in some cases. Generally, C-squares are weighted fairly negatively, but to a much smaller extent than X-squares.
- Parity
Global parity is the strategic concept that the last player to move in the game has a slight advantage because all of the pieces gained become stable. White therefore has an advantage over black, as long as the parity is not reversed by passes. In addition, in the endgame, empty squares tend to separate into disjoint regions. Local parity is based on the idea that the last player to move in each region has an advantage because the pieces that player gains are often stable. I use global parity as a feature, but do not consider local parity.
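Concretely, the weighted sum described at the start of this section might look like the following sketch (the feature set and weight values are illustrative, not the paper's actual numbers):

```c
/* Illustrative feature vector for one position, from the side of
   the player to move; the extraction code is omitted. */
typedef struct {
    int moves_diff;     /* current mobility differential   */
    int frontier_diff;  /* potential mobility differential */
    int corners_diff;   /* corner differential             */
    int xsquares_diff;  /* X-squares next to empty corners */
    int parity;         /* +1 if this player moves last    */
} Features;

/* Linear combination: good features carry positive weights, bad
   ones negative, with magnitude reflecting importance. */
double evaluate(const Features *f, const double w[5]) {
    return w[0] * f->moves_diff
         + w[1] * f->frontier_diff
         + w[2] * f->corners_diff
         + w[3] * f->xsquares_diff
         + w[4] * f->parity;
}
```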
3.2 Game Stages
The importance of the features used in the static evaluation function depends on the stage of the game. For example, one common strategy is to minimize the number of pieces one has early in the game, as this tends to improve mobility, even though this is contrary to the ultimate goal of the game. It is useful, then, to have different feature weights for different game stages. In Othello, the total number of pieces on the board is a good measure of the game stage.
4 Search Algorithms and Data Structures
Static evaluation is often inaccurate. For example, it is difficult to detect traps statically. When evaluating a position, it is therefore important to consider possible moves, the possible moves in response to each of those moves, and so on. This forms a game tree of the possible sequences of moves from the initial game state.
4.1 Minimax
The minimax search algorithm is the basic algorithm to do this exploration of the game tree. Minimax recursively evaluates a position by taking the best of the values for each child position. The best value is the maximum for one player and the minimum for the other player, because positions that are good for one player are bad for the other.
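As a concrete sketch of this procedure, the following C fragment implements plain minimax; GameState and the four helper functions are assumed stand-ins, since the paper does not show its code:

```c
#include <limits.h>
#include <stdint.h>

/* Assumed, illustrative types and helpers; the paper's own engine
   code is not shown, so these are stand-ins. */
typedef struct { uint64_t black, white; int to_move; } GameState;
int game_over(const GameState *s);
int static_eval(const GameState *s);        /* heuristic leaf value */
int num_moves(const GameState *s);
GameState apply_move(const GameState *s, int m);

int minimax(GameState s, int depth, int maximizing) {
    if (depth == 0 || game_over(&s))
        return static_eval(&s);             /* evaluate at the horizon */
    int best = maximizing ? INT_MIN : INT_MAX;
    int n = num_moves(&s);
    for (int m = 0; m < n; m++) {
        GameState child = apply_move(&s, m);
        int v = minimax(child, depth - 1, !maximizing);
        if (maximizing ? v > best : v < best)
            best = v;       /* max for one player, min for the other */
    }
    return best;
}
```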
The number of nodes searched grows exponentially with search depth, which is measured in ply (one ply is a move by one player). The rate of growth is the branching factor, which is the average number of children of each node, or the average number of moves from each position. In Othello, the branching factor averages about 10, although it tends to be higher in the midgame and lower in the endgame. Because of this extremely large growth rate, searching the entire game tree down to the end of the game is not practical. Therefore, minimax search can only look a limited number of moves ahead. The terminal positions at the end of this lookahead are evaluated with the static evaluation function.
Figure 1 shows a minimax search on an example game tree. Player 1 is the minimizing player and player 2 is the maximizing player. The letters represent game states, and the numbers next to each node represent its evaluation. The search has depth 3, and the values of the leaf nodes represent values from a static evaluation function.
Figure 2 shows the performance of minimax search at various search depths. The plots show the average time required per game (in seconds) when using a minimax search of fixed depth to play several randomized games, on linear and logarithmic scales. The time grows exponentially with a base of about 10, which matches the average branching factor.
4.2 Alpha-Beta Pruning
There are many minimax algorithms that are much faster than naive minimax. Alpha-beta pruning is an extremely important improvement on which several others depend. Alpha-beta search greatly reduces the number of nodes in the game tree that must be searched. This search algorithm maintains two values, alpha and beta, that represent the window between the best values the players can be assured of from the search so far. If the algorithm finds a bound for a node’s value that is outside the alpha-beta window, then the node and its subtree can be safely pruned because the node’s value cannot affect the value of the root.
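A sketch of the pruning logic, reusing the assumed helpers from the minimax sketch above:

```c
/* Alpha-beta sketch; [alpha, beta] is the window of values that can
   still matter to the root. */
int alphabeta(GameState s, int depth, int alpha, int beta, int maximizing) {
    if (depth == 0 || game_over(&s))
        return static_eval(&s);
    int n = num_moves(&s);
    for (int m = 0; m < n; m++) {
        GameState child = apply_move(&s, m);
        int v = alphabeta(child, depth - 1, alpha, beta, !maximizing);
        if (maximizing && v > alpha) alpha = v;   /* raise the floor */
        if (!maximizing && v < beta) beta = v;    /* lower the ceiling */
        if (alpha >= beta)
            break;          /* value falls outside the window: prune */
    }
    return maximizing ? alpha : beta;
}
```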
Figure 3 shows an alpha-beta search on the same game tree shown in Figure 1. Nodes L and G, along with their subtrees, are pruned, significantly reducing the computation time spent searching this game tree.

In the best case, if moves are searched in order from best to worst, then the effective branching factor of alpha-beta search is reduced to the square root of naive minimax’s branching factor, meaning a search twice as deep is possible with about the same computation time. Of course, the correct ordering of the moves is not known, or a search would be unnecessary. However, even with random move ordering, alpha-beta pruning dramatically reduces the number of nodes searched. There are many methods that can be used to improve move ordering, such as previous estimates from shallower searches in iterative deepening, killer moves, and history heuristics.
Figure 4 compares the performance of alpha-beta and minimax at various search depths. For alpha-beta, the time grows exponentially with a base of about 5, which is a huge improvement over minimax’s branching factor of approximately 10.
4.3 Transposition Table
Move sequences that result in the same position are called transpositions. For example, the two opening move sequences shown in Figure 5 result in the same position.
An important way to improve search speed is to cache information about positions that have already been searched in a data structure called a transposition table. Transpositions could cause a program to repeatedly analyze the same position. Storing previous results in a transposition table allows the program to avoid this problem. In addition, a transposition table can be used to improve move ordering by storing the best move found for each position and searching this move first. This is especially useful with iterative deepening, as the best moves found in shallow searches often remain good moves for deeper searches. I found that this improved move ordering is a much more important use of the transposition table in terms of increasing search speed.
Figure 4: Comparison of time required for alpha-beta and minimax search vs. search depth
Figure 5: An example transposition
The transposition table is implemented as a hash table to allow efficient access. One useful method for hashing positions in games like Othello and Chess is Zobrist hashing [12]. A Zobrist hash consists of an XOR sum of several bitstrings. For each square on the board, there is one randomly generated bitstring representing a black piece and another representing a white piece. A position’s Zobrist hash is formed by XORing together the appropriate bitstrings. The primary benefit of Zobrist hashing is that it can be incrementally updated very quickly by XORing it with the bitstrings for the pieces that have changed. Zobrist hashes also have the advantage of uniform distribution.
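A minimal sketch of the scheme as described (rand64 is an assumed PRNG helper):

```c
#include <stdint.h>

uint64_t rand64(void);                 /* assumed 64-bit PRNG helper */

/* One random bitstring per (square, color), as described above. */
static uint64_t zobrist[64][2];

void zobrist_init(void) {
    for (int sq = 0; sq < 64; sq++)
        for (int c = 0; c < 2; c++)
            zobrist[sq][c] = rand64();
}

/* Incremental update: XOR toggles one piece in or out of the hash,
   so placing or flipping a piece costs a single XOR. */
uint64_t zobrist_toggle(uint64_t h, int sq, int color) {
    return h ^ zobrist[sq][color];
}
```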
Alpha-beta search may not yield an exact value if the true value lies outside the alpha-beta window. Instead, the search may yield only an upper or lower bound. However, this limited information is still useful in later searches, as it can reduce the alpha-beta window size or result in cutoffs.
For each position in the transposition table, the following information is stored:
- The hash key or another hash of the position, which is used to detect collisions. This takes less memory than storing the entire board, although there is a possibility that two different positions will have the same hashes. Such a collision would cause inaccurate information to be used. However, with a sufficiently long hash key, such as the 64-bit hash keys I use, the probability of a collision is extremely small and is outweighed by the time and memory savings.
- Information about the position’s value from previous searches, including an exact value, an upper bound, or a lower bound.
- The best move found so far.
- The depth of the search this information is from. If the depth is less than the depth of the current search, the evaluation information should not be used, but the move information can still be used to improve move ordering.
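One possible C layout for the fields just listed; the exact widths are a design choice, not taken from the paper:

```c
#include <stdint.h>

/* Illustrative transposition-table entry. flag distinguishes exact
   values from bounds, since alpha-beta may only establish an upper
   or lower bound on a position's value. */
typedef struct {
    uint64_t key;       /* 64-bit hash key, used to detect collisions */
    int16_t  value;     /* exact value or bound from a prior search   */
    uint8_t  flag;      /* 0 = exact, 1 = lower bound, 2 = upper bound */
    uint8_t  depth;     /* depth of the search this entry came from   */
    uint8_t  best_move; /* best move found, reused for move ordering  */
} TTEntry;
```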
Figure 6 shows the performance of alpha-beta search with and without a transposition table for memoization and for move ordering. Based on these results, search with memory is an average of 1.5 to 2 times as fast.
Figure 6: Comparison of time required for alpha-beta search with and without a transposition table vs. search depth
4.4 NegaScout
NegaScout [11] (which is similar to Principal Variation Search [9]) is an enhancement of alpha-beta search that can reduce the number of nodes that must be searched. NegaScout searches the first move for a node with a normal alpha-beta window. It then assumes that the next moves are worse, which is often true with good move ordering. For the remaining moves, it uses a null-window search, in which the alpha-beta window has zero width, to test whether this is true. The value must lie outside the null window, so the null-window search must fail. However, this yields a lower or upper bound on the value if the null-window search fails high or low, respectively. If it fails low, then the test successfully shows that the move is worse than the current alpha and therefore does not need to be further considered. Otherwise, the test shows that the move is better than the current alpha, so it must be re-searched with a full window.
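A negamax-style sketch of this strategy, again over the assumed helpers from the minimax sketch (here static_eval is additionally assumed to score from the side to move, as negamax requires):

```c
/* NegaScout / PVS sketch. The first child gets a full window; later
   children get a null-window test, re-searched only on fail-high. */
int negascout(GameState s, int depth, int alpha, int beta) {
    if (depth == 0 || game_over(&s))
        return static_eval(&s);      /* assumed side-to-move relative */
    int n = num_moves(&s);
    for (int m = 0; m < n; m++) {
        GameState child = apply_move(&s, m);
        int v;
        if (m == 0) {
            v = -negascout(child, depth - 1, -beta, -alpha);
        } else {
            /* zero-width window: test "is this move worse than alpha?" */
            v = -negascout(child, depth - 1, -alpha - 1, -alpha);
            if (v > alpha && v < beta)   /* test failed high: re-search */
                v = -negascout(child, depth - 1, -beta, -alpha);
        }
        if (v > alpha) alpha = v;
        if (alpha >= beta) break;        /* cutoff */
    }
    return alpha;
}
```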
Null-window searches are faster because they produce many more cutoffs. However, even though NegaScout never explores nodes that alpha-beta prunes, NegaScout may be slower because it may need to re-search nodes several times when null-window searches fail high. If move ordering is good, NegaScout searches faster than alpha-beta pruning, but if move ordering is poor, NegaScout can be slower. See Section 4.6 for analysis of the performance of NegaScout.
A transposition table is even more beneficial to NegaScout than to alpha-beta because stored information can be used during re-searches, for example, to prevent re-evaluation of leaf nodes.
4.5 MTD(f)
MTD(f) [10] is another search algorithm that is more efficient than alpha-beta and outperforms NegaScout. It is efficient because it uses only null-window searches, which result in many more cutoffs than wide-window alpha-beta searches. Each null-window search yields a bound on the minimax value, so MTD(f) uses repeated null-window searches to converge on the exact minimax value. Because many nodes need to be re-evaluated several times, a transposition table is crucial for MTD(f) to prevent excessive re-searches.
MTD(f) starts its search at a given value, f. The speed of MTD(f) depends heavily on how close this first guess is to the actual value, as the closer it is the fewer null-window searches are necessary. It is therefore useful to use iterative deepening on MTD(f), using the value of the previous iteration as the first guess for the next iteration.
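A compact sketch of the convergence loop (alphabeta_mem is an assumed alpha-beta search that consults the transposition table):

```c
#include <limits.h>

/* Assumed: alpha-beta "with memory", i.e., backed by the
   transposition table described in Section 4.3. */
int alphabeta_mem(GameState s, int depth, int alpha, int beta);

/* MTD(f): repeated null-window probes tighten [lower, upper]
   around the true minimax value; f is the first guess. */
int mtdf(GameState s, int depth, int f) {
    int g = f, lower = INT_MIN + 1, upper = INT_MAX - 1;
    while (lower < upper) {
        int beta = (g == lower) ? g + 1 : g;    /* null window at g */
        g = alphabeta_mem(s, depth, beta - 1, beta);
        if (g < beta) upper = g;                /* probe failed low  */
        else          lower = g;                /* probe failed high */
    }
    return g;
}
```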
While MTD(f) is theoretically more efficient than alpha-beta and NegaScout, it has some practical issues, such as heavy reliance on the transposition table and search instability. The performance of MTD(f) is discussed in Section 4.6.
### 4.6 Performance Analysis
Figure 7 compares the performance of NegaScout, MTD(f), and alpha-beta search. All of these searches are done with a transposition table. Different searches were done with transposition table sizes of $2^{20}$, $2^{22}$, or $2^{24}$ positions, and the plotted time is the minimum of these times. These results indicate that NegaScout is ineffective at small depths, but is significantly faster than alpha-beta on deeper searches. MTD(f) is faster than both alpha-beta and NegaScout overall.

Figure 7: Time for NegaScout and MTD(f) relative to alpha-beta
However, as search depth increases, the relative speed of MTD(f) worsens. This is due to its heavy transposition table dependence, as can be seen in Figure 8, which shows the performance of NegaScout and MTD(f) with different transposition table sizes. When the transposition table is too small, MTD(f) takes a heavy performance hit from repeated re-searching. For 9-ply MTD(f) searches, a size of $2^{22}$ appears sufficient, while for 10-ply searches, this is also too small. On the other hand, NegaScout is barely affected by these changes in transposition table size.
5 Bitboards
A bitboard is a data structure that can be used to represent a game board in which each bit corresponds to one of the squares on the board and indicates whether or not a certain piece is on that square. For Othello, I use bitboards consisting of two 64-bit bitstrings. One of the bitstrings represents black’s pieces and another represents white’s pieces, and each bit of these bitstrings represents one of the 64 squares on the board. The use of bit manipulation techniques on bitboards allows great speed improvements in certain operations, such as finding the possible moves and counting frontier squares. Bitboards are also advantageous in terms of memory use, since they are very compact. The use of bitboard optimizations made the AI player about 5 times as fast, as shown in Figure 9. This is enough to search about one ply deeper, a significant advantage.
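A minimal illustration of the representation (the square-to-bit mapping and the direction helper are illustrative choices):

```c
#include <stdint.h>

/* One 64-bit mask per color, one bit per square (bit 0 = a1 here;
   the mapping is an illustrative choice). */
typedef struct { uint64_t black, white; } Board;

#define FILE_A 0x0101010101010101ULL

/* Shift every piece one square east; the mask stops wrap-around
   from the h-file onto the next rank's a-file. */
static uint64_t shift_east(uint64_t b) {
    return (b << 1) & ~FILE_A;
}

/* Empty squares just east of black pieces: one direction's worth of
   a frontier-square count. A full count ORs all eight directions
   before counting bits. */
static uint64_t frontier_east(const Board *bd) {
    uint64_t empty = ~(bd->black | bd->white);
    return shift_east(bd->black) & empty;
}
```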
6 Time Management
My time management strategy dynamically allocates a certain amount of time to each move based on the amount of time available and the number of moves remaining in the game. If a move takes more or less time than allocated, or if the number of moves that my AI player needs to make changes due to passes, the time allocated to later moves changes accordingly.
Once time is allocated for a move, my AI player searches with iterative deepening as deeply as possible given the time limit. One simple way to do this is to continue searching until all of the allocated time has been spent, then aborting the search immediately. However, this wastes some time spent on the final search iteration. To help avoid aborting a search, my time management strategy estimates the branching factor, based on the number of moves available in the leaf nodes of the previous search iteration, and uses this to predict the time for the next search iteration. If this search is predicted to exceed the allocated time, then the iterative deepening is ended, thus saving the remaining allocated time for future moves.
It is possible to estimate the time required to search to a given depth without iterative deepening based on the branching factor and time of earlier searches. However, the branching factor depends on several factors including the search depth and the game stage, so iterative deepening allows more accurate time estimation. In addition, iterative deepening helps increase search efficiency by improving move ordering. Although nodes at the first few levels may be re-searched several times, the time spent searching at lower depths is much less than the time spent on the deepest search iteration whenever the branching factor is reasonably large, so the extra time spent on shallower searches is usually outweighed by the time saved on deeper searches. During the endgame, the branching factor is low enough that this is not the case, so my AI player does a full-depth search to the end of the game instead of the normal iterative deepening search. This endgame search also is much faster per node because all static evaluation at the terminal nodes is based solely on the number of pieces each player has.
7 Training
Initially, my static evaluation function’s feature weights were set manually based on human strategy, and hand-tuned somewhat with a manual hill-climbing process. However, this process is slow and ineffective. A much better way to set feature weights is to use machine learning to automatically train the static evaluation function by optimizing the feature weights.
First, I generated a large set of example positions by playing several thousand games with a stochastic version of my untrained AI player. In order to generate a diverse training set that also reflects realistic play, this AI chooses moves with probability based on the move’s evaluation, with high probability for the best move(s) and decreasing probability for increasingly suboptimal moves. Later, after training using these examples, I used the trained AI player to generate additional example positions in a bootstrapping process.
I divided the game into 59 stages, each stage representing positions with a certain number of total pieces from 5 to 63, and trained a separate set of weights for each stage. The training process starts with the last stage and proceeds through the game stages in reverse order. For each stage, the set of example positions matching the stage are evaluated with a fairly deep search. For the last few stages of the game, these evaluations are exact because the search can reach the end of the game. As earlier stages are trained, the leaf nodes of the searches are statically evaluated with the weights for a later game stage, which have already been trained, making the evaluations quite accurate. These evaluations are the target values for the static evaluation function at the current game stage. To optimize the current stage's weights, I used a batch gradient descent method.
7.1 Gradient Descent
Gradient descent is an optimization algorithm that finds a local minimum of a function by taking steps in the direction of steepest descent, proportional to the negative gradient of the function. For example, Figure 10 shows a contour plot of the function being optimized with the sequence of steps taken by the gradient descent algorithm. The steps are orthogonal to the contours because they are in the direction of the gradient, and they become progressively smaller as the algorithm converges on the local optimum.

As applied to static evaluation training, the function to minimize is a measure of the error for a given set of weights, based on the difference between the target value for each position and the value produced by the static evaluation function with the given weights. The gradient descent method starts with an arbitrary set of weights and then repeatedly takes steps in the direction that reduces error most until it reaches convergence at a local minimum.
A basic form of gradient descent takes steps of size directly proportional to the magnitude of the gradient, with a fixed constant of proportionality called the learning rate. However, a learning rate that is too small may result in extremely slow convergence, while a learning rate that is too large may converge to a poor solution or fail to converge at all. The use of a dynamic learning rate helps to avoid these problems. In each step, a line search is used to determine a location along the direction of steepest descent that loosely minimizes the error.
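A sketch of one step with a crude backtracking line search standing in for the loose minimization described above (the batch error and gradient functions are assumed helpers):

```c
#define NUM_WEIGHTS 16                      /* illustrative size */

double batch_error(const double w[]);                /* assumed */
void   batch_gradient(const double w[], double g[]); /* assumed */

/* One gradient-descent step: move against the gradient, halving
   the step until the batch error improves (a loose line search). */
void gd_step(double w[]) {
    double g[NUM_WEIGHTS], trial[NUM_WEIGHTS];
    batch_gradient(w, g);
    double base = batch_error(w);
    for (double step = 1.0; step > 1e-12; step *= 0.5) {
        for (int i = 0; i < NUM_WEIGHTS; i++)
            trial[i] = w[i] - step * g[i];
        if (batch_error(trial) < base)
            break;                          /* improvement found */
    }
    for (int i = 0; i < NUM_WEIGHTS; i++)
        w[i] = trial[i];
}
```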
To illustrate the benefits of a dynamic learning rate, I compared the performance of training with the line search method and with two fixed learning rates. Training with a small fixed learning rate of 0.003 converged to almost the same solutions as the method with a line search, but in about four times as many steps. Although using a line search requires more time per step, the line search method still completed training in about half as much time. On the other hand, a method with a slightly larger fixed learning rate of 0.005 failed to converge when training weights for the last stages of the game. As the weights for all earlier stages of the game depend on these, the training results are all highly suboptimal. Figure 11 shows the error at each of the first few steps in the training of the last game stage (63 total pieces) with these fixed learning rates. As we can see, in the training with learning rate 0.005, the error grows exponentially. To see why, we plot the value for one weight at each of the first few steps in Figure 12. After a few steps, the value for this weight began oscillating between positive and negative values with exponentially growing magnitude. The other weights behaved similarly.

Figure 11: (a) Learning rate 0.003; (b) Learning rate 0.005
7.2 Overfitting and Cross-Validation
The goal of training is to not only have the static evaluation function fit the example positions well, but to generalize this learning to fit other positions well. However, if a game results in positions unlike the training examples, the fit may be poor. This is especially problematic when the number of examples is too small, which can result in overfitting to the examples that actually worsens performance on general positions.
To avoid overfitting, the example positions are partitioned into a training set and a validation set. The training is done with the examples in the training set only, and after each step in the training, performance is evaluated on the validation set. If this performance stops improving, even if performance on the training set is still decreasing, training is stopped. This early stopping method is a simple and effective way to prevent overfitting.
### 7.3 Training Results
Figure 13 shows the trained weights for some of the more important features over each game stage. We can see that corners and adjacent squares are extremely important, especially early in the game, while moves become increasingly important near the end of the game. Frontier squares are consistently weighted slightly negatively.
### 8 Referee
Another major part of this project was the development of an Othello referee program to run the game. The referee acts as the interface between two players playing a game. The referee supports AI player programs written in multiple programming languages as well as human players. It keeps track of the game state, enforces rules and time limits, and handles scoring. The referee has a graphical user interface that displays the board, animates the players' moves, and allows a human to play easily by clicking on the board. The referee is highly customizable and supports a number of additional features, such as tournaments between several players and automatic statistical testing to determine whether one player plays significantly better than another.
Figure 13: Trained weights for selected features over each game stage
9 Conclusions
I implemented and compared several Othello programs using various static evaluation functions, search algorithms, and data structures. Search improvements such as transposition tables and bitboards greatly improve performance, and efficient search algorithms such as NegaScout and MTD(f) are much faster than the basic alpha-beta search algorithm. I found that MTD(f) outperformed the other search algorithms I tested. The use of machine learning to optimize the static evaluation function was successful in improving the static evaluation’s accuracy, resulting in better play.
My best AI players were fairly strong, able to easily defeat me and other amateur human players even with extremely small time limits. However, they were not nearly as strong as Othello programs such as Michael Buro’s Logistello [6]. There are many other AI techniques that could be explored in future research.
10 Future Research
Selective search algorithms, such as ProbCut [2] and Multi-ProbCut [4] can further enhance game-tree search by pruning parts of the game tree that probably will not affect the overall minimax value. This allows the player to search much deeper in the relevant parts of the game tree.
Opening books [5] allow much better and faster play in the early game by storing previously computed information about early game board states.
Another potential area of investigation is parallelization. Splitting up searches between several processors can greatly increase the search speed.
Traditionally, the static evaluation function is based on human knowledge about the game. In Othello, static evaluation is usually based on features related to human goals, such as mobility, stability, and parity. However, using pattern-based features as discussed in [3] can improve static evaluation.
There are several machine learning techniques that can be applied to the training of the static evaluation function. Among the algorithms I investigated but did not implement are genetic algorithms, particle swarm optimization, and artificial neural networks.
There are many other machine learning methods that can be used to improve the quality and speed of an AI player based on experience. For example, [8] describes the use of an experience base to augment a non-learning Othello program, and [1] describes a chess program that learns as it plays games using the TDLeaf(\(\lambda\)) algorithm. Another interesting idea is to learn a model of an opponent’s strategy and incorporate that into the minimax search [7].
References
Analyzing Interaction Flow Modeling Language in Web Development Lifecycle
Karzan Wakis1,2,3 and Dayang N.A. Jawawi1
1Universiti Teknologi Malaysia Skudai 81310 Johor Malaysia
2Sualimani Polytechnic University-Iraq
3University of Human Development-Iraq
Abstract—Two years ago, the Object Management Group (OMG) adopted a new standard method named Interaction Flow Modeling Language (IFML) for the web engineering domain. IFML is designed to express the content, user interaction, and control behavior of the front end of applications. There is no lack of web engineering methods; however, each of them is defined for particular specifications, and one open issue is supporting the whole lifecycle of the development process. In this paper, we analyze IFML models in the development process lifecycle to show the capability of the method across the development process. We then make a comparison between IFML and other methods in the lifecycle phases. Finally, we add IFML to the web engineering lifecycle's map. It is anticipated that the results of this paper will serve as a guide for developers using IFML in the development of new applications.
Keywords—Interaction Flow Modeling Language; IFML; Web Engineering Methods; Web Development Lifecycle
I. INTRODUCTION
Model Driven Web Engineering (MDWE) methods such as WebML [1], UWE [2] or OOHDM [3] have become mature solutions for developing web applications. These methods utilize Model Driven Development (MDD) concepts to capture advanced web application ideas in models, and then use such models to obtain the application automatically. The classic MDWE development process consists of three phases [4]: (1) building a domain model, (2) defining a hypertext model and (3) defining the application's look and feel. The process outcome is a set of models with the capacity to create the ultimate web application via code generation.
As shown in [5], several methods created for the design of hypermedia systems only partially cover the hypermedia lifecycle and are highly centered on the configuration of these systems, as evident from Fig. 1. Only recently, in 2014, did the OMG adopt a novel standard method named IFML for the web domain, proposed by Marco Brambilla [6].
There exist several gaps within the field of web engineering methods, one of them being that no single method considers the entire development lifecycle thoroughly; each method has its particular strengths [7], as evident from Fig. 1. Within the web engineering community, several research groups have worked toward sustainable solutions to these variations. Some gaps were addressed by merging two methods, such as the RUX-Method with UWE to support Rich Internet Applications (RIA) [8], while others were addressed through enhanced methods, such as extending UWE metamodels to establish novel modules of websites [9], although none of these efforts solved all the gaps completely. After numerous refinements, Marco Brambilla proposed IFML on the basis of ten years of experience with WebRatio and WebML [6], previous research having confirmed WebML to be among the most accurate web engineering approaches [10-11].
MDWE [12] offers the tools and methodologies engaged in the structuring and development of various types of web application. Researchers cover various issues by engaging diverse models (presentation, navigation, and data, among others), with support from model compilers capable of automatically generating much of the logic code and many of the web pages of the application. The advantages of engaging MDWE are evident from diverse perspectives such as software quality, team output, and adjustment to evolving technologies [13-14]. Among these diverse MDWE methods, it is worth describing IFML [15], an Object Management Group specification for the establishment of data-intensive applications, which has become a key reference within the industry [16-17]. Its efficient creation tool, WebRatio, permits the editing and validation of IFML models besides facilitating the generation of the final application code for a given technical exploitation platform, minimizing the time-to-market as well as the development effort for such applications.
IFML was designed after ten years' experience with one of the best methods and managed to solve some gaps in the existing methods. However, our main contribution is locating IFML among web engineering methods across the phases of the web application development process. In this paper, we analyze IFML with respect to the web application development process. This involves demonstrating the capability of IFML to support all phases of web engineering in a lifecycle, in order to determine IFML's location on the lifecycle map. In future research, we will make a comparison between IFML and the other web engineering methods across the lifecycle.
The paper is organized as follows: Section 2 explains the background work undertaken for the web engineering lifecycle and IFML. In Section 3 we conduct an analysis of web engineering methods' support for the lifecycle. In Section 4, we analyze the ability of IFML to support the lifecycle. Section 5 describes the addition of IFML to the lifecycle map and compares IFML with other web engineering methods in supporting the lifecycle phases. In Section 6 we design a case study using IFML to support our results from the previous sections. In the last section, we present some concluding remarks and suggestions for future research.
II. RELATED WORK
In this section we discuss the web engineering lifecycle phases and the effectiveness of methods in the web application development process, as well as previous work done with IFML. Optimization of development effort in the web engineering domain has been addressed by several works. In [18] the researchers centered on examining the effect of engaging an MDWE method compared with customary web development. The researchers achieved a significant productivity benefit by engaging their model driven approach. Moreover, a detailed literature review of MDWE in [19] identifies process development and lifecycle agility as one of the pillars of this area.
For quite some time, there has been escalating growth in the methods and approaches proposed within the professional and academic literature as attempts to handle particular features of web development. Among the most significant challenges facing web-based system design and development are intricate interfaces, navigation, complex maintenance, security concerns, and indefinite remote users; although these proposals offer solutions to problems, they equally carry limitations, limited lifecycle coverage being among them [20-22]. In their study, Lang and Fitzgerald [23] present a comprehensive list of methods and approaches for Web/hypermedia systems development. A depiction and comparative assessment of the renowned web development methods can be found in [24].
An important observation in [20], as seen in Fig. 1, is the varied coverage of the development phases by the methods. In Fig. 1, each approach is located in the phase where its main focus lies. Thus, although the UWA Project [25] and WebML [26] give some consideration to requirements definition and implementation, they mainly emphasize the analysis and design phase. As can be seen, the majority of web development methods are concentrated within the analysis and design phase, with noticeably less focus on the other phases of the life cycle.
We come back to IFML; it has good features for developing web applications, is especially rich in interface modeling, and can easily support RIA. Marco Brambilla and Piero Fraternali, 2014 [15] explain most IFML concepts in a book, covering the metamodels, the development process for web and mobile applications, capability extensions, and so on. Another work applies object-oriented analysis and design with IFML to developing information systems [27]. The authors of [28-29] used IFML to develop mobile applications with WebRatio. However, since the invention of IFML, no work has addressed it in the lifecycle development process; we therefore need to explain this method and present its capability in the development process lifecycle.
III. MDWE LIFE CYCLE
In this section we explain where current web engineering methods are located in the lifecycle, and we attempt to present the capability of these methods across the lifecycle of the web application development process.
In their study, the authors of [30] offered an extended lifecycle process model for the development of web-based applications within small and medium enterprises. The model comprised three sets of processes, including requirements, development, and evolution processes. Predominantly, the significance of the post-delivery evolution process to small and medium enterprises lies in developing and maintaining quality web applications with scarce resources and time [30]. Other researchers employed what are commonly identified as mockups (user interface prototypes) as an approach to commencing the modeling process within the framework of an integrated agile MDWE method [31]. To aid this method, that study incorporated a lightweight metamodel that supports modeling aspects over mockups, creating end-user interfaces as well as creating MDWE models.
Furthermore, this study included a statistical assessment of the two methods (traditional modeling versus mockup-based modeling) [31]. In [7], a combination model was offered with the objective of covering the lifecycle, suggesting three web engineering approaches (UWE, NDT, and WebML) to handle the lifecycle, as expressed in Fig. 2. Although this idea is excellent, it is equally intricate in the implementation phase, since it requires a novel transformation model and lacks a tool supporting the implementation of the concept.
Model Driven Architecture (MDA) does not only entail modeling; it is unfeasible to anticipate 100% code generation for all computing problems, and presently no vendor can practically offer an absolute MDA solution. Therefore, inflated expectations of MDA would likely result in failure. Simply put, MDA facilitates an approach to system design and development that engages several standard tools and notations to achieve interoperability and reuse among vendors, as well as platform independence. In order to achieve the complete MDA benefits, institutions should not simply incorporate some modeling process within their creation methods; they should equally promote the complete software lifecycle development process, from requirements management and analysis, to configuration, creation, execution, deployment, and maintenance. Otherwise the complete MDA benefits will be lost [32].
Having offered a quick evaluation of the MDWE lifecycle above, it is worth concluding that the web engineering methods have certain strengths in particular lifecycle phases, while the assessment phase is missing from all methods. Therefore, the proposed model merges two or three methods to have these setbacks handled. The subsequent chapter analyzes IFML in comparison to other methods so as to exemplify its capacity against other methods.
IV. ANALYZING IFML IN LIFECYCLE
This section centers on the models and components of IFML associated with the lifecycle development process.
A. General Overview
IFML [15] has been confirmed to enhance the platform-independent delineation of Graphical User Interfaces (GUIs) for applications accessed or installed on systems such as laptops, desktop computers, tablets, mobile phones and PDAs. The key focus is on the application's behavior and structure as observed by the end user. The modeling language equally integrates references to the data and business logic influencing the user's experience. This is attained by referencing the domain model objects that offer the content presented in the interface, as well as the actions capable of being triggered through interface interaction.
B. IFML Artifacts
The IFML language is specified within an official, human-readable OMG specification document, which in turn is accompanied by some technical artifacts:
- The IFML metamodel, specifying the structure and relations between the IFML elements;
- The IFML UML profile, defining a UML-based syntax for expressing IFML models, through an extension of the concepts of the class, state machine, and composite structure diagrams;
- The IFML visual syntax, offering a graphic notation for expressing IFML models in a concise and intuitive way; and
- The IFML model serialization and exchange format, for tool portability.

Altogether, these artifacts compose the IFML language specification. Each of them is specified according to the OMG standards:
• The metamodel is defined through the MOF metamodeling language (an equivalent ECORE definition is available too).
• The UML profile is defined according to UML 2.4 profiling rules.
• The visual syntax is defined through the Diagram Definition (DD) and Diagram Interchange (DI) OMG standards.
• The model serialization and exchange format is defined based on XMI.
TABLE I. DEVELOPMENT METHODS PROCESS STAGE [37]
<table>
<thead>
<tr>
<th></th>
<th>Formulation</th>
<th>Planning</th>
<th>Analysis</th>
<th>Design</th>
<th>Testing</th>
<th>Evaluation</th>
</tr>
</thead>
<tbody>
<tr>
<td>HDM</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
</tr>
<tr>
<td>RMM</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
</tr>
<tr>
<td>OOHDM</td>
<td>P</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
</tr>
<tr>
<td>HMBS</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
</tr>
<tr>
<td>UWE</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
</tr>
<tr>
<td>WebML</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
</tr>
<tr>
<td>OO-H</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
</tr>
<tr>
<td>W2000</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
</tr>
<tr>
<td>WAE</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
</tr>
<tr>
<td>SWM</td>
<td>P</td>
<td>P</td>
<td>P</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>OOWS</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
</tr>
</tbody>
</table>
Table 1 summarizes additional studies by Domingues [34] and Koch [35]. The table considers the phases of the development methods suggested by Pressman [36], which include "(i) formulation; (ii) planning; (iii) analysis; (iv) design (architectural, navigational and interface); (v) page generation; (vi) testing; and (vii) customer assessment." The following notation is used in the table: C, if the method fully fulfils the development stage; P, if the stage is partially fulfilled; and blank when the method does not deal with the activity.
Fig. 2. Use common metamodels to make approaches compatible [7]
The RIA development process is founded on the MDA idea. That is, it decomposes the system concept by producing distinct system models at diverse abstraction levels. Subsequently, model transformation is applied during the development lifecycle, with standard patterns or rules of transformation. Not only do the models assist in describing the system idea at diverse stages, they also play a role in automated code generation. For the purposes of conforming to MDA, the models utilized are categorized into three kinds: computation independent, platform independent, and platform specific. There exist tools for developing each model. Fig. 1 exemplifies both the MDA-compliant process and the system development steps. As evident from Fig. 1, no single method addresses the whole development lifecycle in detail, while each method holds its own distinct benefits [33].
Fig. 3. Model-Driven development process overview [32]
C. Metamodels
The IFML metamodel is defined following the best practices of language design, incorporating abstraction, modularization, reuse and extensibility. The metamodel is organized into three packages: the Core package, the Extension package, and the Data Types package. The Core package contains the concepts that build up the interaction infrastructure of the language in terms of interaction flows, flow elements and parameters. The Core package concepts are extended by concrete concepts in the Extension package, which cover more specific behaviors. The Data Types package contains the custom data types defined by IFML. The IFML metamodel reuses the basic data types of the UML metamodel, uses several UML metaclasses as the basis for the IFML metaclasses, and assumes that a domain model is represented with a UML class diagram or an equivalent representation.
The IFML Model is the top-level element, containing the other model elements. It comprises a Domain Model, an Interaction Flow Model, and View Points. The Interaction Flow Model provides the user's view of the application by referring to sets of Interaction Flow Model Elements which jointly define a fully functional portion of the system. Named Element is an abstract class, specializing the Element class (the most general class of the model), that represents elements having a name. Comments and constraints can be specified for any element. Interaction Flow Model Element is an abstract class that classifies the elements of an Interaction Flow Model. As such, it is not used directly in IFML diagrams; rather, it is refined by more specific concepts (such as Interaction Flow Element and Interaction Flow). In turn, these sub-concepts are abstract and therefore need to be suitably specialized.
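As an informal illustration of the structure just described (our own sketch; the normative definition is the MOF/ECORE metamodel, not this code), the abstract classes and containment relations can be rendered as follows:

```python
# Informal Python rendering of the metamodel skeleton described above.
from abc import ABC

class Element(ABC):
    """Most general class of the model."""
    def __init__(self):
        self.comments: list[str] = []      # comments can be attached
        self.constraints: list[str] = []   # constraints can be attached

class NamedElement(Element, ABC):
    """Abstract class representing elements that carry a name."""
    def __init__(self, name: str):
        super().__init__()
        self.name = name

class InteractionFlowModelElement(NamedElement, ABC):
    """Abstract classifier of the elements of an Interaction Flow Model;
    refined by more specific concepts such as Interaction Flow and
    Interaction Flow Element."""

class InteractionFlowModel(NamedElement):
    """The user's view of the application: sets of model elements that
    jointly define a fully functional portion of the system."""
    def __init__(self, name: str):
        super().__init__(name)
        self.elements: list[InteractionFlowModelElement] = []

class IFMLModel(NamedElement):
    """Top-level element: domain model, interaction flow model, view points."""
    def __init__(self, name: str, domain_model, interaction_flow_model, view_points=()):
        super().__init__(name)
        self.domain_model = domain_model
        self.interaction_flow_model = interaction_flow_model
        self.view_points = list(view_points)
```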
D. IFML Development Process
The development of highly interactive applications is normally handled with agile approaches, which iterate through phases of problem discovery, design refinement and implementation. Each iteration of the development method delivers a partial version or prototype of the system. Such an incremental lifecycle is particularly suitable for contemporary web and mobile applications, which need to be deployed swiftly and change frequently throughout their lifetime to adjust to user requirements. Fig. 4 offers one possible structured development process, positioning IFML within the activity flow:
1) Requirements specification: gathers and formalizes the information concerning the application domain and the expected functions. The input is the set of business requirements that motivate the application development, together with the available information on the technical, organizational and managerial context. The output is a functional specification document comprising:
• The identification of the user roles and of the use cases associated with each role;
• A data dictionary of the relevant domain concepts and of their semantic associations; and
• The workflow of each use case, showing the interactions of the main actors (the user, the application, and possibly external services) during the execution of the use case.
Furthermore, nonfunctional requirements should equally be specified, such as performance, scalability, security, accessibility and maintainability. When the application is directed to the general public, the requirements about the look and feel and the usability of the interfaces assume special prominence among the nonfunctional requirements. User-centered design practices that rely on the development of early mockups of the envisioned functionality can be employed. Such mockups serve for the initial validation of the interface concepts and later act as the basis for establishing more comprehensive and technical specifications in the front-end modeling stage.
2) Domain modeling: organizes the main information objects identified during requirements specification into a comprehensive and coherent domain model, normally a (typically visual) representation of the essential entities, their attributes and their relationships.
3) Front-end modeling: maps the information conveyance and data manipulation functionality identified by the requirements into a front-end model. Front-end modeling operates at the conceptual level, and this is where IFML comes into play. The developer can use IFML to specify: the organization of the front-end in one or more top-level view containers; the internal structure of every view container in terms of sub-containers; the components that constitute the content of each view container; the events exposed by the components and view containers; and the way such events trigger business actions and update the interface. A programmatic sketch of these concepts follows below.
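The sketch below is a hypothetical Python representation of view containers, view components, events and navigation flows (all class and event names are our own; real IFML models are edited visually or exchanged as XMI):

```python
# Hypothetical programmatic representation of an IFML front-end model.
class ViewComponent:
    def __init__(self, name: str, kind: str):   # kind: e.g. "List", "Details"
        self.name, self.kind = name, kind
        self.events: list[str] = []              # events exposed to the user

class ViewContainer:
    def __init__(self, name: str):
        self.name = name
        self.components: list[ViewComponent] = []
        self.subcontainers: list["ViewContainer"] = []

class NavigationFlow:
    """An event on a source element triggers navigation to a target."""
    def __init__(self, source_event: str, target: ViewComponent):
        self.source_event, self.target = source_event, target

home = ViewContainer("Home")
movies = ViewComponent("MovieCategory", "List")
detail = ViewComponent("MovieDetail", "Details")
home.components += [movies, detail]
movies.events.append("selectMovie")              # event raised by the list
flow = NavigationFlow("selectMovie", detail)     # the event updates the interface
```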
4) **Business logic modeling:** delineates the business objects and the methods needed to support the identified use cases. UML static and dynamic diagrams are usually used to highlight the object interfaces and the message flows. Process-oriented notations (like UML activity and sequence diagrams, BPMN process models, and BPEL service orchestrations) offer an efficient way of representing the workflow across services and objects. The services highlighted in the business logic design can be referenced in the front-end model to signify the operations to be triggered through interface interaction. Being interdependent in nature, the front-end, data and business-logic design activities are performed iteratively. The ordering of Fig. 4 is simply indicative: in some organizations, the work could start from the structure of the front-end, while the actions and data objects could be established at a later phase through analysis of the published information and of the operations requested to support the interactions.
Architectural design is the activity of defining the network, hardware and software components that make up the architecture within which the application delivers its services to users. The objective of architectural design is to establish the mix of these components that adequately achieves the application requirements with respect to scalability, efficiency, availability and security, while adhering to the technical and economic constraints of the project.
5) **Implementation:** entails producing the software modules that convert the data, business logic and interface design into an application running on the chosen platform. Data implementation maps the domain model onto one or more data sources, by associating conceptual-level entities with logical data structures (for instance, entities and relationships with relational tables). Business logic implementation produces the software components required to support the identified use cases. The implementation of individual components may benefit from the adoption of software design patterns, which organize the way fine-grained elements are devised and merged into larger and highly reusable functional units, and which also cater for nonfunctional requirements like scalability, availability, security and efficiency. Interface implementation translates the abstract-level View Containers and View Components into the corresponding constructs of the chosen implementation platform. View Containers and business objects may interoperate either in the server or in the client tier.
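As a toy illustration of this mapping (our own sketch, not the output of any specific tool), the following Python function translates an abstract "List" view component into an HTML fragment of the kind an interface implementation layer would produce:

```python
# Toy interface implementation step: translating an abstract "List" view
# component into a concrete artifact of the chosen platform (here, HTML).
def render_list_component(name: str, items: list[str]) -> str:
    lis = "".join(f"<li>{item}</li>" for item in items)
    return f'<ul id="{name}">{lis}</ul>'

print(render_list_component("MovieCategory", ["Drama", "Comedy", "Sci-Fi"]))
```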
6) **Testing and evaluation:** verifies the conformance of the implemented application to the functional and nonfunctional requirements. The most relevant aspects of testing interactive applications include:
- **a) Functional testing:** verification of the application behavior with respect to the functional requirements. Functional testing is decomposed into the classical activities of unit testing, integration testing and system testing.
- **b) Usability assessment:** the nonfunctional requirements of accessibility, communication effectiveness, and adherence to consolidated usability standards are verified against the produced front end.
- **c) Performance assessment:** the application's response time and throughput are examined under average and peak workload conditions. Insufficient service levels need to be monitored and analyzed, down to the level of the design, so as to identify and eliminate bottlenecks.
V. RESULTS OF THE IFML ANALYSIS AND ADDITION TO THE LIFECYCLE MAP
After conducting a detailed review of IFML in process development and analyzing the existing references, we were able to acquire a full picture of the IFML lifecycle. Our analysis found that IFML calls for requirements but does not necessarily support their specification. The UML profile supports the analysis and design phases. Also, with the support of WebRatio, the visual syntax defined through the DD and DI OMG standards, and the model serialization and exchange format defined on the basis of XMI, fully support the implementation stage. Finally, in the web engineering phases, IFML is located between analysis/design and part of implementation, in the same position as WebML, as shown in Fig. 5, because WebRatio allows implementation after design, with a rich interface and best practices.

proposed by [37] after adding IFML as shown in Table 2. IFML cannot support formulation, planning, and, but can support analysis/design and code generation. This is one of the new terms in IFML that can evaluate the project, as shown in Fig.4.
TABLE II. COMPARISON OF IFML WITH OTHER METHODS IN THE DEVELOPMENT PROCESS STAGE
<table>
<thead>
<tr>
<th></th>
<th>Formulation</th>
<th>Planning</th>
<th>Analysis</th>
<th>Design</th>
<th>Generation</th>
<th>Testing</th>
<th>Evaluation</th>
</tr>
</thead>
<tbody>
<tr>
<td>IFML</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>HDM</td>
<td>C</td>
<td>C</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>RMM</td>
<td>P</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
<td></td>
<td></td>
</tr>
<tr>
<td>OOHDM</td>
<td>P</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
<td></td>
<td></td>
</tr>
<tr>
<td>HMBS</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
<td></td>
<td></td>
</tr>
<tr>
<td>UWE</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>WebML</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td>P</td>
<td></td>
</tr>
<tr>
<td>OO-H</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>W2000</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>WAE</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
<td>C</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>SWM</td>
<td>P</td>
<td>P</td>
<td>P</td>
<td>P</td>
<td>P</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>OOWS</td>
<td>C</td>
<td>C</td>
<td>C</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
VI. DESIGN CASE STUDY
To show the capability of the IFML method to design web applications, we selected the movie shop of the Amazon website as a case study, as shown in Fig. 6.
A. Content Model
In this example, a Customer is assigned a shopping Cart that is initially empty. As the user browses through the pages and gets information about the available Movies, he adds products to the cart. The list of Movies currently selected by the user can be consulted at any time, offering the option of paying the current order, emptying the cart, or continuing to browse in order to add more Movies. Fig. 7 shows the content model for the Amazon movie shop in the IFML method.
B. Process Model
When the customer enters the website, he starts exploring the available Movies. Once he finds a movie of interest, he selects it, and the item goes into the cart. The user can either keep exploring products in order to add more items to his order, or manage the cart by deleting all the Movies or updating the quantities of the selected ones. Once the user is ready to proceed with the payment, he performs the checkout. In order to authorize the payment, it is necessary to send the customer information to the bank entity and wait for the confirmation. This procedure is illustrated in Fig. 8.
After selecting a Movie, the user can view the full description of the movie and can directly buy it by adding it to the cart, as shown in Fig. 10.
The procedure described in Fig. 9 and Fig. 10 is represented in IFML as shown in Fig. 11. Once the user selects a category from MovieCategory, a navigation event is produced and, as a result, the details of the Movie are shown in MovieDetail.

**Fig. 11.** IFML model corresponding to the exploration of Movie
Fig. 12 shows the model fragment that adds a product to the cart. Once the user presses the add button, a modal window appears asking for the desired quantity of items. This value, along with the SelectedMovie, is submitted as a parameter and represents the input of the triggered add-to-cart action. Once the action is performed, a confirmation window is displayed.

**Fig. 12.** IFML model corresponding to the add to cart event
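The interaction of Fig. 12 can be paraphrased in code. The following Python sketch (ours, not generated by WebRatio) mimics the event, the parameter binding of SelectedMovie and quantity, and the confirmation:

```python
# Sketch of the Fig. 12 interaction: pressing "add" opens a modal asking
# for a quantity; SelectedMovie and the quantity are the parameters of the
# triggered add-to-cart action, which ends with a confirmation.
def ask_quantity() -> int:
    return 2                       # stands in for the modal window

def add_to_cart(cart: dict, selected_movie: str, quantity: int) -> str:
    cart[selected_movie] = cart.get(selected_movie, 0) + quantity
    return f"Added {quantity} x {selected_movie}"   # confirmation message

cart: dict = {}
selected_movie = "The Matrix"      # parameter bound by the selection event
print(add_to_cart(cart, selected_movie, ask_quantity()))
```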
When the user chooses the Checkout option, the container Customer Information is displayed. The user must provide his personal information by filling out the form within this container. After the user submits his personal information, the container Payment Information is displayed. In this container the user must provide his bank account details to execute the payment process; see Fig. 13.

**Fig. 13.** IFML Module Representation of the Checkout Event
To increase reusability and modularization in the models, designers may decide to cluster homogeneous parts of the model into Modules. For instance, the part of the model that deals with the payment management can be packaged into a specific module. This would simplify the model of the application, as shown in Fig.14.

**Fig. 14.** Inner Process of the Module Payment Execution
After designing our case study with the IFML model and showing the important interactions in the process of buying a movie, we can conclude that the IFML method fully supports the analysis/design phase of the web engineering lifecycle. It also allows code generation as a semantic implementation, but it cannot fully support the other phases.
VII. CONCLUSION AND FUTURE WORK
In this paper, we analyzed the role of IFML in the process development lifecycle. In addition, we added IFML to the lifecycle map and compared IFML with the existing methods, in order to situate it within the phases of the web application development process. The results showed that IFML is a good method with best practices, but it cannot fully support the whole web development lifecycle. IFML comprises a UML profile and supports rich interfaces. That is an important basis for improving IFML so that it supports the lifecycle, through combination with other web engineering methods or the addition of agile practices to improve process development.
We recommend that researchers extend this work by applying IFML to different case studies; researchers can also combine IFML with other methods to overcome each method's weaknesses. Moreover, we recommend carrying out usability and reliability evaluations to assess the quality of this method.
ACKNOWLEDGMENT
We thank Professor Dr. Gustavo Rossi for his review and useful comments on this article.
REFERENCES
|
Conceptual paper
Open innovation and IPRs: Mutually incompatible or complementary institutions?
Mário A.P.M. Da Silva
Universidade do Porto, Porto, Portugal
ARTICLE INFO
Article history:
Received 10 March 2018
Accepted 25 March 2018
Available online 12 April 2018
JEL classification:
O33
Keywords:
Open innovation
IPRs
Knowledge spillovers
R&D
ABSTRACT
In this paper, we explain the analytics of a particular type of mechanism of open innovation (OI), namely the management of non-pecuniary exchange of information, and address the relationship between intellectual property rights (IPRs), particularly patent rights, and OI using a static game-theoretic setting of research and development (R&D) competition. We develop a duopoly model in which for-profit firms can contribute to downstream improvements to be made by a diverse community of inventors, workers and users of technology by creating knowledge spillovers which the development community employs as an input for its innovative efforts. Although a money market for R&D information does not exist, the commercial firms may appropriate an indirect return on innovative efforts. The degree of spillover information is determined endogenously, and equilibria with voluntary revealing may arise in our theoretical model. We show that, surprisingly perhaps, a rise in the strength of patent protection induces the free sharing and dissemination of technological information and other contributions to the OI development of innovations. Conversely, a fall in the strength of the patent system induces the exercise of traditional IPRs by innovative firms to protect their intellectual assets.
© 2018 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Introduction
We need to explain the involvement of commercial firms in open innovation (OI) in several industries using a variety of technologies. OI is defined by Chesbrough and Bogers (2014) as “a distributed innovation process based on purposely managed knowledge flows across organizational boundaries, using pecuniary and non-pecuniary mechanisms in line with the organization’s business model.” Economists need a better understanding of the incentives of intellectual property holders to initiate and participate in the development of OI. The new innovation model OI can relate to intellectual property rights (IPRs). IPRs, if properly managed, can be an enabling mechanism for innovation, as an instrument to promote the open exchange of knowledge (Arora, 1995).
In economic sectors in which innovation occurs rapidly and is incremental and cumulative by nature, the patent system may produce complex overlapping property rights and patent thicket problems (Shapiro, 2000). In the software industry, for instance, innovation tends to be highly incremental and cumulative, in which case essential licensing for using a new innovation is more likely to involve many patents. Patents, which are important traditional IPRs, create a temporary right to exclude others from using an invention. In such circumstances, the transaction costs of negotiating licenses with many different patentees are fairly high, and therefore the allocation of property rights does affect economic efficiency in the marketplace. Free and open development of software has the potential to eliminate such patent thicket problems, which constitute a major threat to innovation (Isaac & Park, 2004). François Letellier advocates the idea that open source software is one of the most advanced forms of OI.
In this paper, we explain the analytics of a particular type of mechanism of OI, namely the management of non-pecuniary exchange of information, and draw conclusions concerning the relationship between IPRs and OI. IPR management under OI recognizes the value of knowledge exchange to increase the rate of innovation. We consider some economic incentives that help explain OI using a simple research and development (R&D) duopoly framework. Game-theoretic aspects of R&D competition are possibly important in understanding OI incentives.
We identify several research gaps in the previous game-theoretic literature, specifically on the relationships between the strength of IPRs, the innovation incentives of the OI community, and the exercise of traditional IPRs by commercial firms. There are two research questions that have not been properly addressed yet. First, previous studies have not widely investigated if, when, and how weak appropriability conditions in the upstream industry might enhance innovation incentives and efforts in the development community of individual developers, customers and users of technology. The other question that has not been properly attended to is if, when, and how a fall in the strength of IPRs might induce the exercise of traditional IPRs by commercial firms to protect their intellectual assets. We describe next how we propose to fill these gaps in the existing literature.
Here we address the relationship between IPRs, particularly patent rights, and OI using a static game-theoretic setting of R&D competition. We present arguments for the proposition that patent rights complement OI rather than undermine it. The management of proprietary knowledge, in particular the managerial decision by a for-profit competing firm, at the free revelation and dissemination stage, to initiate and participate in OI projects, depends on how strong the IPR institutions protecting new inventions are.
We show that, surprisingly perhaps, a rise in the strength of patent protection increases the benefits to innovation in the OI sector, and so it induces the free sharing and dissemination of technological information and other contributions to the OI development of innovations. The OI communities and commercial firms have therefore (at least) a reason to use the IPR institutions in place to protect their innovation system. Conversely, a fall in the strength of the patent system decreases the protection rights against misappropriation by individual inventors, workers, the rival firm and other users of the technology and hinders the insurance that firms’ innovations remain free and open, and so it induces the exercise of traditional IPRs by innovative firms to protect their intellectual assets.
The rest of the paper is organized as follows. In “Literature review” section, as a backdrop for the current research, we briefly review the literature on open innovation and open source software. We present a theoretical model in “The model” section, and establish a number of analytical results in “Equilibrium results” section. In “Conclusion” section, we offer some concluding remarks, particularly on the role of IPRs to promote innovation by OI communities.
**Literature review**
This section briefly reviews the literature linking open innovation and open source software, the latter being an increasingly popular example of open innovation. There are at least two main themes in the existing literature on open innovation and open source software that economists have tried to explain: Why do firms release their source code to form an open source project? How could people collaborate for free and produce as a result of collaborative practices valuable information goods? Open innovation theories may be especially applicable for examining how firms have been able to benefit from the opportunities provided by open source, and why firms would contribute resources to projects that will benefit others.
The term 'open innovation' was introduced in 2003 by Henry Chesbrough. Open innovation is the "use of purposive inflows and outflows of knowledge to accelerate internal innovation and expand the market for external use of innovation" (Chesbrough, 2006). There are two important types of open innovation, depending on the directions of knowledge flows across the boundary of the firm: outside-in (or inbound) and inside-out (or outbound). A third type of open innovation, the so-called coupled open innovation, links outside-in and inside-out, and involves combining knowledge inflows and outflows between firms or agents (Gassmann & Enkel, 2004). The inside-out and coupled types are the less explored and understood by academic researchers and industry practice.
There are two flavors of open innovation (Euchner, 2010). Open-source innovation, the other definition of open innovation, builds on the concept of open-source software. The research of von Hippel on open and distributed processes innovation establishes the importance of user innovation using an example of open-source software (von Hippel, 2005). The treatment of intellectual property and control of the direction of innovation are the distinguishing factors of the two open innovation approaches. Open-source innovation redefines the firm itself, by requiring that firms take a modified view of intellectual property, by opening intellectual property, and shift the locus of control of the innovation process closer to the user community. This difference in open innovation definitions reflects a schism that has arisen in open-source software itself. There is a strong disagreement between the “free software” people and the “open software” people within the open-source software community (Chesbrough, 2012). However, this difference is more a matter of emphasis regarding the locus of and motive for innovation. User innovation and open innovation are two major distributed processes of innovation, both challenging the dominant view of technological innovation for most of the 20th century, that is, the closed innovation model (Bogers & West, 2012).
Following an open innovation approach, firms can use open source as an ongoing source of external innovation, or to spin off technologies that cannot be internally commercialized by the firm, while they can contribute and combine efforts to use open source too (West & Gallagher, 2006). Firms involved in open source software often make investments that will be shared with rivals. Innovation strategies firms employ include pooled product development, spinouts, selling complements and attracting donated complements. Firms can also use a broad range of knowledge sources for a firm’s innovation, including customers, academics and rivals. An open innovation approach presents to managers of commercial firms three challenges: Maximizing returns to internal innovation, incorporating external innovation, and motivating the generation and contribution of external knowledge. The last challenge involves a paradox: Why would firms contribute resources, including intellectual property, to open source projects that will benefit others, including their rivals?
Open source includes both an intellectual property strategy and a development methodology (West & O’Mahony, 2005). Open source refers to a specific set of software licenses approved by the non-profit open source initiative (OSI). Since the open source definition was drafted in 1998, the OSI has approved over fifty licenses that meet the requirements of an open source license (OSI, 2004). It also refers to a development methodology where a community of programmers collaborates to produce software using virtual tools and the Internet. There are two models of community building: the model of a community-founded project, which is the most familiar model of open source development, and, more recently, the sponsored model. Major technical infrastructures are required to enable collaborative software development among the sponsor’s programmers and the external community. Open source software differs from proprietary software in two important ways: in its intellectual property strategy and in its production process.
From a legal standpoint, the open source movement and proprietary software treat the source code of software differently. Programmers feel comfortable contributing to open source because they are assured of the right to make copies of the program and distribute those copies, as well as the right to make improvements to the program (Perens, 1999). Volunteers and companies are only able to cooperate because of the rights that come with open source. The free software movement has also argued that free software projects lead to better software, and it requires that software remain free perpetually (West & Dedrick, 2005). Free software contains intellectual property restrictions intended to force sharing of any derivative works. These movements grew out of the interests of programmers in writing better software for their own purposes. The open source movement has focused on gaining widespread adoption of free and open source software by businesses.
The collaborative open source production process is another difference between open source software and proprietary software development contributions. Levine and Prietula (2003) define open collaboration as any system of innovation relying on interacting participants in the creation of a product of economic value who are loosely coordinated. Patterns of collaboration, innovation and production were observed initially in open source software, but can now be found in user-to-user forums and online communities, among many other instances. The open source software development is characterized by being geographically distributed, where programmers work in arbitrary locations, and coordinate their activity by means of emails. Despite the weakening of traditional methods of coordinating work, open source software development has produced software of high quality. To explain how this has happened, Mockus, Fielding, and Herbsleb (2002) examined how work was distributed within the development community in two major projects, the Apache web server and the Mozilla browser. They concluded that the software development was controlled by a small group of programmers, but they received occasional error correction from a much larger group of developer-user contributors.
In spite of the important differences between open source and proprietary software, for-profit firms have sought to obtain the benefits of open source by releasing their proprietary source code to create new open source projects. West (2003) identifies two reasons why firms release their source code to open source projects in the design phase: to win adoption, or to gain development assistance in non-critical areas. One of the earliest sponsored projects was founded in 1998, when Netscape formed the Mozilla project. Ongoing sponsorship can reduce ambiguity and provide structure to keep the project going forward. Potential contributors can more easily find their role, as there is clearly someone in charge. This is just one way to motivate external users to adopt the technology. As to mature sponsored projects, an example of ongoing proprietary control can be found in "partly open" models such as Microsoft's Shared Source. The company retains full control of its development but may be successful in attracting contributions from users.
The manager of each for-profit firm has to decide how to use its proprietary technology, specifically the portion $1 - \beta$ of new technology which is actually protected by IPRs and thus can effectively be controlled by its legal owner. A firm's choices are restricted to the control of non-monetary exchanges of technological information. A firm may decide to keep its proprietary technology and use it in its production process. However, patented or copyrighted technology can also be made free and open. Thus we assume away pecuniary rewards tied to prices of patent sales or direct flows of licensing revenue. It is not an option for a firm to commercialize new technology, say by licensing the use of its technology to an external partner in return for a royalty or a fee.
Commercial firms may decide to initiate and contribute to OI. Different innovators and partners contribute resources to OI development without claiming any revenue-generating patent rights. Individual inventors, experienced workers and sophisticated users of technology improve the innovations of commercial firms. The productivity effect $\rho > 0$ of the joint production of innovation by firms and individuals is owing to an extensive prior knowledge base and a stock of solved technical problems. Follow-on innovations in the OI development tend to be creative combinations of prior available knowledge and previously known techniques. OI projects do not renounce the use of IPR institutions. IPRs and technology licensing help protect the intangible assets of the OI community against misappropriation and misrepresentation. Technology licensing contracts granting use of technology with restrictions, and copyright laws, are not perfectly enforced, and so only a fraction $1 - \beta$ of the improvements of initial innovations made by external partners is kept within the OI community. In OI projects, disclosure of the original technology is readily available and further modifications of the technology remain free and open. Improvements of the technology by the OI community are unrestricted, but the dissemination of improvements is restricted to keep them within the OI community.
We model two alternative ways in which firms control non-monetary exchanges of technological information from inside the firm out and from outside the firm in. The first version of the model is the integrated management model of inflows and outflows of knowledge, where each firm $i$ has a single control variable of external information flows, $\alpha_i \in [0, 1]$. The second version is the separated management model of external information flows, in which each firm chooses simultaneously the extent of inflows ($i$) and outflows ($o$) of knowledge, $\alpha_i(i) \in [0, 1]$ and $\alpha_i(o) \in [0, 1]$, respectively. Readily available disclosure of initial innovations and free distribution of incremental innovations imply that firms choose $\alpha = 1$ in the first version of the model, and $\alpha(o) = \alpha(i) = 1$ in the second one.
In this paper, we do not address the motivations of the OI community at large to improve and develop initial innovations, and we assume that the costs of individual members of the OI community in the innovation process are zero. In some cases, namely the software industry, important individual participants in free and open development seem to have contributed in response to altruistic and intrinsic motivations. Typically, in free and open development, users of technologies are actively involved in the innovation process (von Hippel, 1988). Incremental innovations can be made at low cost by the users of the technology, who may have a substantially higher private valuation for them.
External knowledge is complementary to the technological knowledge resulting from the R&D activities that take place within the borders of the firm. The identification and assessment of the complementarity and usefulness of bits of knowledge therefore become unnecessary. Firms do not need to incur search and opportunity costs, that is, costs of interacting with other firms and agents to select those firms and agents with whom technological communication and cooperation can take place.
Firm $i$'s effective cost reduction due to technological change, in the part which is determined by the firms' control of information exchanges, is, in the first version of the model, given by
$$X_i = x_i + \alpha_j(1 - \beta)x_j + \sum_{k=1}^{2} \alpha_k(1 - \beta)\,x_k\,\rho\,\alpha_{k'}(1 - \beta), \qquad k' \neq k$$
(1)
and, in the second version, given by
$$X_i = x_i + \alpha_i(i)\,\alpha_j(o)(1 - \beta)x_j + \sum_{k=1}^{2} \alpha_k(o)(1 - \beta)\,x_k\,\rho\,\alpha_k(i)(1 - \beta)$$
(2)
These expressions do not show additive terms of firm $i$'s technological progressiveness which are exclusively dependent upon exogenous factors $\beta, x_i, x_j$ and $\rho$. This procedure seems appropriate given that our analysis is about how firm management of inflows and outflows of proprietary knowledge can increase innovation.
The additive terms of Eqs. (1) and (2) capture the idea that each commercial firm and the external partners of the OI community are working on complementary innovations which together improve the production technology of the final homogeneous good, and so can be employed in production activities to reduce the marginal cost of each firm. The multiplicative terms $x_k\rho$ of these equations, which are preceded by the summation sign, represent the notion that OI development is a creative combination of heterogeneous knowledge and builds both on new inventions and on prior knowledge. These last terms are common to every $X_i$, $i = 1, 2$, which is an indication of the symmetric treatment, in terms of benefits, given to initiators of and contributors to OI projects.
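To make the expression concrete, the following Python snippet evaluates Eq. (1) as reconstructed above for the first version of the model (the function name and the parameter values are our own and purely illustrative):

```python
# Numeric evaluation of Eq. (1) as reconstructed above (first version of
# the model); parameter values are arbitrary illustrations.
def effective_cost_reduction(i: int, x, alpha, beta: float, rho: float) -> float:
    j = 1 - i                                    # index of the rival firm
    spill_in = alpha[j] * (1 - beta) * x[j]      # voluntary inflow from the rival
    oi_pool = sum(alpha[k] * (1 - beta) * x[k] * rho * alpha[1 - k] * (1 - beta)
                  for k in (0, 1))               # joint OI development term
    return x[i] + spill_in + oi_pool

x, alpha, beta, rho = [1.0, 1.0], [1.0, 1.0], 0.2, 1.0
print(effective_cost_reduction(0, x, alpha, beta, rho))   # -> 3.08
```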
A firm's level of profit gross of innovation cost should influence how much contribution to external innovative partners (that is, how much disclosure of innovations or dissemination of follow-on inventions) the firm chooses to make. Firm management controls the inflows and outflows of technological information in order to maximize operational profits and assure the maximum economic rent extraction possible for the firm.
Duopolists $i$ and $j$ compete à la Cournot in the final product market. Thus firm $i$'s operational profit is an increasing function of the following difference between the firms' effective R&D efforts:
$$2X_i - X_j = \bigl(2 - \alpha_i(1 - \beta)\bigr)x_i + \bigl(2\alpha_j(1 - \beta) - 1\bigr)x_j + \sum_{k=1}^{2} \alpha_k(1 - \beta)\,x_k\,\rho\,\alpha_{k'}(1 - \beta), \qquad k' \neq k$$
(3)
in the first version of the model, and in the second version,
$$2X_i - X_j = \bigl(2 - \alpha_j(i)\,\alpha_i(o)(1 - \beta)\bigr)x_i + \bigl(2\,\alpha_i(i)\,\alpha_j(o)(1 - \beta) - 1\bigr)x_j + \sum_{k=1}^{2} \alpha_k(o)(1 - \beta)\,x_k\,\rho\,\alpha_k(i)(1 - \beta)$$
(4)
We also ignore the additive terms of the differences between the firms' effective cost reductions that are uniquely dependent on factors exogenous to the industry in which firms $i$ and $j$ operate.
Equilibrium results
In this section we derive sufficient and necessary conditions for equilibria in the duopoly game of revelation and dissemination of technological information. Expression (3) or (4) of the objective of firm $i = 1, 2$ is a linear function in each of the firms' control variables of information exchanges, and so we should look for corner solutions in the derivation of equilibrium results in each version of our model.
We begin by establishing the sufficient and necessary conditions for an equilibrium in which both firms $i$ and $j$ initiate and contribute to OI. Innovative development is considered to be OI only if the original innovations are readily available from the commercial firms and their subsequent modifications by the diverse community of individual inventors, workers and users of technology are freely redistributable.
In equilibrium, $\alpha_i = \alpha_j = 1$ if $x_i = x_j > 0$ and
$$\rho \geq \frac{1}{2(1 - \beta)}$$
(5)
in the first version of the model, or $\alpha_i(o) = \alpha_j(o) = \alpha_i(i) = \alpha_j(i) = 1$ if $x_i > 0$, $x_j > 0$ and
$$\rho \geq \frac{1}{1 - \beta}$$
(6)
in the second one. Given the complementarity of external innovation in the industry, $\alpha(i) = 1$ is always an equilibrium strategy in the second version of the model.
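As an illustration of where condition (5) comes from, consider the derivative of expression (3) with respect to firm $i$'s control variable (a sketch under our reconstruction of the summation term, not a passage from the original source):
$$\frac{\partial (2X_i - X_j)}{\partial \alpha_i} = -(1-\beta)x_i + \rho\,\alpha_j(1-\beta)^2(x_i + x_j),$$
which, evaluated at $x_i = x_j = x > 0$ and $\alpha_j = 1$, equals $(1-\beta)x\,\bigl(2\rho(1-\beta) - 1\bigr)$. This is nonnegative, so that $\alpha_i = 1$ is a best response, exactly when $\rho \geq 1/(2(1-\beta))$.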
Non-pecuniary rewards can motivate the initiation of, and participation by, commercial firms in OI projects if the productivity effect $\rho$ is high enough. In OI, initial innovations are fully disclosed and follow-on incremental inventions are freely redistributed within the OI development.
A necessary condition for the innovation process to take place in the OI sector, given the degree of innovativeness $\rho$ of the OI community, is that IPR protection $1 - \beta$ should be strong enough, so that appropriation of the value created by innovative firms, and compliance with technology licenses that permit modifications but require that modifications remain free and open, are a real possibility.
The restriction of strategy choice to once-and-for-all commitments to disclosure and dissemination, $\alpha_i$, $i = 1, 2$, makes the selection of strategies of full commitment to OI comparatively more likely to occur in equilibrium. It is clear from the comparison between inequalities (5) and (6) that the equilibrium requirement imposed on $\rho$ in the first version of the model is less stringent than the one in the second version, as the critical $\rho$ is lower in the first version of the model.
These findings agree with the results of earlier game-theoretic literature in that, when the choice between revealing and secrecy is endogenous, revealing may be preferable to secrecy in equilibrium (Henkel, 2004). In a duopoly model of product quality, Henkel found that equilibria with both firms opting to reveal exist for low intensity of competition and medium to high heterogeneity of technology need. In addition, an open regime with compulsory revealing can lead to higher product quality and higher profits than a proprietary regime with no revealing.
Now we establish the requirements for equilibria in which the OI and IPR sectors of innovation co-exist. The first equilibrium of this sort is $\alpha_i = 0$, $\alpha_j = 1$ in the first version of the model, and $\alpha_i(o) = 0$, $\alpha_j(o) = 1$ with $\alpha_i(i) = \alpha_j(i) = 1$ in the second one, if $x_i > 0$, $x_j = 0$ and
$$\rho \leq \frac{1}{1 - \beta}$$
(7)
The same equilibrium requirement imposed on the critical level of the productivity effect $\rho$ of co-creation of innovation is derived in both versions of our model.
In equilibrium, firm $i$ holds on to its property rights to appropriate the value that it has created through innovations, and firm $j$ is willing to take part in the OI development of those innovations. This outcome is more likely to take place in the economy the weaker the IPRs are, that is, the higher $\beta$ is, which may seem somewhat odd.
Apparently, this result runs against the presumption of conventional intellectual property arguments to foster innovation. The traditional economic justification of IPRs such as patents is that they improve social welfare by providing the necessary incentives for innovation.
Multiple equilibria are possible depending on the expectations held by each competing firm. Thus there is an IPR equilibrium $\alpha_i = \alpha_j = 0$ in the first version of the model for any $\beta$, $x_i$, $x_j$ and $\rho$. In the second version, an IPR equilibrium $\alpha_i(o) = \alpha_j(o) = 0$ with $\alpha_i(i) = \alpha_j(i) = 1$ exists if $x_i > 0$, $x_j > 0$ and inequality (7) holds. There is another equilibrium, with OI chosen by both firms, for another set of parameters in the second version of the model: $\alpha_i(o) = \alpha_j(o) = 1$ and $\alpha_i(i) = \alpha_j(i) = 1$ if $x_i > 0$, $x_j > 0$ and inequality (6) is satisfied.
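The ubiquity of the first-version IPR equilibrium can be checked numerically. The sketch below scores strategies with the payoff proxy $2X_i - X_j$ of the reconstructed expression (3); the function name and the parameter grids are our own illustration:

```python
# Numeric check that alpha_i = alpha_j = 0 is always an equilibrium of the
# first version of the model: against a concealing rival, revealing never
# raises the payoff proxy 2*X_i - X_j of the reconstructed expression (3).
def payoff_proxy(a_i, a_j, x_i, x_j, beta, rho):
    s = (a_i * (1 - beta) * x_i * rho * a_j * (1 - beta)
         + a_j * (1 - beta) * x_j * rho * a_i * (1 - beta))
    return (2 - a_i * (1 - beta)) * x_i + (2 * a_j * (1 - beta) - 1) * x_j + s

for beta in (0.1, 0.5, 0.9):
    for rho in (0.5, 2.0, 10.0):
        conceal = payoff_proxy(0.0, 0.0, 1.0, 1.0, beta, rho)
        reveal = payoff_proxy(1.0, 0.0, 1.0, 1.0, beta, rho)
        assert reveal <= conceal    # deviating to reveal never pays
```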
Conclusion
This section presents the main conclusions arising from our simple model concerning how IPRs can relate to OI. A number of remarks can be made on the role of IPRs such as patents in fostering innovation when the IPR management strategies available to commercial firms are either enforcing the legal right given by a patent to an inventor or patent owner to exclude others from commercially exploiting the invention, or initiating and participating in OI projects.
We have shown that IPRs, properly managed, can enhance OI, most likely under strong IPR regimes. Equilibrium behavior of commercial firms in OI projects is more likely to emerge when the enforcement of IPRs is strong. As a result, taking a dynamic perspective, scarce resources in a given economic sector or industry should be diverted from the proprietary sector to the OI sector of innovation. Moreover, equilibrium behavior in OI projects is more likely to be observed under the first version of the model than under the second one. Each firm's full commitment to OI projects, from start to finish, by taking a single decision on the level of information exchange from inside the firm out and from outside the firm in, possibly gives the firm an advantage in undertaking OI projects and accelerates its rate of innovation.
However, OI development, if selected in equilibrium together with the enforcement of traditional IPRs, could be possible under weak IPRs too. Weak IPR protection should be of no major concern for commercial firms planning to engage in closed innovation when the option of OI is available to them. The idea that weak IPR protection could successfully foster innovation may seem odd. Closed innovation refers to an earlier innovation model in which R&D is realized within the boundaries of the firm. The paradigm of closed innovation holds the view that the generation and distribution of innovation requires control by innovative firms if there is to be successful innovation.
We now present a discussion of the study limitations and implications for future research. Business models have no explicit role to play in our theoretical model, as it was set up using a 'copy-left' approach to coordinate the contributions made by individual developers and users of technology. Open innovation processes combine internal and external ideas together into platforms and systems. These processes employ business models to define the requirements for these platforms and systems. A successful platform requires a business model that induces developers and customers to join the platform. Furthermore, the capital that firms may require to scale their innovations and subsequent improvements made by the OI community was not a question addressed in the model. From an open innovation perspective, firms contribute resources and donate intellectual property to open source projects while exploiting the common benefits of all contributors to make the technology more attractive and to facilitate the sale of related products.
Open-source innovation approaches need business models that can succeed in a more open world. Firms are increasingly defining new strategies for exploiting the principles of open innovation in a changing world. Future research should attend to the development of models for the strategic management of open innovation. It would be worth examining the ways in which the community contributions can add value to the firm, and the causal mechanisms linking managerial decisions to achieving an ongoing supply of external innovations and finding ways to appropriate value from those innovations for enhancing the firm performance.
References
Formal Specification and Verification of Concepts in Information Language in ODP Systems
Jalal Laassiri, Saïd El Hajji, Mohamed Bouhdadi
University Mohammed V-Agdal, Faculty of Sciences
Laboratory of Mathematic and Informatics and Applications
Rabat, Morocco
Laassiri.jalal@gmail.com, [El Hajji, Bouhdadi]@fsr.ac.ma
ABSTRACT: Distributed systems can be very large and complex, and the many different considerations which influence their design can result in a substantial body of specification, which needs a structuring framework if it is to be managed successfully. The purpose of the RM-ODP is to define such a framework. The Reference Model for Open Distributed Processing (RM-ODP) provides a framework within which support for distribution, inter-working and portability can be integrated. It defines an object model, architectural concepts and an architecture for the development of ODP systems in terms of five viewpoints. However, RM-ODP is a meta-norm, and several ODP standards remain to be defined. Indeed, the viewpoint languages are abstract in the sense that they define what concepts should be supported, not how these concepts should be represented. In this paper, we report on the definition of the syntax and semantics for a fragment of ODP object concepts defined in the RM-ODP foundations part and in the information language, using the UML/OCL meta-modeling approach. These concepts are suitable for describing and constraining ODP information viewpoint specifications.
Keywords: RM-ODP, Information Language, Structural Concepts, Denotational Meta-modeling Semantics, UML/OCL, PIM, PSM and MDA.
Received: 4 March 2010, Revised 18 April 2010, Accepted 28 April 2010
© DLINE. All rights reserved
1. Introduction
The rapid growth of distributed processing has led to the need for a coordinating framework for the standardization of Open Distributed Processing (ODP). The ODP computational viewpoint describes the functionality of a system and its environment in terms of a configuration of objects interacting at interfaces, independently of their distribution. Quality of service (QoS) contracts and service level agreements are an integral part of any computational specification, and are specified in ODP in terms of environment contracts. The Reference Model for ODP (RM-ODP) [1]-[4] provides such a framework. It creates an architecture supporting distribution, networking and portability. The foundations part [2] contains the definition of concepts and the analytical framework for the normalized description of (arbitrary) distributed processing systems. These concepts are gathered in several categories, including basic modeling concepts, specification concepts, organizational concepts, and structuring concepts. The architecture part [3] contains specifications of the required characteristics that qualify distributed processing as open. It defines a framework containing:
• Five viewpoints, called enterprise, information, computational, engineering and technology, which provide a basis for the specification of ODP systems;
• A language for each viewpoint, defining concepts and rules for specifying ODP systems from the corresponding viewpoint;
• Specifications of the functions required to support ODP systems;
• Transparency prescriptions, showing how to use the ODP functions to achieve distribution transparency.
In other words, the first three viewpoints take into account neither distribution nor the problems inherent in heterogeneity. This principle corresponds closely to the concepts of PIM (Platform Independent Model) and PSM (Platform Specific Model) in the MDA (Model Driven Architecture) [5]. However, RM-ODP is a meta-norm [6] and cannot be applied directly. For instance, the viewpoint languages are abstract in the sense that they define what concepts should be supported, not how these concepts should be represented. It is important to note that RM-ODP uses the term language in its broadest sense, that is, a set of terms and rules for the construction of statements from the terms; it does not propose any notation for supporting the viewpoint languages.
In fact, RM-ODP provides only a framework for the definition of new ODP standards. These standards include standards for ODP functions [7],[8]; standards for modeling and specifying ODP systems; and standards for methodology, programming, implementing, and testing ODP systems. Elsewhere, the languages Z [9], SDL [10], LOTOS [11] and Esterel [12] are used in the RM-ODP architectural semantics part [4] for the specification of ODP concepts. Unfortunately, up to now, no single formal method is suitable for specifying and verifying every aspect of an ODP system. The inherent characteristics of ODP systems imply the need to integrate different specification languages and to handle non-behavioral properties of ODP systems, that is, QoS concepts.
There has been a considerable amount of research on applying UML [13] as a syntactic notation for the ODP viewpoint languages [14]-[16]. The approach taken is to give a meta-model description for the language, that is, a definition of the language by itself. This is presented in terms of three views: the abstract syntax, the well-formedness rules and the semantics of the modeling elements. The abstract syntax is expressed using a subset of UML static modeling notations, namely class diagrams. The well-formedness rules are expressed in OCL [17], a precise language based on first-order logic. OCL is used for expressing constraints on object structure which cannot be expressed by class diagrams alone. We used the meta-modeling approach [18] to define the syntax of a sub-language for ODP QoS-aware enterprise viewpoint specifications.
Furthermore, a part of the UML meta-model itself has a precise semantics [18,19] defined using the denotational meta-modeling approach. The denotational approach [20] is realized by defining the instance form of every language element and a set of rules determining which instances are or are not denoted by a particular language element. There are three main steps in a denotational meta-modeling approach to the semantics (a minimal code sketch follows the list):
1. Define the meta-model for the model’s language: object template, interface template, action template, type, and role.
2. Define the meta-model for the instances’ language: objects, binders, and interfaces.
3. Define the mapping or the meaning function between these two languages.
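As a rough illustration of these three steps (our own sketch, not part of RM-ODP or of the paper's meta-models; all names are hypothetical), the model language, the instance language and the meaning function can be mirrored in code:

```java
import java.util.Set;

// Step 1: the language of models -- a template names the interfaces an instance must offer.
record ObjectTemplate(String name, Set<String> requiredInterfaces) {}

// Step 2: the language of instances -- an object carries the interfaces it actually exposes.
record ObjectInstance(String id, Set<String> interfaces) {}

// Step 3: the meaning function -- which instances are denoted by a template?
final class Semantics {
    static boolean denotes(ObjectTemplate t, ObjectInstance o) {
        // An instance is denoted by a template iff it offers every required interface.
        return o.interfaces().containsAll(t.requiredInterfaces());
    }
}
```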
There are good reasons for adopting the UML meta-modeling approach in the context of ODP systems. The UML meta-models provide a precise core for any CASE tool. Such tools include a consistency checker that makes sure the invariants defined on a model do not conflict, and a consistency checker between meta-models that makes sure different system specifications are consistent and do not conflict. Besides, for testing ODP systems [2],[3], the current techniques [21],[22] are not widely accepted. A new approach to testing, named agile programming [23],[24] or the test-first approach [25], is being increasingly adopted. A promising direction is to integrate the system model and the testing model using the UML meta-modeling approach [26]. This approach is based on executable UML [27]. In this context, OCL is used to specify the properties that have to be tested. OCL also serves to attach constraints to UML meta-models in order to verify the coherence of the meta-models, and to translate the constraints into code to evaluate them on instance models.
The part of RM-ODP considered in this paper is a subset for describing and constraining the structure of ODP information viewpoint specifications. It consists of modeling and specification concepts defined in the RM-ODP foundations part and concepts of the information language. The UML/OCL meta-model developed here elaborates the conceptual core of the ODP information viewpoint language. We do not consider concepts for describing dynamic behavior.
The rest of the paper is organized as follows. Section 2 presents the literature review. Section 3 describes the subset of concepts considered in this work, namely the object model and the information viewpoint. Section 4 describes the meta-model for generic models: object, action, template, type/subtype, class/subclass and basic/derived class. Section 5 describes the meta-model for model instances, which are essentially object diagrams. Section 6 makes the connection between models and their instances; this introduces the basic form of the semantic approach described here. A conclusion and perspectives close the paper.
2. Literature review
Behavior models play a central role in system specifications. Many specification languages can be used to specify the behavior of business and IT systems. A system designer chooses a particular language depending on the designer's experience and on the problems he or she is trying to solve. For example, to show the conformance of the implementation of a system's behavior with its specification, a system designer can use formal languages (for example, the Pi-calculus). To visualize the state machine of a developed system, a system designer may use a UML statechart diagram or an activity diagram (a variation of a state machine in which the states represent the performance of actions or subactivities [32]). The design of complex systems requires that a system designer solve many problems simultaneously (visualize a model, check the conformance of a model, etcetera), so several specification languages have to be used. This raises a problem: the designer needs to build several independent models of the same system. This leads to duplication of information, which can be an additional source of errors: models written in different languages can be inconsistent.
The concepts from the RM-ODP semantic domain that are necessary for modeling the behavior of systems were considered in [35]. The basic concepts used in that work are taken from clause 8, "Basic modeling concepts", of RM-ODP Part 2. These concepts are: action, time, and state. According to [30], these concepts are essentially first-order propositions about model elements.
Some concepts (type, instance, precondition, postcondition) from clause 9, "Specification concepts", are also used. Specification concepts are higher-order propositions applied to the first-order propositions about the model elements. Wegmann [31] states: "Basic Modeling Concepts and generic Specification Concepts are defined by RM-ODP as two independent conceptual categories. Essentially, they are two qualitative dimensions that are necessary for defining model elements that correspond to entities from the universe of discourse".
To explain the semantics of the generic model more clearly, the Alloy formalism was used. Alloy is a simple modeling language that allows a modeler to describe the conceptual space of a problem domain. Using Alloy, a specification of the RM-ODP semantic domain can be obtained.
The RM-ODP conceptual elements from the semantic domain can be partitioned in the following way:
```plaintext
model RM-ODP {
    domain { ODP_Concepts }
    state {
        partition … BasicModellingConcepts,
                    SpecificationConcepts : static ODP_Concepts
        …
    }
}
```
Code Fragment 1. RM-ODP model
Let us consider the minimum set of modeling concepts (Basic Modeling Concepts and Specification Concepts) necessary for the specification of system behavior. There are a number of approaches for specifying the behavior of distributed systems, coming from people with different backgrounds and considering different aspects of behavior. "However, they can almost all be described in terms of a single formal model" [34]. Based on [34], Lamport specifies the behavior of a concurrent system: a system designer has "to specify a set of states, a set of actions and a set of behaviors". Each behavior is modeled as a finite or infinite sequence of alternating states and actions.
To describe this sequence there are mainly two dual approaches. According to [33] they are:
1. “Modeling systems by describing their set of actions and their behaviors”.
2. “Modeling systems by describing their state spaces and their possible sequences of state changes”.
"These views are dual in the sense that an action can be understood to define state changes, and state changes occurring in state sequences can be understood as abstract representations of actions" [33]. The work in [35] considers both of these approaches as abstractions of the more general approach based on RM-ODP.
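A small code sketch of this duality (our own illustration, not taken from [33]-[35]): the same behavior can be described by its actions, each understood as a state transformer, or by the sequence of states those actions induce.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

final class DualViews {
    record State(int value) {}

    // Action view: the system is described by its actions, each defining a state change.
    static final UnaryOperator<State> INCREMENT = s -> new State(s.value() + 1);
    static final UnaryOperator<State> DOUBLE    = s -> new State(s.value() * 2);

    // State view: the same behavior, recorded as the sequence of states the actions induce.
    static List<State> stateSequence(State initial, List<UnaryOperator<State>> actions) {
        List<State> trace = new ArrayList<>();
        trace.add(initial);
        State current = initial;
        for (UnaryOperator<State> action : actions) {
            current = action.apply(current);
            trace.add(current);
        }
        return trace;
    }

    public static void main(String[] args) {
        // Behavior as actions: [INCREMENT, DOUBLE]; behavior as states: 1 -> 2 -> 4.
        System.out.println(stateSequence(new State(1), List.of(INCREMENT, DOUBLE)));
    }
}
```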
3. RM-ODP
RM-ODP is a framework for the construction of open distributed systems. It defines a generic object model in the foundations part, and an architecture which contains specifications of the required characteristics that qualify distributed processing as open. The architecture extends and specializes the object concepts of the foundations part. The RM-ODP architecture model consists of a set of five viewpoint models, the concepts and rules associated with the language of each model, the distribution transparency constructs, and the ODP functions. The entire model is based on the RM-ODP foundations: an object model, rules for specification, and rules for structuring. RM-ODP (Reference Model - Open Distributed Processing) [ISO96a][ISO96b][ISO98] is an international standard published by ISO/IEC. It provides a reference model for the specification of open distributed applications. The RM-ODP model can describe a system according to five viewpoints; each viewpoint is interested in a particular aspect of the system. These viewpoints are:
Enterprise. It introduces the concepts necessary to represent a system in the context of the enterprise in which it operates. It is concerned with the objectives and policies of the system. A system is then represented by a community, which is a configuration of enterprise objects formed to achieve a goal.
Information. It is focused on the semantics of information and on the processing carried out on information. A system is then described by information objects, relationships and behavior. The description is expressed through the use of three schemas, named invariant, static and dynamic.
Computational. It allows a functional decomposition of the system. The various functions are fulfilled by objects that interact through their interfaces. The basic concepts define the types of interfaces that the computational objects support, the way in which the interfaces can be bound, and the forms of interaction which can take place.
Engineering. It is focused on the deployment and communication of a system. It defines communication concepts like channel, stub and skeleton, and deployment concepts like cluster, capsule, etc.
Technology. It describes the implementation of a system in terms of a configuration of technical objects representing the hardware and software components of the implementation. The goal of such a description is to provide additional information for implementation and testing, by selecting standard solutions for the components and the communication mechanisms.
3.1 RM-ODP Object Model (Foundations Part)
The RM-ODP international standard [5] presents a very good architectural framework for modeling distributed systems. In our experience, unfortunately, not many modelers use the standard in their everyday practice at present. This is a pity, considering the amount of highly qualified expert knowledge invested in the project and the constructive potential its results could bring to practice if adequately used. We see the formalization of its framework as one way to promote the use of RM-ODP. The formalization requires a careful and attentive translation of the standard's definitions into formal logical constructions, but once done it would allow the creation of ODP-based software toolsets that could bring modelers an easy-to-apply version of the standard.
Generally, the term object model refers to the collection of concepts used to describe objects in an object-oriented specification, such as the OMG CORBA object model [5] and the RM-ODP object model [4]. It corresponds closely to the use of the term data model in the relational data model. To avoid misunderstandings, RM-ODP defines each of the concepts commonly encountered in object-oriented models. It outlines a basic object model which is unified in the sense that it has to serve each of the five ODP viewpoints successfully. It defines the basic concepts concerned with existence and activity: the expression of what exists, where it is and what it does. The core concepts defined in the object model are object and action. An object is the unit of encapsulation: a model of an entity. It is characterized by its behavior and, dually, by its states. Encapsulation means that changes in an object's state can occur only as a result of internal actions or interactions. An action is a concept for modeling something which happens. ODP actions may have duration and may overlap in time. All actions are associated with at least one object: internal actions are associated with a single object; interactions are actions associated with several objects.
Objects have an identity, which means that each object is distinct from any other object. Identity implies that there exists a reliable way to refer to objects in a model. Depending on the RM-ODP viewpoint, the emphasis may be placed on behavior or on states. When the emphasis is placed on behavior, an object is informally said to perform functions and offer services; these functions are specified in terms of interfaces. An interface is a subset of the interactions in which an object can participate. Contrary to other object models, an ODP object can have multiple interfaces.
The other concepts defined in the object model are derived from the concepts of object and action; these are class, template, type, subtype/supertype, subclass/superclass, composition and behavioral compatibility. The composition of objects is a combination of two or more objects yielding a new object. An object is behaviorally compatible with a second object, with respect to a set of criteria, if the first object can replace the second object without the environment noticing any difference in object behavior on the basis of that set of criteria.
A type (of an $<x>$) is a predicate characterizing a collection of $<x>$s. The ODP notion of type is much more general than in most object models. ODP also permits an object to satisfy several types and to change type dynamically.
A class (of an $<x>$) defines the set of all $<x>$s satisfying a type. An object class, in the ODP meaning, represents the collection of objects that satisfy a given type. Many object models do not clearly distinguish between a specification for an object and the set of objects that fit the specification. ODP makes this distinction explicit through templates and classes. An $<x>$ template specifies the common features of a collection of $<x>$s in sufficient detail that an $<x>$ can be instantiated using it.
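To make the type/class/template distinction concrete, here is a minimal sketch of our own (not from the standard): a type as a predicate, a class as the set of entities satisfying it, and a template as enough information to instantiate a new entity.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.function.Supplier;

final class TypeClassTemplate {
    record Obj(String name, int interfaceCount) {}

    public static void main(String[] args) {
        // A type <x> is a predicate characterizing a collection of <x>s.
        Predicate<Obj> multiInterfaceType = o -> o.interfaceCount() > 1;

        // A class is the set of all <x>s satisfying the type.
        List<Obj> population = List.of(new Obj("a", 1), new Obj("b", 2), new Obj("c", 3));
        List<Obj> multiInterfaceClass = population.stream().filter(multiInterfaceType).toList();

        // A template specifies common features in enough detail to instantiate a new <x>.
        Supplier<Obj> template = () -> new Obj("templated", 2);

        System.out.println(multiInterfaceClass);   // the objects that satisfy the type
        System.out.println(template.get());        // a fresh instance made from the template
    }
}
```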
3.2. RM-ODP Information Language
The information held by the ODP system about entities in the real world, including the ODP system itself, is modeled in an information specification in terms of information objects and their relationships and behaviors.
Basic information elements are modeled by atomic information objects. More complex information is modeled by composite information objects which, like any other ODP object, exhibit behavior, state, identity and encapsulation. An information object type is a predicate characterizing a collection of information objects; an information object class is the set of all information objects satisfying a given type.
An information object template specifies the common features of a collection of information objects in sufficient detail that an information object can be instantiated using it. It may reference static, invariant and dynamic schemas.
An action is a model of something that happens in the real world. Actions are instances; their types are modeled by ODP action types. An action in the information viewpoint is associated with at least one information object. It can be either an internal action or an interaction.
An invariant schema is a set of predicates on one or more information objects which must always be true. The predicates constrain the possible states and state changes of the objects to which they apply.
ODP also notes that an invariant schema can specify the types of one or more information objects, which will then always be satisfied whatever behavior the objects might exhibit. A static schema defines the state of one or more information objects at some point in time, subject to the constraints of any invariant schema.
A dynamic schema is a specification of the allowable state changes of one or more information objects, subject to the constraints of any invariant schema. A dynamic schema specifies how the information can evolve as the system operates. In addition to describing state changes, a dynamic schema can also specify the creation and deletion of information objects, and allow the reclassification of instances from one type to another.
Besides, in the information language, a state change involving a set of objects can be seen as an interaction between those objects. Not all the objects involved in the interaction need to change state; some of the objects may be involved in a read-only manner [29].
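To make the three schemas concrete, here is a minimal sketch of our own (a hypothetical bank-balance example, not from the standard): an invariant schema constrains every state, a static schema is one state at a point in time, and a dynamic schema constrains state changes.

```java
import java.util.function.BiPredicate;
import java.util.function.Predicate;

final class Schemas {
    record State(int balance) {}

    // Invariant schema: a predicate that must always be true of each state.
    static final Predicate<State> INVARIANT = s -> s.balance() >= 0;

    // Static schema: the state of an object at some point in time,
    // subject to the invariant schema.
    static final State INITIAL = new State(100);

    // Dynamic schema: a predicate on the allowable state changes.
    static final BiPredicate<State, State> DYNAMIC =
            (before, after) -> after.balance() >= before.balance() - 50;

    // A state change is legal iff both states satisfy the invariant
    // and the change satisfies the dynamic schema.
    static boolean legalChange(State before, State after) {
        return INVARIANT.test(before) && INVARIANT.test(after) && DYNAMIC.test(before, after);
    }
}
```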
4. Syntax Domain
We define in this section the meta-models for the concepts presented in the previous section. Figure 1 defines the context-free syntax for the core object concepts, and Figure 2 defines the context-free syntax for the information language.
In the following, we define context constraints for the defined syntax.
We consider the concepts of subtype/supertype (RM-ODP 2-9.9) and subclass/superclass (RM-ODP 2-9.10) as relations between types and classes respectively.
Context m: Model inv:
    mSpecifier->includesAll(m.InformationObjectTemplates.DynamicSchema)
    mDescriber->includesAll(m.InformationTemplate.StaticSchema)
    mConstrainer->includesAll(m.InformationObject.InvariantSchema)
    mActionTemplates->includesAll(m.InformationObjectTemplates.action)
    mTypes->includesAll(m.ActionTemplates.Types->union(m.InformationObject.Types))
5. Semantics Domain
The semantics of a UML model is given by constraining the relationship between a model and the possible instances of that model (see Figure 3), that is, by constraining the relationship between expressions of the UML abstract syntax for models and expressions of the UML abstract syntax for instances. We define a model to specify the ODP information viewpoint, that is, a set of information objects, their relationships and their behaviors. This model defines the semantic domain (Figure 3).
A system can only be an instance of a single system model, because it is self-contained and disjoint from other models. On the other hand, objects are instances of one or more object templates, and they may be of one or several types. With no further constraints, it is possible for an object to change the templates of which it is an instance; thus this meta-model supports dynamic types.
The well-formedness rules for instances are given below:
Context s: system inv:
The source and target information objects of s's links are information objects in s:
    s.informationObjects->includesAll(s.links.source->union(s.links.target))
Links between two information objects are unique per role:
    s.links->forAll(l | s.links->select(l' | l'.source = l.source and l'.target = l.target and l'.of = l.of) = Set{l})
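As the paper notes for OCL in general, such constraints can be translated into code and evaluated on instance models. A hypothetical Java rendering of the two rules above (the types and names are ours, not the paper's meta-model):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

final class WellFormedness {
    record Link(String source, String target, String role) {}
    record SystemInstance(Set<String> informationObjects, List<Link> links) {}

    // Rule 1: the source and target of every link are information objects of the system.
    static boolean endpointsContained(SystemInstance s) {
        return s.links().stream().allMatch(l ->
                s.informationObjects().contains(l.source())
                && s.informationObjects().contains(l.target()));
    }

    // Rule 2: links between two information objects are unique per role.
    static boolean linksUniquePerRole(SystemInstance s) {
        return s.links().stream()
                .collect(Collectors.groupingBy(l -> List.of(l.source(), l.target(), l.role())))
                .values().stream()
                .allMatch(group -> group.size() == 1);
    }
}
```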
The following constraints declare, in the style of [28], the time dependence of the "Specification concepts" (RM-ODP 2.9):
Context Time inv:
    forAll(o : InformationObject, t : Time | t.instant->notEmpty implies o.state->notEmpty)
Context Precondition inv:
    forAll(prec : DynamicSchema.Precondition, o : InformationObject | exists(s : State | o.mappedTo = prec and o.state_start = s))
Context Postcondition inv:
    forAll(postc : DynamicSchema.Postcondition, o : InformationObject | exists(s : State | o.mappedTo = postc and o.state_end = s))
6. Meaning Function
Other invariants are required to constrain the relationships between models and instances. These constitute the semantics, which is the subject of this section. The semantics of the UML-based language is defined by the relationship between a system model and its possible instances (systems). The constraints are relatively simple, but they demonstrate the general principle. Firstly, there are two constraints relating to information objects and links, respectively. The first shows how inheritance relationships can force an information object to be an instance of many information object templates.
Context o: object inv:
The templates of o must be a single template and all the parents of that template:
    o.of->exists(t | o.of = Set{t}->union(t.parents))
The second ensures that a link connects objects of templates as dictated by its role.
Context l: link inv:
Information objects which are the source/target of a link have templates which are the source/target of the corresponding role:
    l.of.source->intersection(l.source.of)->notEmpty
        and l.of.target->intersection(l.target.of)->notEmpty
Secondly, there are five constraints which ensure that a model instance is a valid instance of the model it is claimed to be an instance of.
The first and second ensure that information objects are associated with templates and classes known in the model.
Context s: system inv:
The model that s is an instance of includes all object templates that s's information objects are instances of:
    s.of.InformationObjectTemplates->includesAll(s.InformationObjects.of)
The model that s is an instance of includes all information object classes that s's information objects are instances of:
    s.of.InformationObjectClass->includesAll(s.InformationObjects.of)
The third ensures that links are associated with roles known in the model.
Context s: system inv:
The model that s is an instance of includes all the roles that s's links are instances of:
    s.of.roles->includesAll(s.links.of)
The fourth constraint ensures that the cardinality constraints on roles are observed.
Context s: system inv:
The links of s respect the cardinality constraints of their corresponding role:
    s.links.of->forAll(r | let links_in_s = r.instances->intersection(s.links) in
        (r.upperBound->notEmpty implies links_in_s->size <= r.upperBound)
        and links_in_s->size >= r.lowerBound)
The fifth ensures that reverse links are in place for roles with inverses. If a link is of a role with an inverse, then there is exactly one corresponding reverse link:
Context s: system inv:
    s.links->forAll(l | l.of.inverse->notEmpty implies
        s.links->select(l' | l'.source = l.target and l'.target = l.source
            and l'.of = l.of.inverse)->size = 1)
7. Conclusion
The Reference Model for Open Distributed Processing (RM-ODP) provides a framework within which support for distribution, interworking and portability can be integrated. However, the ODP viewpoint languages define what concepts should be supported, not how these concepts should be represented. In addition, the UML standard has adopted a meta-modeling approach to define the abstract syntax of UML. One approach to defining the formal semantics of a language is denotational: essentially elaborating the value or instance denoted by an expression of the language in a particular context. Using the denotational meta-modeling approach, we have defined in this paper the UML/OCL-based syntax and semantics of a language for a fragment of ODP object concepts described in the foundations part and in the information viewpoint language. Indeed, these concepts are suitable for defining and constraining ODP information viewpoint specifications. In parallel, we are applying the same approach to define a language of concepts characterizing dynamic behavior.
References
[34] www.infoscience.epfl.ch/record/464/files/BalabkoW03A.pdf
TOWARDS SEMANTIC KNOWLEDGE MAPPING: AN EXTENSION OF COMPENDIUM WITH SEMANTIC KNOWLEDGE REPRESENTATION
Krishna Sapkota¹, Arantza Aldea¹ and René Bañares-Alcántara²
¹Department of Computing, Oxford Brookes University, Oxford, UK
{k.sapkota, aaldea}@brookes.ac.uk
²Department of Engineering Science, University of Oxford, Oxford, UK
rene.banares@eng.ox.ac.uk
ABSTRACT
Compendium is a knowledge mapping application equipped with graphical representations of ideas and arguments. Extending the processes in Compendium with Semantic Web technologies can be beneficial for the intelligent searching of concepts or ideas, and for supporting the decision-making process. This paper presents an extended Compendium which exploits the Semantic Web for knowledge representation and user interaction. The results, as evaluated by experts and users, show that the extension eases and streamlines the decision-making process.
KEYWORDS
Compendium, Decision Matrix, Wicked Problem, IBIS, Semantic Web, Ontology
1. INTRODUCTION
Consideration of different criteria and their weights is the most important step in a decision-making process. One software application that supports this process with a graphical user interface is Compendium [1],[2]. It is an Issue Based Information System (IBIS) [3], which visualises the connection of ideas and information in a graphical user interface. The ideas and information are illustrated by various kinds of nodes. Each node is identified by its own symbol, name and description. The nodes can be created, arranged and linked to each other through the familiar drag-and-drop feature and short-cut keys.
The existing system, even with extensions, still lacks a semantic representation of the knowledge. Various extensions of Compendium with additional features have shown promising results in streamlining decision-making processes. Some examples are the Option-vs-Criteria matrix, the Global-Parameters matrix and the Goals-vs-Criteria matrix [4]. These matrices are tables with different headers, such as criteria, options and goals. For example, in the Option-vs-Criteria matrix, the criteria are lined along the left column of the table and the options are arranged along the top row. However, some improvements to these extensions are still expected, such as making the knowledge in the matrices usable, reusable, searchable and portable. One possibility is to exploit Semantic Web technologies to represent the knowledge, which fulfils these expectations. Moreover, such a representation can store, retrieve and update the information representing criteria, goals and parameters; check the consistency of the information; and present the information in a semantic hierarchy.
This paper focuses on exploiting Semantic Web technologies in order to make the decision matrices [4] more meaningful and reusable. One such technology is the domain ontology, which describes the states and processes of a domain. In this work, the ontology representing the knowledge of the matrices in Compendium was created manually with help from domain experts. The knowledge was provided with options to check consistency and to export into several useful formats. This paper extends the decision matrices presented in [4] with Semantic Web technologies; the processes and goals of the matrices themselves are therefore outside the scope and contribution of this paper. The contributions of this paper are the following:
(1) **Explicit representation of knowledge**: It makes explicit the implicit information that is embedded in the various decision matrices, such as the *Option-vs-Criteria* matrix, the *Global-Parameters* matrix and the *Goals-vs-Criteria* matrix.
(2) **Reusability of the existing tools**: It increases the reusability of the existing tools incorporating the matrices by adopting the Semantic Web technologies. Furthermore, the knowledge is more adaptable, portable and shareable as it is represented in a W3C\(^1\) standard format.
(3) **User friendly interface**: In addition to representing the knowledge in a homogeneous and standard format, the current work extends the system with a Semantic Web oriented user interface. In particular, users are guided with useful information at each step of the decision-making process.
The rest of the paper is organised as follows. The background of Compendium is described in Section 2. Some earlier approaches to adding useful functionality to Compendium are presented in Section 3. Section 4 describes the extended Compendium, with its new features and the integration of Semantic Web technologies. Section 5 presents a case study and its results. Section 6 concludes the paper with recommended future work.
2. BACKGROUND
Compendium is a knowledge mapping software tool developed by the Open University, which uses IBIS and has been developed over a decade [3]. The major elements of IBIS are topics, issues and positions. A topic defines the overall view of a problem that needs solving. The issues are also referred to as questions or problems that need to be answered. Likewise, the positions, or alternative positions, represent the possible solutions or answers. The issues are also associated with arguments, which describe the pros and cons of a position or of another argument.
A topic representing a problem can be very complex and is referred to as a **Wicked Problem** [5]. In an IBIS, the system deals with every issue related to a topic and finally comes to an overall solution. However, while treating an issue, another issue may arise, and the newly emerged issue may create further issues. The new and subsequent issues are treated in the same way as the main issue. In a practical scenario, solving a problem can thus produce a complex and nested structure of problems, called **Wicked Problems**. They are extremely hard or almost impossible to solve due to their contradictory, incomplete and ever-changing requirements, and are very difficult to recognise [5].
Compendium has been developed as a result of the evolution of IBIS. There have been many changes and developments in IBIS since its inception. The most significant advance was the development of graphical IBIS, also referred to as gIBIS [6], where issues and positions are visualised graphically as distinct nodes. It was designed for a team of people working on the same project, assisting the team members in understanding and working in collaboration. As its popularity spread, there were several attempts to build similar kinds of systems. One of the most successful and popular is Compendium.
The advantage of graphical IBIS in Compendium is that the whole design of the problem-solving process can be visualized as a tree, and problems can be tackled visually from leaf to root. It makes the process transparent, as the progress of selecting a position for an issue can be clearly visualised, and the team becomes aware of the depth of the problem more accurately. Compendium illustrates ideas and information via various kinds of nodes, which have their own symbols, names and descriptions. The nodes can be created, arranged, rearranged and linked together through the convenient drag-and-drop feature and short-cut keys. Figure 1 shows the different types of nodes in the working environment of Compendium. These features enable dynamic visual modelling and mapping of dialogues and arguments. They also help team members exchange ideas and information efficiently, which expedites and streamlines decision-making processes.
There are three elements in Compendium, namely 1) hypermedia concept mapping, 2) a conceptual framework and 3) meeting facilitation. In the concept mapping element, the issues, positions and options are visually represented to show their interconnection in the form of a map. The map can be populated with issues, and the issues are connected with the possible solutions and their pros and cons. In the conceptual framework element, the most common problems can be tackled using templates. The templates are created by following standard operating procedures, best practices and other appropriate approaches. In the meeting facilitation element, the discussions, arguments and subsequent decisions made in a meeting are recorded. This process is also referred to as Dialog Mapping.

3. EXTENDED COMPENDIUM WITH DECISION MATRICES
A team at Oxford Brookes University and the University of Oxford [4] extended Compendium by creating and integrating useful tools to support the decision-making process. The key extensions are the decision-related matrices, such as the Option-vs-Criteria matrix, the Goals-vs-Criteria matrix, the Global-Parameters matrix and the Design-Rationale [4],[7],[8]. In the previous version of Compendium, these matrices were represented in external spreadsheet files. The extended Compendium provided facilities for entering, deleting, editing and updating a matrix visually in an integrated environment. Users found the extended Compendium very useful, as it eliminated the complication of importing external files and having to establish connections with external applications in order to edit and update the information. The graphical representation of the matrix in the integrated Compendium is depicted in Figure 2.

The Options-vs-Criteria matrix facilitates the selection of the best option from all the available options, considering the relevant criteria [9]. This matrix is associated with every question node, where every alternative can be evaluated against a set of criteria. It is visually represented as a table by placing the options along the top row and the criteria along the left column. Each criterion is supplied with a weight, which represents its degree of importance in the decision process. Figure 2 shows the Options-vs-Criteria matrix created by the team. This matrix was implemented in a decision supporting tool called DRAMA [7], which was used in wastewater plants. It recorded all the design objectives, the selected and rejected options, and all the decisions made.
Decision Rationale explains the reasoning behind a decision by going through all the available options and their criteria [10]. Using this information in the future is particularly helpful for analysing the effects of changing some of the micro decisions on the overall decision. A decision made at one point in time could be inappropriate at another time. For example, suppose there were options to choose among materials x, y and z: a year ago x was the cheapest, but now y is. In this case, the decision to choose a material based on its cost has changed, with an impact on subsequent decisions, which have to be re-evaluated.
Representing the information embedded in the matrices explicitly can make the process more meaningful and less error-prone. This can be achieved by utilising Semantic Web technologies. Furthermore, the creation of these matrices could be semantically guided, and they could be consistently maintained and updated. Therefore, the goal of this project has been to exploit Semantic Web technologies in order to support the development and integration of the desired functionalities in Compendium, as described in the next section.
4. ADDING SEMANTICS TO COMPENDIUM
This section explains the process of representing, in Semantic Web technology, the information required for the Options-vs-Criteria matrix and its dependent matrices in Compendium. Representing the information explicitly, as a shared conceptualisation, allows users to work together and to reuse and distribute the information in an efficient way [11]. The newly extended Compendium, Semantic-Compendium, guides users through the creation, selection, removal and update of the options, criteria and parameters in Compendium. An example of how the selection of criteria is guided by the newly extended system is depicted in Figure 3. Furthermore, it facilitates the decision-making process. This section describes the process of ontology creation using the Protégé OWL Editor, and it also explains the interactions between the information in the ontology and Compendium. The relevant information is presented to users for matrix creation. The users are guided through the process of information selection while creating and manipulating the decision matrix and its dependent matrices. Once new information or an instruction is created, added or changed, the users are able to save it in the ontology, maintaining its consistency and structure. The overall process is described in the following four phases.
4.1. Compendium Ontology Creation
A domain ontology describing the states and processes of Compendium was created manually using the Protégé OWL Editor. The three important elements of the ontology are classes, individuals and properties. Classes are concepts representing entities and their important aspects. Individuals are entities that fall under some class. Take, for example, Human as a class: among the entities that fall under this class are MichelleObama and BarackObama, which are therefore considered individuals of Human. There are two types of properties associated with classes and individuals: object properties and data-type properties. A property holds two values at its two ends, with the property itself sitting in the middle; the value on the left is called the domain and that on the right is called the range. An object property relates two individuals; in the above example, the object property hasWife relates BarackObama to MichelleObama as its domain and range respectively. A data-type property relates individuals to data-type values such as strings, integers, doubles and dates; for example, the data-type property hasPosition relates the individual BarackObama to the string value "President". It is general practice to supply each object property with an inverse property, in which the individuals of the domain and range are flipped. The inverse property also helps in making implicit information explicit. Besides, there are two other common terms that appear in ontology descriptions, namely T-Box and A-Box. In a simplified interpretation, the T-Box can be regarded as the ontology with classes and axioms but without individuals: classes represent the concepts, and the axioms specify how the classes are determined and related to each other. Likewise, the A-Box represents the individuals and their properties.
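As an illustration, the same classes, individuals and properties could be created programmatically with the Jena ontology API instead of Protégé. This is a sketch of ours, not the project's code: the namespace is hypothetical, and the modern org.apache.jena packages postdate the original work.

```java
import org.apache.jena.ontology.*;
import org.apache.jena.rdf.model.ModelFactory;

public class OntologySketch {
    public static void main(String[] args) {
        String ns = "http://example.org/demo#";  // hypothetical namespace
        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);

        // T-Box: a class (concept)
        OntClass human = m.createClass(ns + "Human");

        // A-Box: individuals of the class
        Individual barack = m.createIndividual(ns + "BarackObama", human);
        Individual michelle = m.createIndividual(ns + "MichelleObama", human);

        // Object property relating two individuals (domain on the left, range on the right)
        ObjectProperty hasWife = m.createObjectProperty(ns + "hasWife");
        hasWife.addDomain(human);
        hasWife.addRange(human);
        barack.addProperty(hasWife, michelle);

        // Data-type property relating an individual to a literal value
        DatatypeProperty hasPosition = m.createDatatypeProperty(ns + "hasPosition");
        barack.addProperty(hasPosition, "President");

        m.write(System.out, "TURTLE");  // serialize T-Box and A-Box together
    }
}
```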
The concepts in the ontology are organised so that they correspond to the entities in Compendium, and they were developed under a class called Compendium_Component. The immediate subclasses of this class are Issue, Position, Criteria and Parameter. The layer containing these four classes is called the upper layer. Another layer, of subclasses of the upper-layer classes, was created and called the lower layer. The classes in the lower layer were populated with individuals and their descriptions.
Figure 4 illustrates the T-Box of the ontology.
Figure 4: T-Box of Compendium.owl ontology
The creation of properties is also an important and thoughtful task in ontology engineering. The object properties created in the ontology are hasCriteria, hasDecision, hasIssue, hasPosition and hasRecommendation. Each object property is coupled with its inverse property; as a result, a corresponding set of inverse properties was created: isCriteriaOf, isDecisionOf, isIssueOf, isPositionOf and isRecommendationOf. Take, for example, the object property hasCriteria, whose domain can take values from the union of Position and Goal, whereas its range can only take values from Criteria.
Figure 5 illustrates the Protégé OWL Editor environment with the ontology, showing some individuals and properties.
4.3. Connection between the Ontology and Compendium
In order to work with ontologies in Java, some communication between these technologies is required. Different applications and technologies use different data structures and function calls, which makes communication among applications difficult. This can be alleviated by the use of Application Programming Interfaces (APIs). Jena [12], a Java API for ontologies, acts as a bridge between an ontology and Java. Jena retrieves the information in the ontology and converts it into a model which can be manipulated with the SPARQL query language [12]. In particular, the model can be queried, modified and updated using SPARQL, and Jena makes these SPARQL features available in Java. Figure 6 depicts the flow of information from the ontology to Compendium.
Figure 6: Connection and flow of information in the systems
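For example, the list of criteria individuals could be retrieved with a SPARQL SELECT query executed through Jena, along the following lines (a sketch; the namespace and variable names are our assumptions, not the project's actual identifiers):

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;

public class CriteriaQuery {
    // Prints the URI of every individual typed as a Criteria in the model.
    static void listCriteria(Model model) {
        String sparql =
            "PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
            "PREFIX comp: <http://example.org/compendium#> " +  // hypothetical namespace
            "SELECT ?c WHERE { ?c rdf:type comp:Criteria }";
        Query query = QueryFactory.create(sparql);
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("c"));
            }
        }
    }
}
```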
In Semantic-Compendium, Jena was imported and used in a class called `ConnectionJenaOWL.java`, which provides three vector objects to hold information about the lists of Criteria, Goals and Parameters. Compendium accesses this information and uses it to render drop-down lists in GUI panels. After the creation of the GUI panels to display information in trees, the vectors are modified to contain a more complex data structure called `OWLD_data`, which holds the categories of Criteria and Goals and the individuals of Criteria and Goals. Figure 7 shows the structure of the vectors and data produced by the class `ConnectionJenaOWL`.

**Figure 7: Structure of vectors and data provided by class ConnectionJenaOWL**
4.4. Using the Ontology in Compendium
The ontological concepts are accessed by Semantic-Compendium in order to generate various graphical user interfaces, such as separate trees of Criteria, Global-Parameters and Goals. The GUI tree creation processes are similar for all three components; therefore, only the process involved in the creation of the Criteria tree is described here.
During the criteria selection process, a user selects the Add Criterion button, which creates a UIDialog containing an expandable tree of Criteria. The criteria are grouped under their relevant categories, maintaining the corresponding structure of the ontology, and the tree can be expanded or collapsed as required. Clicking on a criterion selects it and, after the user's confirmation, adds it automatically to the Options-vs-Criteria matrix.
Figure 3 illustrates the structure of the UIDialog with the tree. The Criteria tree also provides an option to create a new criterion, which is either created under an existing category of Criteria, or a new Criteria type is created and the criterion is added under it. Changes made during this process are also updated in the Options-vs-Criteria matrix automatically, and the information is automatically updated in the working copy of the ontology for later use.
4.5. Ontology for Consistency Checking
During this phase, the selected criterion and goal in the *Goals-vs-Criteria* matrix are verified for relevancy by checking their specifications in the corresponding ontology. In particular, this ensures that the criterion is relevant enough to be considered for the goal in the matrix. In order to achieve this, the *Goals-vs-Criteria* matrix is populated with check-boxes. After the *Show-Related* button is clicked, the system computes the relevancy check and fills the relevant check-boxes with tick marks. If a check-box is ticked, the goal and the criterion belonging to that check-box are related and should be considered for further computation. The selection of relevant *Goals* and *Criteria*, and the ticking or clearing of the check-boxes, can also be done manually. These changes are automatically updated in the ontology and the matrix.
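A plausible sketch of the check behind the *Show-Related* button (our reconstruction, not the project's actual code): ask the ontology model whether the goal individual is linked to the criterion individual through the hasCriteria property.

```java
import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.Property;

public class RelevancyCheck {
    // True iff the ontology records the criterion as relevant to the goal.
    static boolean isRelated(OntModel m, String ns, String goalName, String criterionName) {
        Individual goal = m.getIndividual(ns + goalName);
        Individual criterion = m.getIndividual(ns + criterionName);
        Property hasCriteria = m.getProperty(ns + "hasCriteria");
        return goal != null && criterion != null
                && goal.hasProperty(hasCriteria, criterion);
    }
}
```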
Figure 9 features the GUI of the *Goals-vs-Criteria* matrix panel and the consistency checking. The extended system also provides options to collect a list of *Criteria* from three different sources: the current issue node in Compendium; all the issue nodes in the currently running Compendium project; or all the issue nodes in the ontology. A list of *Criteria* for the current issue node is collected by accessing the *Options-vs-Criteria* matrix of that node. Similarly, a list of all the *Criteria* in a Compendium project can be collected by accessing the matrices of all the issue nodes. Likewise, the list of *Criteria* in the ontology is collected directly from the vector provided by the class *ConnectionJenaOWL*.
5. CASE-STUDY AND RESULT
The project chosen for the case study is based on a project carried out by Keong Hann Yeoh [13]. In this case study, the *Carbon-Sequestration-Process* is analysed and different alternatives are compared using Compendium. This is a decision project with four main issue nodes and two sub-issue nodes. For the case study, the issue node about *transportation* is selected. In this node, the issue is "What is the optimal mode of transportation?", which has four possible alternatives, i.e. aeroplane, lorry, ship and train. After evaluating the given *Criteria*, Compendium recommended the *lorry* as the best alternative.
Figure 8 shows the redesigned GUI in Compendium with the new features. The issue and the positions under discussion are highlighted in yellow. A properties dialog-box is instantiated by double-clicking an issue node. The dialog-box contains the *multi-criteria decision matrix*, which consists of the *Options-vs-Criteria* matrix and the other matrices.
Figure 8: Compendium with new features [13]
In the Options-vs-Criteria matrix, a Criterion-Selection-Tree was presented. When a criterion called Time-Taken was selected from the tree, a confirmation dialog prompted the user for verification; after verification, the criterion was added to the matrix. Similarly, the Global-Parameters table provided a Parameter-Selection-Tree. In the parameter selection process, uranium was selected and a confirmation dialog was presented to the user; the selection was verified and added to the matrix. A parameter contains two attributes, i.e. a value and a unit. Likewise, in the Goals-vs-Criteria matrix, a Goals-Selection-Tree was instantiated with a confirmation dialog in order to select a goal, Sustainable-Energy, from the tree, and the goal was added to the matrix.
For the consistency test, the button labelled Show-Related in the Goals-vs-Criteria matrix panel was used to check whether it would tick all the relevant check-boxes where the Goals and Criteria were related. We received the expected result, with the check-boxes ticked accordingly.
Figure 9 illustrates the process with a confirmation dialog.

All the changes made during these processes were updated in the system automatically, which was checked by opening the ontology in the Protégé editor. The newly updated entities, such as classes, individuals and properties, were thoroughly analysed. The individual called Viable_Location was under the class named Transportation_Related, and it had an object property called hasCriteria. The values of the property were checked for updated information and were found to be Infrastructure_Availability and Practicality. This result confirmed that information access and updating between Compendium and the ontology had been successful, and that the consistency checking process was performed accurately.
6. CONCLUSION AND FUTURE WORK
Compendium is a useful software tool which is used in dialogue mapping and to support the decision process. Compendium is evolving, as more and more features and functionalities are expected and being developed. One example of added functionality was the integration of the Options-vs-Criteria matrix into Compendium, extending the previous system [4]. A further extension of the matrix, with added semantics to guide users through creation, manipulation and update, has been desired in engineering design processes. Implementing Semantic Web technologies in the system gives Compendium additional adaptability and reusability.
The project started with the aim of providing new features and tools for the existing Options-vs-Criteria matrix and its ancillary matrices, with added semantics. The main features added to the system are: 1) the communication between the system and its ontology, and 2) the availability of user guidance while creating, manipulating and updating the featured matrices. The newly built system has met its goals in successfully providing the required information for all the matrices proposed in the project.
Although the project has been completed successfully on its own terms, there are still many potential and desirable extensions. The exploitation of Semantic Web technologies in order to enhance the ever-evolving and useful software tool Compendium is highly desirable in the decision processes of scientific and engineering departments. This project is just a beginning in the long journey of this open-source application's development. The most immediate extensions of the work could be: 1) further development of the Goals-vs-Criteria matrix and its role in the decision matrix, 2) adding more of the features and functionality of the Protégé OWL Editor to Compendium, and 3) displaying the OWL ontology in Compendium.
REFERENCES
Find the Bug!
```c
/* From Linux 2.3.99 drivers/block/raid5.c */
static struct buffer_head *
get_free_buffer(struct stripe_head *sh,
                int b_size) {
    struct buffer_head *bh;
    unsigned long flags;
    save_flags(flags);
    cli();
    if ((bh = sh->buffer_pool) == NULL)
        return NULL;
    sh->buffer_pool = bh->b_next;
    bh->b_size = b_size;
    restore_flags(flags);
    return bh;
}
```
Metal Interrupt Analysis
```c
#include "linux-includes.h"
sm check_interrupts {
// Variables
// used in patterns
decl { unsigned } flags;
// Patterns
// to specify enable/disable functions.
pat enable = { sti(); } |
{ restore_flags(flags); } ;
pat disable = { cli(); } ;
// States
// The first state is the initial state.
is_enabled: disable => is_disabled |
enable => { err("double enable"); }
;
is_disabled: enable => is_enabled |
disable => { err("double disable"); }
// Special pattern that matches when the SM
// hits the end of any path in this state.
| $end_of_path$ =>
{ err("exiting w/intr disabled!"); }
;
}
```
Source: Engler et al., Checking System Rules Using System-Specific, Programmer-Written Compiler Extensions, OSDI '00.
Applying the Analysis
```c
/* From Linux 2.3.99 drivers/block/raid5.c */
static struct buffer_head *
get_free_buffer(struct stripe_head *sh,   /* <-- initial state is_enabled */
                int b_size) {
    struct buffer_head *bh;
    unsigned long flags;
    save_flags(flags);
    cli();                                /* <-- transition to is_disabled */
    if ((bh = sh->buffer_pool) == NULL)
        return NULL;                      /* <-- final state is_disabled: ERROR! */
    sh->buffer_pool = bh->b_next;
    bh->b_size = b_size;
    restore_flags(flags);                 /* <-- transition to is_enabled */
    return bh;                            /* <-- final state is_enabled is OK */
}
```
Source: Engler et al., Checking System Rules Using System-Specific, Programmer-Written Compiler Extensions, OSDI ’00.
Session Objectives
After this session, attendees will be able to:
• Understand the benefits of analysis and how it complements techniques like testing or inspection.
• Grasp the basics of static analysis technology.
• Know some analysis tools that are available, and the properties of others that are on the horizon.
• Evaluate current and future commercial analysis tools for use in their organization
• Develop a plan for introducing analysis into their organization
Outline
• Why static analysis?
• The limits of testing and inspection
• What is static analysis?
• What are current tools like?
• What does the future hold?
• What tools are available?
• How does it fit into my organization?
Software Errors
A problem has been detected and Windows has stopped your computer.
The problem seems to be caused by the following error:
PAGE_FAULT_IN_NONPAGED_AREA
If this is the first time you've seen this error, try the following steps:
1. Restart your computer. If this screen appears again, try these steps:
2. Check to make sure any new hardware or software is properly installed.
3. If this is a new installation, ask your helpdesk or system administrator for assistance.
mozilla.exe has encountered a problem and needs to close. We are sorry for the inconvenience.
If you were in the middle of something, the information you were working on might be lost.
Please tell Microsoft about this problem.
We have created an error report that you can send to us. We will treat this report as confidential and anonymous.
To see what data this error report contains, click here.
[Send Error Report]  [Don't Send]
Process, Cost, and Quality
Process intervention, conventional testing, and inspection yield first-order software quality improvement. Additional technology and tools are needed to close the gap.
[Figure: software quality vs. process rigor and cost. Recoverable labels: CMM levels 1–5; S&S, Agile, RUP (less rigorous to more rigorous); reference lines for "Perfection (unattainable)" and "Critical Systems Acceptability"; axes "Software Quality" and "Process Rigor, Cost".]
Existing Approaches
- **Testing:** *is the answer right?*
- Verifies features work
- Finds algorithmic problems
- **Inspection:** *is the quality there?*
- Missing requirements
- Design problems
- Style issues
- Application logic
- **Limitations**
- Non-local interactions
- Uncommon paths
- Non-determinism
- **Static analysis:** *will I get an answer?*
- Verifies non-local consistency
- Checks all paths
- Considers all non-deterministic choices
Errors Static Analysis can Find
- Security vulnerabilities
- Buffer overruns, unvalidated input...
- Memory errors
- Null dereference, uninitialized data...
- Resource leaks
- Memory, OS resources...
- Violations of API or framework rules
- e.g. Windows device drivers; real time libraries; GUI frameworks
- Exceptions
- Arithmetic/library/user-defined
- Encapsulation violations
- Race conditions
Theme: consistently following rules throughout code
Empirical Results on Static Analysis
- Nortel study [Zheng et al. 2006]
- 3 C/C++ projects
- 3 million LOC total
- Early generation static analysis tools
- Conclusions
- Cost per fault for static analysis was 61–72% of the cost per fault for inspections
- Effectively finds assignment, checking faults
- Can be used to find potential security vulnerabilities
Empirical Results on Static Analysis
- InfoSys study [Chaturvedi 2005]
- 5 projects
- Average 700 function points each
- Compare inspection with and without static analysis
- Conclusions
- Fewer defects
- Higher productivity
Adapted from [Chaturvedi 2005]
Quality Assurance at Microsoft (Part 1)
• Original process: manual code inspection
• Effective when system and team are small
• Too many paths to consider as system grew
• Early 1990s: add massive system and unit testing
• Tests took weeks to run
• Diversity of platforms and configurations
• Sheer volume of tests
• Inefficient detection of common patterns, security holes
• Non-local, intermittent, uncommon path bugs
• Was treading water in Longhorn/Vista release of Windows
• Release still pending
• Early 2000s: add static analysis
• More on this later
Outline
• Why static analysis?
• What is static analysis?
• Abstract state space exploration
• What are current tools like?
• What does the future hold?
• What tools are available?
• How does it fit into my organization?
Static Analysis Definition
- Static program analysis is the systematic examination of an abstraction of a program’s state space
- Metal interrupt analysis
- Abstraction
- 2 states: enabled and disabled
- All program information—variable values, heap contents—is abstracted by these two states, plus the program counter
- Systematic
- Examines all paths through a function
- What about loops? More later…
- Each path explored for each reachable state
- Assume interrupts initially enabled (Linux practice)
- Since the two states abstract all program information, the exploration is exhaustive
How can Analysis Search All Paths?
- Exponential # paths with if statements
- Infinite # paths with loops
- Secret weapon: Abstraction
- Finite number of (abstract) states
- If you come to a statement and you’ve already explored a state for that statement, stop.
- The analysis depends only on the code and the current state
- Continuing the analysis from this program point and state would yield the same results you got before
- If the number of states isn’t finite, too bad
- Your analysis may not terminate
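To make the caching idea concrete, here is a minimal sketch of such an exploration: a worklist over (program point, abstract state) pairs for the two-state interrupt property. This is not the actual Metal implementation; the `Cfg` type, the CFG encoding, and the function names are invented for illustration.

```typescript
// Minimal sketch of abstract state-space exploration for the two-state
// interrupt property. All names and the CFG encoding are illustrative.

type AbsState = "is_enabled" | "is_disabled";
type Node = string; // a program point

interface Cfg {
  entry: Node;
  succ: Map<Node, Node[]>;                   // control-flow edges
  effect: Map<Node, "cli" | "sti" | "none">; // effect of the statement
}

function explore(cfg: Cfg): void {
  // Pairs already explored; this check is what makes the search
  // terminate despite loops and branching.
  const seen = new Set<string>();
  const worklist: Array<[Node, AbsState]> = [[cfg.entry, "is_enabled"]];

  while (worklist.length > 0) {
    const [node, state] = worklist.pop()!;
    const key = `${node}:${state}`;
    if (seen.has(key)) continue; // already been here: stop this path
    seen.add(key);

    // Transfer function: how this statement changes the abstract state.
    let next: AbsState = state;
    const eff = cfg.effect.get(node) ?? "none";
    if (eff === "cli") {
      if (state === "is_disabled") console.error(`double disable at ${node}`);
      next = "is_disabled";
    } else if (eff === "sti") {
      if (state === "is_enabled") console.error(`double enable at ${node}`);
      next = "is_enabled";
    }

    const succs = cfg.succ.get(node) ?? [];
    if (succs.length === 0 && next === "is_disabled") {
      console.error(`exiting with interrupts disabled at ${node}`);
    }
    for (const s of succs) worklist.push([s, next]);
  }
}
```

The `seen` set is exactly the "already explored a state for that statement" check above: it bounds the work by |program points| × |abstract states|, which is why the search terminates even for code with loops.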
Example
1. void foo(int x) {
2.   if (x == 0)
3.     { bar(); cli(); }
4.   else
5.     { baz(); cli(); }
6.   while (x > 0) {
7.     sti();
8.     do_work();
9.     cli();
10.  }
11.  sti();
12. }
Path 1 (before stmt): true/no loop
2: is_enabled
3: is_enabled
6: is_disabled
11: is_disabled
12: is_enabled
no errors
Example
1. void foo(int x) {
2.   if (x == 0)
3.     { bar(); cli(); }
4.   else
5.     { baz(); cli(); }
6.   while (x > 0) {
7.     sti();
8.     do_work();
9.     cli();
10.  }
11.  sti();
12. }
Path 2 (before stmt): true/1 loop
2: is_enabled
3: is_enabled
6: is_disabled
7: is_disabled
8: is_enabled
9: is_enabled
11: is_disabled
already been here
Example
1. void foo(int x) {
2.   if (x == 0)
3.     { bar(); cli(); }
4.   else
5.     { baz(); cli(); }
6.   while (x > 0) {
7.     sti();
8.     do_work();
9.     cli();
10.  }
11.  sti();
12. }
Path 3 (before stmt): true/2+ loops
2: is_enabled
3: is_enabled
6: is_disabled
7: is_disabled
8: is_enabled
9: is_enabled
10: is_disabled
already been here
Example
1. void foo(int x) {
2.   if (x == 0)
3.     { bar(); cli(); }
4.   else
5.     { baz(); cli(); }
6.   while (x > 0) {
7.     sti();
8.     do_work();
9.     cli();
10.  }
11.  sti();
12. }
Path 4 (before stmt): false
2: is_enabled
5: is_enabled
6: is_disabled
already been here
all of state space has been explored
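As a usage example, the four-path exploration above can be reproduced by feeding foo() to the explore() sketch from earlier; the node names mirror the statement numbers in the listing, and the encoding is again illustrative only.

```typescript
// CFG encoding of foo() from the example slides, for the explore() sketch.
const fooCfg: Cfg = {
  entry: "2",
  succ: new Map<Node, Node[]>([
    ["2", ["3", "5"]],   // if: then-branch / else-branch
    ["3", ["6"]],        // { bar(); cli(); }
    ["5", ["6"]],        // { baz(); cli(); }
    ["6", ["7", "11"]],  // while: enter body / exit loop
    ["7", ["8"]],        // sti();
    ["8", ["9"]],        // do_work();
    ["9", ["6"]],        // cli(); back edge to the loop head
    ["11", ["12"]],      // sti();
    ["12", []],          // }
  ]),
  effect: new Map<Node, "cli" | "sti" | "none">([
    ["3", "cli"],
    ["5", "cli"],
    ["7", "sti"],
    ["9", "cli"],
    ["11", "sti"],
  ]),
};

explore(fooCfg); // terminates: each (node, state) pair is visited at most once
```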
Soundness and Completeness
• **Soundness**
• If the analysis says the program is OK, there are no bugs
• No *false negatives*
• **Completeness**
• If the analysis gives a warning, it is real
• No *false positives*
• **Contrast:** Testing is complete, but not sound
• **No static analysis can be sound, complete, and terminating**
• Perfect static analysis is undecidable on nontrivial programs for even simple attributes
• Thus, every analysis approximates (using abstraction)
• **Many static analyses are useful nevertheless**
• E.g. a sound tool with few false positives in practice
Attribute-Specific Analysis
• Analysis is specific to
• A quality attribute
• Race condition
• Buffer overflow
• Use after free
• A strategy for verifying that attribute
• Protect each shared piece of data with a lock
• Presburger arithmetic decision procedure for array indexes
• Only one variable points to each memory location
• Analysis is inappropriate for some attributes
• Approach to assurance is ad-hoc and follows no clear pattern
• No known decision procedure for checking an assurance pattern that is followed
Outline
• Why static analysis?
• What is static analysis?
• What are current tools like?
• Example: FindBugs
• What does the future hold?
• What tools are available?
• How does it fit into my organization?
FindBugs Demonstration
Outline
• Why static analysis?
• What is static analysis?
• What are current tools like?
• What does the future hold?
• Design intent driven analysis
• What tools are available?
• How does it fit into my organization?
public class Logger {
private Filter filter;
public void setFilter(Filter newFilter) {
if (!anonymous) manager.checkAccess();
filter = newFilter;
}
...
public class Logger { ...
private Filter filter;
public void setFilter(Filter newFilter) {
if (!anonymous) manager.checkAccess();
filter = newFilter; }
public void log(LogRecord record) { ...
synchronized (this) {
if (filter != null && !filter.isLoggable(record)) return;
} ...
/*
* All methods on Logger are multi-thread safe.
*/
public class Logger {
private Filter filter;
/**
* @param newFilter a filter object (may be null)
*/
public void setFilter(Filter newFilter) {
if (!anonymous) manager.checkAccess();
filter = newFilter;
}
public void log(LogRecord record) {
synchronized (this) {
if (filter != null && !filter.isLoggable(record)) return;
}
}
}
Consider class Logger in its entirety!
/** ... All methods on Logger are multi-thread safe. */
public class Logger {
private Filter filter;
/**
 * @param newFilter a filter object (may be null)
 */
public void setFilter(Filter newFilter) {
if (!anonymous) manager.checkAccess();
filter = newFilter;
}
public void log(LogRecord record) {
synchronized (this) {
if (filter != null && !filter.isLoggable(record)) return;
}
}
}
Class Logger has a race condition.
/** ... All methods on Logger are multi-thread safe. */
public class Logger {
private Filter filter;
/**
 * @param newFilter a filter object (may be null)
 */
public synchronized void setFilter(Filter newFilter) {
if (!anonymous) manager.checkAccess();
filter = newFilter;
}
public void log(LogRecord record) {
synchronized (this) {
if (filter != null && !filter.isLoggable(record)) return;
}
}
}
Correction: synchronize setFilter()
Tool Demonstration: JSure
Models are Missing
- **Programmer design intent is missing**
- Not explicit in Java, C, C++, etc
- *What lock protects this object?*
- “This lock protects that state”
- *What is the actual extent of shared state of this object?*
- “This object is ‘part of’ that object”
- **Adoptability**
- Programmers: “Too difficult to express this stuff.”
- Annotations in tools like JSure: *Minimal effort* — concise expression
- Capture what programmers are *already thinking about*
- No full specification
- **Incrementality**
- Programmers: “I’m too busy; maybe after the deadline.”
- Tool design (e.g. JSure): Payoffs early and often
- Direct programmer utility — *negative marginal cost*
- Increments of payoff for increments of effort
- **Tooling benefits of design intent**
- *Scaleability* because analysis is local
- *Precision* (few false positives) due to avoiding incorrect assumptions
Reporting Code–Model Consistency
• Tool analyzes consistency
• No annotations ⇒ no assurance
• Identify likely model sites
• Three classes of results
+ Code–model consistency
✗ Code–model inconsistency
• Informative — Request for annotation
Design Intent Case Study: Microsoft Standard Annotation Language
• **SAL**: A language of contracts between functions
• **Preconditions**
• Statements that hold at entry to the callee
• What does a callee expect from its callers?
• **Postconditions**
• Statements that hold at exit from the callee
• What does a callee promise its callers?
• **Usage example:**
• `a0 RT func(a1 … an T par)` — an annotation a0 before the return type RT describes the return value; annotations a1 … an before a parameter type T and name par describe that parameter
• Buffer sizes, null pointers, memory usage, …
**SAL Example**
```c
wchar_t *wcsncpy(__out_ecount(num) wchar_t *dest, __in_ecount(num) wchar_t *src, size_t num);
```
- **__in** The function reads from the buffer. The caller provides the buffer and initializes it.
- **__out** The function writes to the buffer. If used on the return value, the function provides the buffer and initializes it. Otherwise, the caller provides the buffer and the function initializes it.
- **__bcount(size)** The buffer size is in bytes.
- **__ecount(size)** The buffer size is in elements.
- **__opt** This parameter/result can be NULL and must be checked for nullness before a dereference.
Recommendations
• If you use Microsoft’s tools…
• Turn on /analyze
• Annotate all functions that write to buffers
• Annotate all library functions
• Annotate other functions as possible
Available as part of Microsoft Visual Studio and Windows SDK
Outline
• Why static analysis?
• What is static analysis?
• How does static analysis work?
• What are current tools like?
• What does the future hold?
• What tools are available?
• How does it fit into my organization?
Error Taxonomy (incomplete list)
- **Concurrency**
- race conditions
- deadlock
- data protected by locks
- non-lock concurrency (e.g. AWT)
- **Exceptional conditions**
- integer over/underflow
- division by zero
- unexpected exceptions
- not handling error cases
- type conversion errors
- **Memory errors**
- array bounds / buffer overrun
- illegal dereference (null, integer, freed)
- illegal free (double free, not allocated)
- memory leak
- use uninitialized data
- **Resource/protocol errors**
- calling functions in incorrect order
- failure to call initialization function
- failure to free resources
- **Input validation**
- command injection
- cross-site scripting
- format string
- tainted data
- **Other security**
- privilege escalation
- denial of service
- dynamic code
- malicious trigger
- insecure randomness
- least privilege violations
- **Design and understanding**
- dependency analysis
- heap structure
- call graph
- **Code quality**
- metrics
- unused variables
Microsoft Tools
- Static Driver Verifier (was SLAM)
- Part of Windows Driver Kit
- Uses model checking to catch misuse of Windows device driver APIs
- PREfast and the Standard Annotation Language
- Ships with Visual Studio (premium edition) and Windows SDK
- Standard Annotation Language
- Lightweight code specifications
- Buffer size, memory management, return values, tainted data
- PREfast
- Symbolically executes paths to find memory errors
- Lightweight version of PREfix analysis used internally at Microsoft
- Verifies SAL specifications
- Blogs on getting started with SAL
- Microsoft docs
- If you use Microsoft tools, use these!
FindBugs
• findbugs.sourceforge.net
• Focus: bug finding
• Language: Java
• Open source project
• Free
• Large community
• Easy to adapt and customize
• Many defect detectors
• Eclipse plugin support
• Mostly searches for localized bugs
• Memory errors
• array bounds / buffer overrun
• illegal dereference (null, integer, freed)
• double free
• memory leak
• use uninitialized data
• Input validation
• command injection
• tainted data
• Concurrency
• race conditions
• deadlock
• data protected by locks
• Resource/protocol errors
• failure to free resources
• Exceptional conditions
• integer over/underflow
• not handling error cases
• type conversion errors
• Code quality
• unused variables
Coverity Prevent/Extend
- www.coverity.com
- Focus: bugs and security
- Languages: C, C++, Java, C#
- OS: Windows, Linux, OS X, NetBSD, FreeBSD, Solaris, HPUX
- Builds on the Metal static analysis research project at Stanford
- Open source analysis project
- http://scan.coverity.com
- Selling points
- Low false positive rates
- Scales to 10 MLOC+
- Statistical bug finding approach
- Extensibility with Extend
- Seamless build integration
- Memory errors
- array bounds / buffer overrun
- illegal dereference (null, integer, freed)
- double free
- memory leak
- use uninitialized data
- Input validation
- command injection
- cross-site scripting
- format string
- tainted data
- Concurrency
- race conditions
- deadlock
- Resource/protocol errors
- calling functions in incorrect order
- BSTR library usage (Microsoft COM)
- failure to free resources
- Exceptional conditions
- not handling error cases
GrammaTech CodeSonar
- [www.grammatech.com](http://www.grammatech.com)
- Focus: bug finding
- Languages: C, C++
- OS: Windows, Linux, Solaris, OS X
- Selling points
- Strong coverage of C/C++ errors
- Minimize false negatives
- Binary analysis support
- Support for custom checks
- Easy integration with build
- CodeSurfer program understanding tool
- Memory errors
- array bounds / buffer overrun
- illegal dereference (null, freed)
- illegal free (double free, not allocated)
- memory leak
- use uninitialized data
- Input validation
- format string
- tainted data
- Concurrency
- race conditions
- deadlock
- Exceptional conditions
- integer over/underflow
- not handling error cases
- division by zero
- type conversion errors
- Design and understanding
- navigation
- dependency analysis
- ASTs, CFGs, pointer analysis
- heap structure
- call graph
Klocwork Insight
- www.klocwork.com
- Focus: security and bugs
- Languages: C, C++, Java
- OS: Windows, Linux, Solaris, AIX, OS X
- Selling points
- Strong focus on both bugs and vulnerabilities
- Built-in extensibility
- Enterprise/process support
- track quality over time
- Architectural visualization support
- Memory errors
- array bounds / buffer overrun
- illegal dereference (null, integer, freed)
- illegal free (double free, not allocated)
- memory leak
- use uninitialized data
- Input validation
- command injection
- cross-site scripting
- format string
- tainted data
- Concurrency
- race conditions
- Resource/protocol errors
- calling functions in incorrect order
- Exceptional conditions
- not handling error cases
- Other security
- insecure randomness
- least privilege violations
- Design and understanding
- dependency analysis
Fortify 360 Source Code Analyzer
- www.fortify.com
- Focus: security
- Languages: C, C++, .NET family (C#, VB), Java, ColdFusion, TSQL, PLSQL, XML
- OO support from the beginning
- Windows, Linux, OS X, Solaris, AIX, HP-UX, FreeBSD
- Sponsor of FindBugs, fully integrated FindBugs support
- Selling points
- Strong focus on security
- Built-in extensibility
- Good coverage of security errors
- Memory errors
- array bounds / buffer overrun
- illegal dereference (null, freed)
- double free
- memory leak
- use uninitialized data
- Input validation
- command injection
- cross-site scripting
- format string
- tainted data
- Concurrency
- race conditions
- deadlock
- Resource/protocol errors
- calling functions in incorrect order
- failure to call initialization function
- failure to free resources
- Exceptional conditions
- integer over/underflow
- unexpected exceptions
- not handling error cases
- Code quality
- metrics (attack surface, etc.)
PolySpace
- www.polyspace.com
- (now part of MathWorks)
- Focus: embedded system defects
- Languages: C, C++, Ada
- UML Rhapsody, Simulink models
- OS: Windows, Linux, Solaris
- Selling points
- Focus on embedded systems
- Mathematically verifies code with proof engine
- Assured code shown in green
- Errors in checked classes cannot occur
- Memory errors
- array bounds / buffer overrun
- illegal dereference (null, integer, freed)
- use uninitialized data
- reference to non-initialized class members
- Exceptional conditions
- integer over/underflow
- division by zero
- arithmetic exceptions
- type conversion errors
SureLogic JSure
- www.surelogic.com
- Focus: concurrency, architecture, API usage
- Language: Java
- Selling points
- Focus on Java concurrency
- Immediate return on investment
- Professional services
- End-to-end support for FindBugs analysis
- Sound analysis – shows assured code w/ green plus
- Errors in checked classes cannot occur
- Concurrency
- race conditions
- data protected by locks
- non-lock concurrency (e.g. AWT)
- Architecture compliance
- module structure
- Full disclosure: I have a stake in SureLogic as a consultant and potential technology provider
Lattix LDM
- **www.lattix.com**
- Focus: architectural structure
- Languages: C, C++, Java, .NET
- OS: Windows, Linux, Mac OS X
- Published in OOPSLA 2005
- Selling points
- Focus on architectural structure
- Design Structure Matrix representation
- Built automatically from code
- Analysis extracts layered architecture
- Checks design rules
- Downloadable trial version
- **Design and understanding**
- dependency analysis
- impact analysis
- architecture violations
Source: OOPSLA 2005 paper
Headway Software Structure 101
- www.headwaysoftware.com
- Focus: architectural structure
- Languages: Java, .Net
- OS: Windows, Linux, OS X
- Selling points
- Focus on architectural structure
- Supports design structure matrices, other notations
- Structural analysis
- dependencies
- impact of change
- architectural evolution
- Downloadable trial version
- Design and understanding
- dependency analysis
- impact analysis
- architectural violations
- complexity metrics
Source: Headway Software web site
Outline
- Why static analysis?
- What is static analysis?
- How does static analysis work?
- What are current tools like?
- What does the future hold?
- What tools are available?
- How does it fit into my organization?
- Lessons learned at Microsoft & eBay: Introduction, measurement, refinement, check in gates
- Microsoft source: Manuvir Das
- eBay source: Ciera Jaspan
Introducing Static Analysis
- Incremental approach
- Begin with early adopters, small team
- Use these as champions in organization
- Choose/build the tool right
- Not too many false positives
- Good error reporting
- Show error context, trace
- Focus on big issues
- Something developers, company cares about
- Ensure you can teach the tool
- Suppress false positive warnings
- Add design intent for assertions, assumptions
- Bugs should be fixable [Manuvir Das]
- Easy to fix, easy to verify, robust to small changes
- Support team
- Answer questions, help with tool
Tool Customization
• Tools come with many analyses
• Some relevant, some irrelevant
• eBay example [Jaspan et al. 2007]
• Dead store to local is a critical performance bug if the dead code is a database access
• Process
• Turn on all defect detectors
• Estimate value of reports, false positives
• Assign each detector a priority
• Tied to enforcement mechanism, e.g. prohibited on check-ins
Cost/Benefit Analysis
• Costs
• Tool license
• Engineers internally supporting tool
• Peer reviews of defect reports
• Benefits
• How many defects will it find, and what priority?
• Experience at eBay [Jaspan et al. 2007]
• Evaluated FindBugs
• Found less severe bugs than the engineer equivalent
• But clearly found more bugs than the engineer equivalent
• Ultimately incorporated tool into process
• See OOPSLA 2007 practitioner report, Understanding the Cost of Program Analysis Tools
Enforcement
• Microsoft: check in gates
• Cannot check in code unless analysis suite has been run and produced no errors
• Test coverage, dependency violation, insufficient/bad design intent, integer overflow, allocation arithmetic, buffer overruns, memory errors, security issues
• eBay: dev/QA handoff
• Developers run FindBugs on desktop
• QA runs FindBugs on receipt of code, posts results
• High-priority fixes required
• Requirements for success
• Low false positives
• A way to override false positive warnings
• Typically through inspection
• Developers must buy into static analysis first
Root Cause Analysis
• Deep analysis
• More than cause of each bug
• Identify patterns in defects
• Understand why the defect was introduced
• Understand why it was not caught earlier
• Opportunity to intervene
• New static analyses
• written by analysis support team
• Other process interventions
Impact at Microsoft
• Thousands of bugs caught monthly
• Significant observed quality improvements
• e.g. buffer overruns latent in codebase
• Widespread developer acceptance
• Check-in gates
• Writing specifications
Analysis Maturity Model
*Caveat: not yet enough experience to make strong claims*
- **Level 1**: use typed languages, ad-hoc tool use
- **Level 2**: run off-the-shelf tools as part of process
- pick and choose analyses which are most useful
- **Level 3**: integrate tools into process
- check in quality gates, milestone quality gates
- integrate into build process, developer environments
- use annotations/settings to teach tool about internal libraries
- **Level 4**: customized analyses for company domain
- extend analysis tools to catch observed problems
- **Level 5**: continual optimization of analysis infrastructure
- mine patterns in bug reports for new analyses
- gather data on analysis effectiveness
- tune analysis based on observations
Analysis, Now and in the Future
• Static analysis is revolutionizing QA practices in leading companies today
• Exhibit A: Microsoft
• Comprehensive analysis is centerpiece of QA for Windows
• Now affects every part of the engineering process
• Static analysis enables organizations to:
• increase quality while enhancing functionality
• differentiate themselves from the competition
Questions?
Implementation of App Engine and Cloud Storage as REST API on Smart Farm Application
Khoirul Azkiya1*, Muhamad Irsan2, Muhammad Faris Fathoni3
1,2,3Telkom University, Indonesia
1khoirul.azkya@gmail.com, 2irsanfaiz@telkomuniversity.ac.id, 3mfarisfwork@telkomuniversity.ac.id
Submitted: Jan 11, 2024 | Accepted: Jan 14, 2024 | Published: Apr 1, 2024
Abstract: Smart Farm is an agricultural application that uses machine learning and cloud computing technology to improve efficiency in the farming process. Technological advancement and sustainable agriculture are two essential aspects of supporting global food security. This research investigates the implementation of App Engine and Cloud Storage in developing a REST API for the Smart Farm application. By utilizing cloud computing technology such as App Engine, together with cloud storage such as Cloud Storage, efficient solutions can be created to monitor and manage agriculture better. This research implements App Engine and Cloud Storage to develop a REST API that allows Smart Farm application users to access data and control farming devices efficiently. The authors designed, developed, and tested this system to ensure optimal performance and reliability in agricultural data collection and distribution. This method has several significant advantages. First, App Engine allows for easy scalability, ensuring the system can handle increased data demand without disruption. Second, Cloud Storage provides secure and scalable storage for agricultural data, which can be accessed from anywhere, giving farmers easy and quick access to critical data. Moreover, the use of cloud technology also reduces infrastructure and maintenance costs. The developed system integrates App Engine and Cloud Storage with the Smart Farm application. App Engine acts as a processing engine that receives user requests via the REST API, processes the required data, and provides appropriate responses. Farm data, such as image data, is stored and managed on Cloud Storage. Users can access this data through the Smart Farm app or other devices, enabling better farming monitoring and decision-making.
Keywords: REST API, App Engine, Cloud Storage, Smart Farm, black box.
## INTRODUCTION
Agriculture has become the main focus of efforts to maintain food security in a globalized era that demands sustainability (Harison et al., 2017). In Indonesia, agriculture is a source of livelihood and a key factor in meeting people's needs (Harison et al., 2017). Unfortunately, farmers' work is often less than optimally efficient: the decisions made affect agricultural yields and increase the time spent (Achyar et al., 2020). Although some have adequate knowledge of the agricultural sector, many farmers in Indonesia continue to rely on personal experience and instinct when making decisions on the farm (Achyar et al., 2020).
The rapid development of technology has facilitated the development of web services and web-based applications as a solution in supporting various human activities. Many web applications today are inseparable from two main components: front end and back end. The front end is the part of the web application that interacts directly with the user, while the back end includes aspects of the server, application, and database (Guntara & Azkarin, 2023). Back-end development focuses on the server-side aspects. The program code of the back-end application allows interaction between the browser and retrieval of data from the database (Guntara & Azkarin, 2023).
The Smart Farm application was therefore developed using Application Programming Interface (API) technology. An API is a software interface that contains a collection of instructions or functions, allowing communication between various software applications (Ariantara et al., 2020). Representational State Transfer (REST) is an API architecture that facilitates the transfer and request of data over the HTTP protocol, known for its ease of use in the mobile cloud paradigm (Suzanti et al., 2020). Here, the REST API integrates the application with the server and database. The application is planned to support farmers by providing quick access through the application to optimize the farmer's work process.
This application development incorporates cloud computing technologies, such as App Engine for building web applications and Cloud Storage for data storage. App Engine provides efficient infrastructure, while Cloud Storage provides flexibility in storing the application's data needs (Cloud, 2023). They are an essential part of the cloud technology infrastructure that supports multiplatform application development and management. Through web service architecture, applications can be integrated with various platforms through the same protocol, enabling more efficient communication between programs (Choirudin & Adil, 2019).
By implementing App Engine and Cloud Storage as REST API on the Smart Farm application, it is expected that farmers can access the application and upload images of their plants more efficiently. App Engine and Cloud Storage as REST API on Smart Farm application will increase farmers' productivity, reduce the risk of crop failure due to plant diseases, and strengthen the country's food security. Using technology, Indonesia is expected to become 'Lumbung Padi Asia' capable of producing high-quality rice and maintaining stable and quality food availability.
## LITERATURE REVIEW
### Related Research
This section reviews research related and relevant to the background of the Smart Farm application problem. Table 1 summarizes previous research related to this work.
<table>
<thead>
<tr>
<th>Author / Year</th>
<th>Titles</th>
<th>Problems</th>
<th>Solutions</th>
<th>Results</th>
</tr>
</thead>
<tbody>
<tr>
<td>Wiji Sulistiani, Wiwin Sulistyo / 2020</td>
<td>Implementasi Web Service dengan Metode REST Berbasis Golang pada Layanan Google Cloud Platform di PT Sumber Alfaria Trijaya, Tbk</td>
<td>PT Sumber Alfaria Trijaya, Tbk, faced problems synchronization master data between on-premises servers and cloud storage on the Google Cloud Platform. This problem impacts the Master Data Management (MDM) strategy.</td>
<td>Implemented Golang-based Web Service application with REST method and connected on-premises application with cloud storage. Allows real-time access to synchronized master data, optimally used for various business needs in multiple applications and divisions.</td>
<td>The results showed that the implementation successfully synchronized the Master Data thoroughly. The system running on Google Cloud Platform services and accessed through the MasterStore application runs smoothly, facilitating fast data exchange with smaller data sizes in JSON format. This gives management better capabilities in making business decisions.</td>
</tr>
<tr>
<td>Adam Firdaus, Dean Apriana Ramadhan / 2021</td>
<td>Pengembangan Back End Berbasis REST API pada Sistem E-Partisipasi dan E-Inisiatif Patriot Pangan</td>
<td>The Food Insecurity Agency has a system that needs to be more responsive to detect food insecurity in the community. The limitation of public participation in the Patriot Pangan application is limited to data reporting only, so it is necessary to develop features in the application that aim to attract public involvement.</td>
<td>This research involves the development of a REST API-based backend using Node.JS with Express.Js framework and MongoDB non-relational database. This research also includes the implementation of access tokens in the form of JSON Web Tokens (JWT) to improve API access security.</td>
<td>The development of the REST API-based back end of the E-Participation and E-Initiative Food Patriot application uses Node.JS with the Express.Js framework and the MongoDB non-relational database. This development was carried out using the scrum method for five sprints and resulted in a total of 23 APIs that have been successfully tested. This system is expected to help detect food insecurity cases early and help the community achieve crowdfunding to overcome food insecurity cases quickly.</td>
</tr>
<tr>
<td>Rangga Gelar Guntara, Varinia Azkarin / 2023</td>
<td>Implementasi dan Pengujian REST API Sistem Reservasi Ruang Rapat dengan Metode Black Box Testing (Studi Kasus: PT Lizzie Parra Kreasi)</td>
<td>Highlighting the shortcomings in the backend application and the lack of a database management system to facilitate the management of meeting room reservation data at PT Lizzie Parra Kreasi.</td>
<td>REST API development is needed as an intermediary between the front end and the server to facilitate data exchange. The authors designed and implemented a REST API using the Node.js platform to improve the efficiency of the reservation system at PT Lizzie Parra Kreasi.</td>
<td>Implementing the REST API using Node.js provides superior performance compared to a PHP implementation. Testing with the black box method also shows that each function on each API endpoint runs according to the expected output through the HTTP GET, POST, PUT, and DELETE methods. These findings confirm that the REST API for the meeting room reservation system can be designed and implemented effectively, allowing the client application to exchange data with the database so that the reservation history is stored properly.</td>
</tr>
</tbody>
</table>
### Representational State Transfer
REST (Representational State Transfer) is a standard protocol for web communication using Hypertext Transfer Protocol (HTTP) links (Susanti & Mailoa, 2020). It enables clients to send requests via HTTP methods, and servers respond with REST Responses. REST messages consist of headers and bodies. The HTTP header records each transaction on HTTP, while the body contains the data to be sent (Susanti & Mailoa, 2020). The primary REST methods include GET, POST, PUT, and DELETE (Perkasa & Setiawan, 2018).
One of REST advantages is its utilization of commonly used Internet communication protocols like HTTP. It is lightweight and widely adopted in cloud-based API development by companies such as Amazon, Microsoft, and Google (Nurfauzia, 2021). Due to this, REST is easily implementable across various platforms. Services on the web that adhere to REST principles are often referred to as RESTful API.
### App Engine on Google Cloud Platform
Google's App Engine serves as a hosting service designed to assist developers in running web applications without the need to delve into intricate infrastructure aspects (Yusrizal et al., 2017). Given its utilization of a non-relational database system, App Engine proves suitable for managing and storing unstructured data (Nugroho & Mustofa, n.d.). Within the Smart Farm context, the App Engine functions as a pivotal platform for hosting cloud-based applications. Its roles include backend management, provision of REST API, seamless integration with Cloud Storage, and fortification of application security measures.
Furthermore, App Engine offers robust security and authentication functionalities, enhancing the protection of sensitive data within Smart Farm applications.
### Cloud Storage on Google Cloud Platform
Cloud Storage is commonly characterized as a digital data storage service accessible over the Internet. Its utilization presents several advantages, including heightened data security, adaptable data accessibility, and a reduced risk of data loss or corruption (Farizy & Eriana, 2011). Within the realm of Smart Farming, Cloud Storage serves as a pivotal solution for storing various data types, including images and configuration files. This strategic usage enables Smart Farms to optimize their data management and analysis processes, fostering effective and efficient operations.
### Node.js
Node.js is a JavaScript runtime environment used in web development for both client- and server-side scripting (Kurniawan et al., 2020). In the context of Smart Farm applications, Node.js serves as the backend technology for implementing the REST API. Its primary strengths lie in delivering strong responsiveness and performance, facilitating seamless integration with various devices, and optimizing applications to enhance efficiency and productivity within farming operations.
### MongoDB
MongoDB stands out as a NoSQL database designed for application development. Unlike traditional SQL databases, MongoDB stores documents in BSON (Binary JSON) format, enhancing readability (Bhaswara et al., 2017). Its architecture offers flexibility, enabling seamless database scaling and integration with a diverse array of programming languages.
In the context of Smart Farm applications, MongoDB proves invaluable as a data storage solution, particularly for managing access information of farmers who have registered accounts within the Smart Farm system.
### JSON
JSON stands for JavaScript Object Notation, a data exchange format introduced by Douglas Crockford in 2006 (Warsito et al., 2017) and designed to be understood by both computers and humans (Triawan & Prasetiyo, 2019). JSON has a lighter data size than XML, so the data exchange process becomes faster (Warsito et al., 2017). The JSON format is necessary when designing applications that use RESTful API and Web Service methods, as it allows efficient storage and data exchange between application components (Warsito et al., 2017).
### Black Box Testing
Black Box testing focuses on the functionality of the software without paying attention to implementation details (Wijaya & Astuti, 2021). The method involves observing execution results for given test data, thus emphasizing the system's functionality (Hanifah et al., 2012). A series of tests on inputs and functions is carried out to find errors so that the system can be improved and run as expected (Wijaya & Astuti, 2021; Wiradiputra et al., 2021). For testing the Smart Farm application, Black Box Testing with the Equivalence Partitions technique is used. This technique helps prepare test cases, test functionality, and detect input errors on the test page (Wijaya & Astuti, 2021). Testing is done by entering data that does not match the expected data type or by using random data (Hanifah et al., 2012).
## METHOD
The core of this research is the implementation of App Engine and Cloud Storage as REST API in the Smart Farm application. It involves concept understanding, requirement identification, designing, and practical implementation. Concept understanding is necessary to configure App Engine and Cloud Storage.
After understanding what is needed, the author will design a system so that the App Engine and Cloud Storage can be integrated into a REST API, making it easier to access data and services. This involves configuring the App Engine and Cloud Storage. Next is the technical implementation to develop the REST API using NodeJS, which will bridge the Smart Farm application and the cloud, as shown in Figure 1.
This step results in a working URL or endpoint to access data and services in Cloud Storage via the App Engine. Through this testing, there is great potential to improve the performance of the Smart Farm application and enable more efficient use of cloud technology.
The system architecture in this research helps organize how the required components, such as App Engine, Cloud Storage, REST API as a Web Service API, and MongoDB as a Database, will interact, as shown in Figure 2. This system architecture will be helpful in planning functionality or structure to achieve the desired goals. App Engine provides computing and storage services for web and mobile applications and runs applications created with various programming languages and frameworks. Cloud Storage provides data storage services in the cloud and supports various data types, including images. NodeJS, as a REST API, will provide an interface to access data and functionality from cloud services. The interaction on the App Engine will access data from Cloud Storage using NodeJS as a REST API; the App Engine will process the data and generate a response, and then the response will be sent to the user via the REST API.
The method that will be used in this research is the Black Box Testing method. Black Box Testing will focus on the functionality of the Endpoint generated by the App Engine without paying attention to details. The technique that will be used in this Black Box Testing is Equivalence Partitions. This technique will compile test cases and functionality and detect input errors on the test page (Wijaya & Astuti, 2021). Testing is done by entering data that does not match the data type or using random data (Hanifah et al., 2012).
## RESULT
### Endpoint Design
The following endpoint design includes the URI (Uniform Resource Identifier), which identifies the location of a resource on the web and processes input from the user; the Method, i.e. the HTTP request method applied to the URI; and the Process, i.e. what happens when the user accesses the URI, as shown in Table 2.
<table>
<thead>
<tr>
<th>Method</th>
<th>URI</th>
<th>Process</th>
</tr>
</thead>
<tbody>
<tr>
<td>GET</td>
<td>/</td>
<td>Main page</td>
</tr>
<tr>
<td>POST</td>
<td>/api/v1/users/signin</td>
<td>User logging in</td>
</tr>
<tr>
<td>POST</td>
<td>/api/v1/users/signup</td>
<td>Registering user into the database</td>
</tr>
<tr>
<td>POST</td>
<td>/api/v1/uploads</td>
<td>User uploads an image</td>
</tr>
</tbody>
</table>
### Cloud Computing Implementation
The result of the App Engine implementation is that the NodeJS script created and deployed on App Engine displays the main page of the REST API in response to an HTTP GET request, as shown in Figure 3.
The result of the Cloud Storage implementation is a bucket that stores the images uploaded by users in the Smart Farm application, as shown in Figure 4.
After the design stage, the author implemented each endpoint, integrating App Engine with the database for user login and registration in the Smart Farm application. Each endpoint was tested using the Postman application, which manages HTTP requests and HTTP methods for the REST API.
The upload payload returns JSON with a message that the image has been successfully uploaded to Cloud Storage, together with data containing an accessible URL for the image, as shown in Figure 5.
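As an illustrative sketch only (the paper does not include its source code; the Express framework, the multer middleware, the bucket name, and the URL format below are assumptions, while the paper itself only states that NodeJS and Cloud Storage are used), the upload endpoint could look roughly like this:

```typescript
// Hypothetical sketch of the image-upload endpoint (POST /api/v1/uploads).
// Express, multer, and @google-cloud/storage are assumed dependencies;
// the bucket name is invented for illustration.
import express from "express";
import multer from "multer";
import { Storage } from "@google-cloud/storage";

const app = express();
const upload = multer({ storage: multer.memoryStorage() });
const bucket = new Storage().bucket("smart-farm-uploads"); // assumed name

app.post("/api/v1/uploads", upload.single("image"), async (req, res) => {
  if (!req.file) {
    return res.status(400).json({ status: "fail", message: "No image sent" });
  }
  // Store the image in the Cloud Storage bucket under a unique name.
  const blob = bucket.file(`${Date.now()}-${req.file.originalname}`);
  await blob.save(req.file.buffer, {
    metadata: { contentType: req.file.mimetype },
  });
  // Respond with JSON containing an accessible URL, as described for Figure 5.
  res.json({
    status: "success",
    data: { url: `https://storage.googleapis.com/${bucket.name}/${blob.name}` },
  });
});

app.listen(8080);
```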
The payload produces JSON with a success status, meaning the server successfully received the request made when the user entered data into the Smart Farm application, as shown in Figure 6. The payload token is a JSON Web Token (JWT), which gives the user an identifier indicating that they have logged in. The payload data contains the details the user entered during registration; the password is encrypted with the crypt method. The data is added to MongoDB as collections, as shown in Figure 7.
Next, the author logs in to the Smart Farm application. The resulting payload is a message confirming a successful login, with the identifier carried by the JSON Web Token, as shown in Figure 8.
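Similarly, a minimal sketch of the sign-in endpoint, assuming Express, a Mongoose User model, bcrypt as the "crypt method", and an environment-supplied JWT secret (the paper only states that Node.js, MongoDB, and JWT are used):

```typescript
// Hypothetical sketch of the sign-in endpoint (POST /api/v1/users/signin).
import express from "express";
import bcrypt from "bcrypt";
import jwt from "jsonwebtoken";
import { User } from "./models/user"; // hypothetical Mongoose model

const app = express();
app.use(express.json());
const JWT_SECRET = process.env.JWT_SECRET ?? "change-me"; // assumption

app.post("/api/v1/users/signin", async (req, res) => {
  const { email, password } = req.body;
  const user = await User.findOne({ email });
  // Compare the submitted password against the stored encrypted one.
  if (!user || !(await bcrypt.compare(password, user.password))) {
    return res.status(401).json({ status: "fail", message: "Invalid credentials" });
  }
  // Issue a JWT so later requests can identify the logged-in user.
  const token = jwt.sign({ id: user._id }, JWT_SECRET, { expiresIn: "1d" });
  return res.json({ status: "success", token });
});

app.listen(8080);
```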
### Testing
After the implementation stage, the next step is to test the API with the black box method. This stage aims to determine whether each function of the application runs well and conforms to the expected output. Testing is carried out on each endpoint, with the results presented in Table 3 below; a sketch of one such test follows the table.
<table>
<thead>
<tr>
<th>Testing Activities</th>
<th>Test Case</th>
<th>Expected Outputs</th>
<th>Results</th>
<th>Conclusion</th>
</tr>
</thead>
</table>
**DISCUSSION**
This research has focused on developing a Smart Farm application by integrating App Engine and Cloud Storage behind a REST API. The endpoint design, covering URIs, HTTP methods, and processes, provides a clear structure for the application's interaction with the server and database. The implementation of App Engine and Cloud Storage was carried out successfully, and testing shows that the Smart Farm application is well designed and offers potential for overall performance improvement. Despite this success, the research also identified some limitations. Application scalability may become a challenge if the number of users increases significantly. Data security and privacy also require further attention, especially for farmer data and plant images stored on Cloud Storage, since the buckets used are public. The use of the API in this research is limited to login, user account registration, and image uploading, leaving room for further development in future research. Compared with previous research, this work shows progress in endpoint design and cloud computing integration, and the use of JSON Web Token (JWT) for user authentication reflects a stronger security approach than traditional methods. The research also contributes positively to sustainable agriculture through the utilization of API technology and cloud computing, and provides a basis for further development.
**CONCLUSION**
Based on the research results, App Engine and Cloud Storage can be designed and implemented correctly in the Smart Farm application. The stages of this research comprised problem identification, data collection and literature study, design, implementation, and testing with the black box method. The API allows the Smart Farm application to exchange data with the database and store images on Cloud Storage, so that registering user accounts and uploading images both work properly. The black box test results show that every API endpoint functions according to the expected output: user registration, user login, and image upload all run correctly via the POST method.
**REFERENCES**
Analyzing Client-Side Interactions to Determine Reading Behavior
David Hauger\textsuperscript{1} and Lex Van Velsen\textsuperscript{2}
\textsuperscript{1}Institute for Information Processing and Microprocessor Technology
Johannes Kepler University, Linz, Austria
\textsuperscript{2}Department of Technical and Professional Communication
University of Twente, Enschede, Netherlands
Abstract
Traditional monitoring and user modeling techniques in adaptive hypermedia systems consider pages as atomic units although different sections may refer to different concepts. This has been mainly due to the fact that most user interactions being monitored referred to the request of a new document and there was too little activity information to differentiate between sections of a page. Client-side monitoring can provide additional information on user interactions inside the browser window and may relate them to areas within a document. A user study was carried out to show whether and how this data might be used to identify which parts of a page have been read.
1 Introduction
It has been a widely accepted fact for several years now that “the user can prefer some nodes and links over others and some parts of a page over others” [Brusilovsky, 1996]. Opening a page does not necessarily mean that a user read all its contents. Consequently, adaptive hypermedia systems (AHS) should monitor these nodes separately to tell (a) how much of a page has been read, and (b) what parts of a page have been read or are of particular interest, especially if they concern different topics.
Most AHS try to (partially) meet these demands by monitoring requests to the server, which makes it possible to determine the links a user followed. Nevertheless, concerning text nodes (or links that have not been followed), most AHS treat pages as atomic items. Elaborate algorithms try to add additional information to user models by analyzing requests (e.g., to calculate the estimated “time spent reading” based on the time difference between requests [Farzan and Brusilovsky, 2005]), but there are hardly any attempts to treat different parts of a page separately [Hauger, 2008].
The approach put forward in this paper shows how monitoring user interactions inside the browser could help to overcome these limitations. A user study has been carried out to determine how users interact and how it is possible to determine whether a page has been read.
2 Related Work and State of the Art
Traditional user modeling techniques of AHS log requests of resources on the server and use this information as a basis for modeling. However, most interactions of users do not cause requests to the server (mouse movements, scrolling, etc.) and are therefore not monitored.
Several attempts have been made to use client-side interactions in AHS. Hijikata [Hijikata, 2004] showed that text tracing, link pointing, link clicking and text selection are an indicator for interest. Goecks and Shavlik [Goecks and Shavlik, 2000] defined a “level of activity” based on mouse and scrolling activities monitored via JavaScript. They used it for a neural network inside the browser. Hofmann et al. [Hofmann et al., 2006] sent timestamps of interactions to the server to calculate periods of inactivity.
Claypool et al. [Claypool et al., 2001] developed “The Curious Browser” to log interaction events inside the browser. The results were used to establish a connection between user interaction and the level of interest. Although this solution is effective, it is not ideal because in order to be able to use client-side information in common e-learning situations, additional hardware and software requirements should be avoided and standard technologies should be used for monitoring and transmitting the data. Putzinger [Putzinger, 2007] used mouse and keyboard events on input elements to determine the “focus of attention”. This information has been sent to the server to adaptively provide help.
Nevertheless, most systems referred to pages as a whole. Differentiating between sections requires new monitoring techniques. Eye-tracking is one possibility to identify the locus of attention [Conati et al., 2007]. As the applicability of this approach is limited due to additional hardware and software requirements, other solutions using standard technologies need to be found.
Client-side user monitoring as described in [Hauger, 2009] is able to (a) retrieve additional information on user interactions and (b) treat different sections of a page separately. The work described in this paper tries to find out whether and how the information that can be retrieved may be used to determine which parts of a page have been read.
3 Client-Side User Monitoring
In order to overcome the limitations of traditional approaches using server-side logs as the only source of information, the monitoring process itself could be improved by monitoring activities within the browser window [Hauger, 2008]. For this reason a JavaScript library has been developed which monitors these client-side events and maps them to parts of a page [Hauger, 2009].
3.1 Page Fragmentation
Different sections of a page in an AHS might need to be treated separately. As exact mouse positions might be difficult to compare and evaluate, alternative segmentation techniques need to be considered that are robust to changes
in the size and topology of page elements. The library that has been developed supports different approaches to split pages:
- **split by vertical position**: Independently from the actual content a page may be vertically divided into \( k \) segments; each one representing \( \frac{1}{k} \) of the page. This type of segmentation may be used to calculate how much of a document has been read and it may easily be applied for static and unstructured pages.
- **split by content type**: In order to identify learning style preferences it is for example possible to monitor images separately to make assumptions on whether users prefer textual or graphical content.
- **split by semantic meta data available**: If there is already semantic meta data available (concepts, keywords, etc.), it is possible to monitor items including such additional information and relate the activity information to this data.
- **split by source**: For “composed” pages with items derived from multiple sources it is possible to automatically link user interactions to the original source of the fragment.
- **split by structural information**: Structural information (if available) like headlines may be used to distinguish between different sections of a page.
- **add custom fragments**: In addition to all mentioned splitting techniques, each HTML element may (even at runtime) be manually defined as a fragment that has to be monitored.
3.2 Monitored Interactions
As JavaScript is used to monitor interactions, the library logs the events already available (including mouse moves, clicks, keyboard activities, scrolling, window resizing and window events like focus and blur) and uses them for further processing (e.g. for mapping positions to fragments). In addition to those predefined JavaScript events, a number of custom events have been created, e.g. to identify text selections (which may be used to identify text tracing) and inactivity (no interactions for a longer period of time), as well as to derive events on a temporal basis. The monitored variables are, in detail:
- **visible time**: The time a fragment has been visible on the screen. This can be regarded as a requirement for reading. Printing a page, saving it for offline use, etc. may allow reading parts of a text that were never visible within the browser window, but this may be regarded as an exception.
- **mouse over time**: The total time the mouse has been placed above a specified fragment. Some people place their mouse above the text they are currently reading. Therefore this is being monitored to check whether it can really be used as indicator to identify reading.
- **mouse on same y time**: The total time the mouse has been placed within the vertical borders of a fragment. This is similar to the “mouse over time”, but ignoring the horizontal position of the mouse. If there is only one (“main”) column of text (as in the current experiment), the two variables should be similar. For two or more columns there might be differences, e.g. if a user always places the mouse on the right side of the screen, independent from the horizontal position
at which the user is reading. This, however, will be part of future work.
- **number of mouse moves**: Amount of mouse moves taking place above the current fragment. Mouse moves within 500ms have been regarded as a single mouse movement. Passing an item with the mouse in less than half a second has not been counted.
- **number of clicks**: The total amount of clicks performed on the fragment.
- **number of text selections**: Counting how often a user has selected text within a specific fragment.
The main premise of the work described in this paper is that based on these interactions it should be possible to draw additional assumptions on users’ reading behavior, interests, etc.
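To make this concrete, a minimal server-side sketch (ours, not part of the described library) could aggregate incoming events into the variables above, assuming each event arrives as a dictionary with user, fragment, type and, for temporal events, a duration in seconds:

```python
from collections import defaultdict

def aggregate(events):
    """Aggregate client-side events per (user, fragment) into the
    variables of section 3.2. The event schema is an assumption."""
    stats = defaultdict(lambda: {"visible_s": 0.0, "mouse_over_s": 0.0,
                                 "moves": 0, "clicks": 0, "selections": 0})
    for ev in events:
        s = stats[(ev["user"], ev["fragment"])]
        if ev["type"] == "visible":
            s["visible_s"] += ev["duration"]
        elif ev["type"] == "mouseover":
            s["mouse_over_s"] += ev["duration"]
        elif ev["type"] == "mousemove":   # moves within 500 ms already merged
            s["moves"] += 1
        elif ev["type"] == "click":
            s["clicks"] += 1
        elif ev["type"] == "select":
            s["selections"] += 1
    return stats
```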
4 User Study
In order to determine how client-side user interactions and reading behavior are related, a user study with 53 volunteers has been carried out. The results of client-side user monitoring should be compared to explicit feedback given by the users. The main goal was the identification of client-side user activities that may be used to identify which parts of a page have been read.
A single page containing a number of news items (20–23) from an Austrian news page (http://oesterreich.orf.at) has been provided. Each item consisted of a thumbnail (width: \( \approx 100 − 150px \)) on the left side and a headline with 4 – 6 lines (\( \approx 20 − 40 \) words) of additional text (short summary of an article) next to it. Internally, the page was split automatically in order to monitor each news item separately. As the system should focus on interaction information that cannot be gained through server-side monitoring, links to the extended articles were disabled. The page was updated twice a day to increase the probability users have not read the news before, which should result in higher interest.
The study itself was entirely anonymous – the participants were not even asked to enter demographic information. Participation was possible via the web. On a first page the experiment was explained and users were instructed to read only whatever interested them, as if they were visiting the news page in a normal context of use.
While they were reading, their interactions within the browser were monitored using the library mentioned in section 3. Information on the absolute location of events were mapped to the news items to be able to compare them later on. The preprocessed events were sent to the server and stored in a database, as well as the values for the variables mentioned in section 3.2 (per user and news item). In addition to this, the total time for a page being requested was recorded, which is the only information that could have been retrieved by server-side monitoring as well.
After reading, the users were asked to fill in a small questionnaire. For each news item they had to state whether they read this item, glanced at it or skipped it. The page for reading and the feedback form are shown in Figure 1.
It has to be stated that the feedback of the users is subjective and there may be differences in what single users regarded as reading, glancing or skipping.
The evaluation of the results should show how reading and client-side interactions within the browser are correlated. The final goal is the establishment of an algorithm that is able to tell with a satisfactory level of certainty whether a fragment has been read or not. Although the scope of the experiment is not sufficient for getting exact values and parameters for an overall algorithm, this user study should show directions towards creating it.
5 Results
A total of 53 participants completed the questions related to the news items. They provided feedback on 20 to 23 items, with an average of 22.32 items. The participants spent, on average, 2 minutes and 9 seconds reading the news page, with a standard deviation of 2 minutes and 36 seconds.
The items related to user feedback (item skipped, glanced or read) were scored in a dummy variable to enable data analysis. Each feedback option was made into a separate variable with a score of either 0 (item not skipped, not glanced or not read) or 1 (item skipped, glanced or read). The responses were based on the participants’ subjective assessment of their own behavior, and thus there might be differences in what users regarded as read, skipped or glanced at. For some users, skipping meant not even scrolling down to the bottom, while others showed quite some interactions with items they marked as skipped. Additionally, “reading” for some users meant “reading carefully”, while others marked items as read that were visible for four seconds only. Nevertheless, the results should be able to point out how information on client-side interactions could be used to identify reading.
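For illustration, this dummy coding can be reproduced with pandas (a minimal sketch assuming the feedback is available as a single categorical column; the column name is ours):

```python
import pandas as pd

# One row per (participant, item); "feedback" is one of the three options.
df = pd.DataFrame({"feedback": ["read", "skipped", "glanced", "read"]})
dummies = pd.get_dummies(df["feedback"]).astype(int)
print(dummies)  # one 0/1 column each for "glanced", "read" and "skipped"
```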
Table 1 displays the minimum, maximum, mean and standard deviation of all the recorded variables. The table shows that the mouse cursor was, on average, just a few seconds above each item or on the same y-level. The time items were visible on screen differed widely, with an average of 25.15s and a standard deviation of 21.14s.
<table>
<thead>
<tr>
<th>variable</th>
<th>N</th>
<th>min</th>
<th>max</th>
<th>mean</th>
<th>std</th>
</tr>
</thead>
<tbody>
<tr>
<td>mouse time above item</td>
<td>1183</td>
<td>0</td>
<td>64</td>
<td>7.45</td>
<td>5.66</td>
</tr>
<tr>
<td>mouse time on same y-level as item</td>
<td>1183</td>
<td>0</td>
<td>64</td>
<td>4.79</td>
<td>5.99</td>
</tr>
<tr>
<td>time item is visible in browser window</td>
<td>1183</td>
<td>0</td>
<td>120</td>
<td>25.18</td>
<td>21.14</td>
</tr>
<tr>
<td>amount of mouse moves above item</td>
<td>1183</td>
<td>0</td>
<td>50</td>
<td>1.28</td>
<td>2.94</td>
</tr>
<tr>
<td>number of mouse clicks on item</td>
<td>1183</td>
<td>0</td>
<td>9</td>
<td>0.11</td>
<td>0.65</td>
</tr>
<tr>
<td>number of text selections inside item</td>
<td>1183</td>
<td>0</td>
<td>2</td>
<td>0.01</td>
<td>0.09</td>
</tr>
</tbody>
</table>
Table 1: Descriptives of assessed variables
Finally, more than half of the news items (57%) were skipped, about one quarter was glanced at (23%) and 20%, on average, was read by the participants.
The first step in determining which factors influence item skipping, glancing or reading behavior was to assess the correlations among the variables. The results can be found in Table 2. It shows correlations between all the recorded mouse actions and time measurements and item skipping or reading behavior. Participants’ glancing behavior is not correlated with any of the assessed variables. In other words, item glancing behavior cannot be predicted with any of the measured variables. All measured variables have a positive correlation with reading and a negative one with skipping items. This shows that they might be used to determine whether something has been read or skipped.
Table 2: Correlations among variables. [The table body could not be recovered from the source; it reported, for each recorded variable, the correlation coefficient ($\rho$) and significance ($p$) with item skipping, glancing and reading behavior.]
However, the direct correlation between the assessed variables and reading behavior is not very strong, which is due to the fact that reading an item does not necessarily result in observable interactions. Nevertheless, the variables may be used as unidirectional indicators for reading behavior. One example is the selection of text. If text has been selected, the item has definitely not been skipped. However, as in 99.7% of the presented items no selection of text took place, the lack of text selections does not give any information at all.
Similarly, all assessed variables have been analyzed to find implications to be derived from the observed data. Table 3 shows how often information on client-side behavior could be retrieved and how measuring interaction times or the occurrence of interactions were related to users’ responses on whether an item has been read.
Table 3: Occurrence of interactions. [The table body could not be recovered from the source; it reported how often each variable (mouse time above item, mouse time on same y-level, visible time, mouse moves, clicks, text selections) took a value $>0$ and how this related to items being read.]
Generally it may be said that the observation of client-side interactions at least doubles the probability that an item has been read. 80% of the items with no monitored interactions or an interaction time $< 0.5s$ (rounded to 0) have not been read and most of them have been skipped.
However, half of the items where interaction times have been measured or mouse moves have been monitored have been skipped as well. Therefore, the second part of the current section consists of a closer analysis of the assessed variables and should show how higher activity values correspond to a higher probability that something has been read.
**Analyzing Mouse Over Time** 52.8% of all items that have been read had a mouse over time of more than 3 seconds. Items with a total mouse over time of more than 8 seconds (12.9% of all cases) have a 0.50 probability of having been read and a 0.77 probability that the item has not been skipped.
As shown in Figure 2 a higher mouse over time goes along with a higher probability that an item has been read.

**Analyzing Vertical Mouse Position** The time the mouse cursor has been placed at a vertical position within the borders of a news item is similar to the mouse over time, and so are the results (see Figure 3).
Compared to the mouse over time, the probability of the mouse never having been on the same y position as an item is lower (of course, as hovering an item implies that the mouse is also at the same vertical position). The conditions mouse over < 1s and same y < 2s cover more or less the same test cases. Generally, the small differences between mouse over time and the vertical mouse position lead to the assumption that users who placed their mouse cursor inside the page tended to place it above the news items. However, this effect might have been different if more items had been placed next to each other at the same vertical position. Further work needs to be done to tell whether the y-position of the mouse or exact hovering is more significant in a different context.

**Analyzing Visibility Time** 81.1% of the items that have been visible for less than 5 seconds (13.4% of all cases) have been marked as skipped. The probability that an item has not been read (i.e. skipped or glanced at) if it has been visible for less than 5 seconds is 0.93. Only 1% of all items have been marked as read and were visible for less than 5 seconds (no surprise, as items have to be visible to be read).
Other than this the visibility time does not provide any relevant information. As shown in Figure 4 the probability that an item has been read increases only slightly with a higher visibility time. This increase is definitely not sufficient for drawing further conclusions.

Nevertheless, taking into account the screen size and consequently the number of items displayed at the same time, it might have been possible to derive a weighted metric combining visible time with screen size that might have been more informative than the time by itself. Moreover, the relative position of the item within the screen might give additional information if it can be found that for instance users tend to read text that is displayed in the center of the screen. These two aspects will be considered in future experiments.
**Analyzing Mouse Moves** 91.1% of the skipped items had 2 or fewer registered mouse moves and 98.5% of all skipped items had 5 or fewer registered mouse moves. Moreover, 54.8% of the items that have been read had at least one registered mouse move. No registration of mouse moves is a good indicator for having been skipped, and a high number leads to the assumption that something has been read. Only 0.8% of all monitored news items have been marked as skipped despite having more than 5 mouse moves.
Detailed information can be found in Figure 5.

**Analyzing Click Events** Clicks have been registered in only 5.4% of the cases. However, 70.3% of these items have been marked as read. Only 0.7% of the test cases showed clicks despite having been marked as skipped. This shows that although clicks do not occur frequently, they are a strong indicator that something has been read.
**Analyzing Select Events** As already mentioned, text selections occur even less frequently than click events (only 0.3% of all test cases). Nevertheless, text selections are the strongest indicator for reading, and none of the items where text selections took place has been marked as skipped.
6 Towards an Algorithm
The results of this user study show that information on client-side user interaction is definitely suited for determining which parts of a page have been read or skipped. However, the observed variables provide different types of information. In some cases (especially interaction times) the lack of information is an indicator for skipping, and in others (especially interactions) there is little probability that something can be observed, but if interactions have been monitored they serve as an indicator for reading. The visibility time works very well for identifying skipped items, but high visibility times do not really increase the probability that an item has been read (although, as discussed in the previous section, this effect might be reduced by considering the size of the browser window and the relative position of the items within). Clicks and text selections help to identify read items, but do not work for identifying skipped items.
Based on this information it may be said that when looking for an algorithm returning a probability that an item has been read, linear algorithms are definitely not the best choice. Linear models can still be informative, though, in terms of the viability of using specific factors and indices in the algorithm. To explore this premise we started our analysis using the following composite metric (which was only intended to give a quick impression of whether the variables might be suited to analyze reading behavior): \((1 + \text{mouse over time}) \cdot \text{visible time} \cdot (1 + \text{mouse moves}) \cdot (1 + \text{clicks})\). The value for the mouse on the same y position is part of the mouse over time, and text selections hardly ever occurred, so these two variables have been left out. If the visible time is 0 the item can be regarded as skipped, but for all other variables even a value of 0 could mean it has been read – depending on the other variables. Therefore 1 has been added to those variables. Using this simple algorithm 68% of all read items had a value above 108 and 68% of the skipped items had a value below 108. These values are of course specific to the experimental data at hand, and would in all likelihood differ significantly in other cases. However, the results do indicate that these factors do indeed have discriminatory capacity and, possibly in an appropriately weighted form, can indeed be used as the basis for an algorithmic approach.
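As an illustration, the composite metric and the data-specific threshold of 108 can be written as follows (a minimal Python sketch; the function and argument names are ours, not the authors'):

```python
def composite_score(mouse_over_s, visible_s, mouse_moves, clicks):
    # "Same y" time is subsumed by mouse over time, and text selections
    # hardly occurred, so both are left out; the +1 terms keep a zero in a
    # single variable from collapsing the whole product.
    return (1 + mouse_over_s) * visible_s * (1 + mouse_moves) * (1 + clicks)

# 108 was the separating value for this particular data set only.
def probably_read(mouse_over_s, visible_s, mouse_moves, clicks):
    return composite_score(mouse_over_s, visible_s, mouse_moves, clicks) > 108
```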
Having established at least some of the factors that an algorithm could incorporate, we turned our attention to the nature of the algorithm that could be used to identify page segments that had been read. The primary design requirements were:
- **real-time:** The algorithm should be fast enough to provide just-in-time information for several users while continuously monitoring user interactions.
- **predictive:** The algorithm should be able to handle continuously updating information without relying on an analysis after a user left the page.
- **white box:** The algorithm should consist of semantically understandable parts in order to be able to extend the algorithm and add factors later (or set different factors for different contexts).
Based on these requirements we decided to direct our attention to rule-based approaches, Bayesian networks and decision trees, as well as hybrid approaches comprising the above and potentially complementary ones as well.
In order to find a more appropriate algorithm the open source data mining software Weka [Witten and Frank, 2005] has been used applying different machine learning algorithms for classification. This software may be used to automatically generate models for classification algorithms by using a subset of the raw data. The other part of the raw data is used to evaluate the deriving models in order to determine how well data sets can be classified. The exact way in which the data set is split affects the performance of the algorithms. Therefore, a 100-fold cross validation has been used, i.e. the data set is randomly split 100 times and the result refers to the average value for all test cases.
The data from the user study was used to get an algorithm for predicting whether an item has been read fully or not. For the purposes of the analysis presented herein, “glanced at” and “skipped” have been combined to a single group. For each of these two classes the number of correctly classified items has been calculated as well as the precision (the probability that the item has really been read / not read, if the algorithm classified it this way).
Most algorithms had an overall precision of \(\approx 80\%\). They showed good results especially for identifying items that have not been read. More than 95% of the “not read” items have been classified correctly (with an overall precision above 80%). However, the algorithms were less successful in identifying items that have been read. The total precision for items classified as “read” was \(\approx 60\% - 70\%\) and only 15% – 30% of the read items have been correctly classified as read.
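The authors used Weka for this evaluation; purely as an illustration, an analogous setup can be sketched in Python with scikit-learn, where 100 random splits approximate the repeated random splitting described above (the classifier choice, its parameters and the placeholder data are assumptions):

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: rows of [mouse_over, same_y, visible, moves, clicks,
# selections]; y is 1 = read, 0 = not read (glanced and skipped merged).
rng = np.random.default_rng(0)
X = rng.random((1183, 6))
y = rng.integers(0, 2, 1183)

# 100 random splits, mirroring the repeated random splitting described above.
cv = ShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(max_depth=4), X, y,
                         cv=cv, scoring="precision")
print(scores.mean())
```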
As an example, three different classifiers will be discussed in detail. They have all been tested using a 100-fold cross validation in Weka. The results are listed in Table 4.
One simple approach for classification is a rule based algorithm:
\[
\text{read} = (\text{mouse\_moves} \geq 2 \wedge \text{same\_y} \geq 13) \vee (\text{visible\_time} \geq 17 \wedge \text{clicks} \geq 1)
\]
The highest precision for read items was reached by the Bayesian Network shown in Figure 6. On the other hand it only classified 16% of the read items correctly.
Figure 6: model for setting up a Bayesian Network
The highest average precision and the highest percentage of correctly classified read items was reached by the decision tree shown in Figure 7.
The above results clearly indicate that more work needs to be expended in devising a generic algorithm, as well as in understanding how different interaction- and context- characteristics influence the significance of the identified factors (and of how to incorporate these varying levels of significance in the algorithm itself). Nevertheless, the results seem promising in terms of being able to use client-side interactions to make assumptions on reading behavior.
7 Ongoing Work and Future Perspective
As a next step an experiment comparing the current work with eye-tracking will be performed. This should show how mouse positions are related to the locus of attention and whether client-side monitoring could provide parts of the information available through eye-tracking. Moreover, it should show whether users have preferences concerning the relative position of what they are currently reading (i.e. whether they focus more on elements that are displayed at the center of the screen). If this is found to be relevant, the library will be extended to monitor the relative position of page fragments on the screen and get more fine-grained information on the visibility time.
Furthermore, the library will be extended to get more fine-grained information on user behavior. This includes monitoring the scrolling speed and the size of the browser window. As the number of items visible in parallel depends on the window size (e.g.: big screen vs. mobile device), this may help to better use the visible time (fewer items on a screen increase the probability for a single one being read). Moreover, client-side monitoring should be used in different contexts; the way of reading a news page may be different from reading text in an e-learning course.
As a strong correlation between “mouse over” and “mouse on same y” has been found, it has to be tested whether this is also true if several items are placed at the same vertical position or whether it is possible to ignore information on the horizontal position of the mouse.
Another important factor for future research is the length of the text within a single item. This length is important to estimate the time required for reading. For the current experiment only elements of almost the same structure and length have been used to reduce the complexity of the test. The average reading speed as well as the estimated personal reading speed in relation to the length of the text comprised by a monitored page fragment are additional factors that we believe may need to be considered as factors and incorporated into the algorithm. Based on the estimated required reading time the visibility times and interaction times could possibly provide additional information.
The main work however lies in the further development of an algorithm (or a number of algorithms that work for different contexts). The results of this work should be integrated into a version of AHA [De Bra and Ruiter, 2001] running in the open source learning platform Sakai [Sakai, 2009] to provide the findings of ongoing research for a larger audience and help to improve existing AHS.
Acknowledgments
Part of the work reported in this paper was funded by the “Adaptive Support for Collaborative E-Learning” (AS-COLLA) project, supported by the Austrian Science Fund (FWF; project number P20260-N15).
References
This article is not for beginners
BPF has evolved
Many docs were true in 2015 but not today
... not the best start
Newcomers keep re-posting old info as new
... out of date
This talk is Jun 2021
BPF is no longer an acronym.
BPF is a bytecode and execution environment.
How to get quick and easy BPF performance wins
Think like a sysadmin
Not like a programmer
```python
#!/usr/bin/python
from bcc import BPF

# define BPF program (C source compiled and verified in the kernel)
prog = """
int hello(void *ctx) {
    bpf_trace_printk("Hello, World!\\n");
    return 0;
}
"""

# load BPF program and attach it to the clone() syscall
b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
```
Think like a **sysadmin**
Get it installed everywhere and use it.
```
# apt-get install bcc-tools
# PATH=$PATH:/usr/share/bcc/tools
# execsnoop Anything periodic running? crontab?
# opensnoop Any misconfigurations? File not found?
# tcplife Any unexpected TCP sessions?
# ext4slower Any file system I/O slower than 10ms?
# biosnoop Any unusual disk access patterns? Outliers?
[...]
```
# iostat -xz 1
Linux 4.15.0-1052-aws (cass-xxx) 12/04/2019 _x86_64_ (8 CPU)
<table>
<thead>
<tr>
<th>avg-cpu: %user</th>
<th>%nice</th>
<th>%system</th>
<th>%iowait</th>
<th>%steal</th>
<th>%idle</th>
</tr>
</thead>
<tbody>
<tr>
<td>4.67</td>
<td>0.13</td>
<td>0.25</td>
<td>0.00</td>
<td>0.00</td>
<td>94.95</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Device</th>
<th>rrqm/s</th>
<th>wrqm/s</th>
<th>r/s</th>
<th>w/s</th>
<th>rkB/s</th>
<th>wkB/s</th>
<th>avgrq-sz</th>
<th>avgqu-sz</th>
<th>await</th>
<th>r-await</th>
<th>w-await</th>
<th>svctm</th>
<th>%util</th>
</tr>
</thead>
<tbody>
<tr>
<td>xvda</td>
<td>0.00</td>
<td>0.00</td>
<td>1.00</td>
<td>0.00</td>
<td>8.00</td>
<td>0.00</td>
<td>16.00</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<td>xvdb</td>
<td>0.00</td>
<td>0.00</td>
<td>32.00</td>
<td>0.00</td>
<td>152.00</td>
<td>0.00</td>
<td>9.50</td>
<td>0.16</td>
<td>5.00</td>
<td>5.00</td>
<td>0.00</td>
<td>5.00</td>
<td>16.00</td>
</tr>
<tr>
<td>xvdc</td>
<td>0.00</td>
<td>0.00</td>
<td>36.00</td>
<td>1.00</td>
<td>192.00</td>
<td>4.00</td>
<td>10.59</td>
<td>0.19</td>
<td>5.19</td>
<td>5.33</td>
<td>0.00</td>
<td>5.19</td>
<td>19.20</td>
</tr>
<tr>
<td>xvdd</td>
<td>4.00</td>
<td>0.00</td>
<td>37.00</td>
<td>0.00</td>
<td>181.00</td>
<td>0.00</td>
<td>9.78</td>
<td>0.18</td>
<td>4.97</td>
<td>4.97</td>
<td>0.00</td>
<td>4.54</td>
<td>16.80</td>
</tr>
<tr>
<td>xvde</td>
<td>0.00</td>
<td>0.00</td>
<td>27.00</td>
<td>0.00</td>
<td>156.00</td>
<td>0.00</td>
<td>11.56</td>
<td>0.18</td>
<td>6.67</td>
<td>6.67</td>
<td>0.00</td>
<td>6.67</td>
<td>18.00</td>
</tr>
<tr>
<td>xvdf</td>
<td>0.00</td>
<td>0.00</td>
<td>35.00</td>
<td>0.00</td>
<td>164.00</td>
<td>0.00</td>
<td>9.37</td>
<td>0.19</td>
<td>5.37</td>
<td>5.37</td>
<td>0.00</td>
<td>5.37</td>
<td>18.80</td>
</tr>
<tr>
<td>xvdg</td>
<td>0.00</td>
<td>0.00</td>
<td>25.00</td>
<td>1.00</td>
<td>136.00</td>
<td>4.00</td>
<td>10.77</td>
<td>0.14</td>
<td>5.23</td>
<td>5.44</td>
<td>0.00</td>
<td>5.23</td>
<td>13.60</td>
</tr>
<tr>
<td>md0</td>
<td>0.00</td>
<td>0.00</td>
<td>195.00</td>
<td>2.00</td>
<td>965.00</td>
<td>8.00</td>
<td>9.88</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
</tr>
</tbody>
</table>
[...]
## Case Study: BCC biosnoop
# /usr/share/bcc/tools/biosnoop
<table>
<thead>
<tr>
<th>TIME(s)</th>
<th>COMM</th>
<th>PID</th>
<th>DISK</th>
<th>T SECTOR</th>
<th>BYTES</th>
<th>LAT(ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.000000</td>
<td>perl</td>
<td>7755</td>
<td>xvdc</td>
<td>R 610822184</td>
<td>4096</td>
<td>8.57</td>
</tr>
<tr>
<td>0.000812</td>
<td>biosnoop</td>
<td>32196</td>
<td>xvda</td>
<td>R 269480</td>
<td>4096</td>
<td>0.43</td>
</tr>
<tr>
<td>0.006197</td>
<td>perl</td>
<td>3285</td>
<td>xvde</td>
<td>R 610737856</td>
<td>4096</td>
<td>6.10</td>
</tr>
<tr>
<td>0.006390</td>
<td>perl</td>
<td>23937</td>
<td>xvde</td>
<td>R 377704624</td>
<td>4096</td>
<td>0.10</td>
</tr>
<tr>
<td>0.015040</td>
<td>perl</td>
<td>7755</td>
<td>xvdb</td>
<td>R 732825200</td>
<td>4096</td>
<td>8.51</td>
</tr>
<tr>
<td>0.022842</td>
<td>perl</td>
<td>3285</td>
<td>xvdc</td>
<td>R 732953880</td>
<td>4096</td>
<td>7.72</td>
</tr>
<tr>
<td>0.023019</td>
<td>perl</td>
<td>23937</td>
<td>xvdb</td>
<td>R 732907064</td>
<td>4096</td>
<td>0.09</td>
</tr>
<tr>
<td>0.034443</td>
<td>perl</td>
<td>7755</td>
<td>xvdg</td>
<td>R 732998328</td>
<td>4096</td>
<td>11.28</td>
</tr>
<tr>
<td>0.039648</td>
<td>perl</td>
<td>23937</td>
<td>xvdg</td>
<td>R 733127392</td>
<td>4096</td>
<td>5.08</td>
</tr>
<tr>
<td>0.039863</td>
<td>perl</td>
<td>31913</td>
<td>xvdg</td>
<td>R 732868048</td>
<td>4096</td>
<td>0.10</td>
</tr>
<tr>
<td>0.049431</td>
<td>perl</td>
<td>3285</td>
<td>xvdg</td>
<td>R 732906896</td>
<td>4096</td>
<td>9.45</td>
</tr>
<tr>
<td>0.058521</td>
<td>perl</td>
<td>27565</td>
<td>xvdg</td>
<td>R 610744920</td>
<td>4096</td>
<td>8.99</td>
</tr>
<tr>
<td>0.070843</td>
<td>perl</td>
<td>27565</td>
<td>xvdg</td>
<td>R 377706520</td>
<td>4096</td>
<td>12.26</td>
</tr>
<tr>
<td>0.080564</td>
<td>perl</td>
<td>31913</td>
<td>xvdg</td>
<td>R 610951744</td>
<td>4096</td>
<td>9.62</td>
</tr>
<tr>
<td>0.080804</td>
<td>perl</td>
<td>7755</td>
<td>xvdc</td>
<td>R 732858664</td>
<td>4096</td>
<td>0.14</td>
</tr>
<tr>
<td>0.086932</td>
<td>perl</td>
<td>27565</td>
<td>xvdg</td>
<td>R 732937416</td>
<td>4096</td>
<td>6.01</td>
</tr>
<tr>
<td>0.087093</td>
<td>perl</td>
<td>27565</td>
<td>xvde</td>
<td>R 610853240</td>
<td>4096</td>
<td>0.09</td>
</tr>
</tbody>
</table>
## Case Study: BCC biosnoop, cont.
<table>
<thead>
<tr>
<th>UID</th>
<th>PID</th>
<th>PPID</th>
<th>C</th>
<th>STIME</th>
<th>TTY</th>
<th>CMD</th>
</tr>
</thead>
<tbody>
<tr>
<td>root</td>
<td>3285</td>
<td>3274</td>
<td>1</td>
<td>14:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>7755</td>
<td>7748</td>
<td>1</td>
<td>04:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>11366</td>
<td>11359</td>
<td>1</td>
<td>10:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>15054</td>
<td>15049</td>
<td>2</td>
<td>16:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>19675</td>
<td>19670</td>
<td>1</td>
<td>06:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>23937</td>
<td>23930</td>
<td>1</td>
<td>12:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>27565</td>
<td>27561</td>
<td>2</td>
<td>18:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>28232</td>
<td>28223</td>
<td>1</td>
<td>02:16</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
<tr>
<td>root</td>
<td>31913</td>
<td>31907</td>
<td>1</td>
<td>08:15</td>
<td>?</td>
<td>/usr/bin/perl /apps/...ec2rotatelogs.pl</td>
</tr>
</tbody>
</table>
[...]
Many more tools to try!
bcc tools
bpftrace tools
(from my book, open source)
Solve >90% of perf issues with canned observability (tracing) tools
This is BPF observability in one pic
Print on your office/home wall, use as a checklist
Suspected disk issue? Try these first:
Then walk up the stack: file system tools, VFS tools, syscall tools, etc.
The future of BPF perf observability ... is **GUIs**. The end user may not even know it’s BPF.
This GUI is in development by Susie Xia, Netflix
Example real-time BPF observability UI
Netflix Vector (now retired) uses this model
- Instance agent pcp pmdabcc
- https://github.com/Netflix/vector
Example real-time BPF observability UI #2
[Diagram: a client GUI talks to a web server that serves the GUI webpage; each application server runs an instance agent executing bpftrace tools.]
Netflix FlameCommander UI
• (not yet open source)
Think like a sysadmin
Please try to use bcc/bpftrace tools as-is and fetch regular updates
Many tools are sandcastles, and require frequent rebuilding to match kernel changes
Fortunately many companies and engineers maintain these versions
- Facebook
- Netflix
- Isovalent
- (many more)
Sysadmins sometimes program
shell scripting
awk
sed one-liners
bpftrace tools
bpftrace one-liners
Think like a **programmer** if
You have a real-world problem that tools don’t solve
You’re a BPF-based startup
You’re debugging your own code*
You’re doing networking/security/etc.
You really want to learn BPF internals
* JIT-ed runtimes like Java are currently complex to trace
Recommended tracing front-ends
- I want to run some tools: bcc, bpftrace (Unix analogy: /usr/bin/*)
- I want to hack up some new tools: bpftrace (Unix analogy: bash, awk)
- I want to spend weeks developing a BPF product: bcc libbpf C (new, lightweight, CO-RE & BTF based), bcc Python (requires LLVM; becoming obsolete / special-use only), gobpf, libbpf-rs (Unix analogy: C, C++)
# readahead.bt
Attaching 5 probes...
^C
Readahead unused pages: 128
Readahead used page age (ms):
@age_ms:
<table>
<thead>
<tr>
<th>age (ms)</th>
<th>count</th>
</tr>
</thead>
<tbody>
<tr>
<td>[1]</td>
<td>2455</td>
</tr>
<tr>
<td>[2, 4)</td>
<td>8424</td>
</tr>
<tr>
<td>[4, 8)</td>
<td>4417</td>
</tr>
<tr>
<td>[8, 16)</td>
<td>7680</td>
</tr>
<tr>
<td>[16, 32)</td>
<td>4352</td>
</tr>
<tr>
<td>[32, 64)</td>
<td>0</td>
</tr>
<tr>
<td>[64, 128)</td>
<td>0</td>
</tr>
<tr>
<td>[128, 256)</td>
<td>384</td>
</tr>
</tbody>
</table>
#!/usr/local/bin/bpftrace
kprobe:__do_page_cache_readahead    { @in_readahead[tid] = 1; }
kretprobe:__do_page_cache_readahead { @in_readahead[tid] = 0; }
kretprobe:__page_cache_alloc /@in_readahead[tid]/
{
    @birth[retval] = nsecs;
    @rapages++;
}
kprobe:mark_page_accessed /@birth[arg0]/
{
    @age_ms = hist((nsecs - @birth[arg0]) / 1000000);
    delete(@birth[arg0]);
    @rapages--;
}
END
{
    printf("\nReadahead unused pages: %d\n", @rapages);
    printf("\nReadahead used page age (ms):\n");
    print(@age_ms); clear(@age_ms);
    clear(@birth); clear(@in_readahead); clear(@rapages);
}
Fits on one slide!
BCC libbpf tool example
```bash
# ./opensnoop
PID COMM FD ERR PATH
27974 opensnoop 28 0 /etc/localtime
1482 redis-server 7 0 /proc/1482/stat
[...]
# ldd opensnoop
linux-vdso.so.1 (0x00007ffdddf3f1000)
libelf.so.1 => /usr/lib/x86_64-linux-gnu/libelf.so.1 (0x00007f9fb7836000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f9fb7619000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9fb7228000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9fb7c76000)
# ls -lh opensnoop opensnoop.stripped
-rwxr-xr-x 1 root root 645K Feb 28 23:18 opensnoop
-rwxr-xr-x 1 root root 151K Feb 28 23:33 opensnoop.stripped
151 Kbytes for a stand-alone BPF program!
(Note: A static bpftrace/BTF + scripts will also have a small average tool size)
CONFIG_DEBUG_INFO_BTF=y
E.g., Ubuntu 20.10, Fedora 30, and RHEL 8.2 have it
BPF Future: Event-based Applications
- User-mode Applications
- Kernel-mode Applications (BPF)
- Scheduler
- Kernel Events
- Hardware Events (incl. clock)
# A New Type of Software
<table>
<thead>
<tr>
<th></th>
<th>Execution model</th>
<th>User defined</th>
<th>Compilation</th>
<th>Security</th>
<th>Failure mode</th>
<th>Resource access</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>User</strong></td>
<td>task</td>
<td>yes</td>
<td>any</td>
<td>user based</td>
<td>abort</td>
<td>syscall, fault</td>
</tr>
<tr>
<td><strong>Kernel</strong></td>
<td>task</td>
<td>no</td>
<td>static</td>
<td>none</td>
<td>panic</td>
<td>direct</td>
</tr>
<tr>
<td><strong>BPF</strong></td>
<td>event</td>
<td>yes</td>
<td>JIT, CO-RE</td>
<td>verified, JIT</td>
<td>error message</td>
<td>restricted helpers</td>
</tr>
</tbody>
</table>
Take Away
To get started with **BPF performance wins**, think like a **sysadmin**:
1. Install BCC & bpftrace tools
2. Run them
3. Get some wins
References
This is also where I recommend you go to learn more:
- https://github.com/iovisor/bcc/blob/master/docs/tutorial.md
- https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md
- BPF Performance Tools, Addison Wesley 2020
- Systems Performance 2nd Edition, Addison Wesley 2021
- https://ebpf.io/what-is-ebpf
Thanks
**BPF**: Alexei Starovoitov (Facebook), Daniel Borkmann (Isovalent), David S. Miller (Red Hat), Jakub Kicinski (Facebook), Yonghong Song (Facebook), Martin KaFai Lau (Facebook), John Fastabend (Isovalent), Quentin Monnet (Isovalent), Jesper Dangaard Brouer (Red Hat), Andrey Ignatov (Facebook), Stanislav Fomichev (Google), Linus Torvalds, and many more in the BPF community
**BCC**: Brenden Blanco (VMware), Yonghong Song, Sasha Goldshtein (Google), Teng Qin (Facebook), Paul Chaignon (Isovalent), Vicent Martí (PlanetScale), and many more in the BCC community
**bpftrace**: Alastair Robertson (Yellowbrick Data), Dan Xu (Facebook), Bas Smit, Mary Marchini (Netflix), Masanori Misono, Jiri Olsa, Viktor Malík, Dale Hamel, Willian Gaspar, Augusto Mecking Caringi, and many more in the bpftrace community
Parallelization of Stochastic-Local-Search Algorithms using High-Level Parallel Patterns
Michael Rossbory
Software Competence Center Hagenberg
michael.rossbory@scch.at
Georgios Chasparis
Software Competence Center Hagenberg
georgios.chasparis@scch.at
Abstract
Mathematical models for optimization can help companies optimize their production and planning processes and thereby reduce costs and increase quality. But applying such models effectively is challenging: developers need expertise in mathematics as well as skills in software development to implement them. Furthermore, optimization algorithms are inherently computationally intensive. Parallelization reduces this computation time considerably, but adds additional complexity, especially when low-level parallelization techniques are applied, so developers would have to be experts in concurrent programming, too.
In this paper we present a stochastic-local-search algorithm to solve such an optimization problem from industry, encountered in the slitting of metal sheets used in the production of electrical transformers. Furthermore, we introduce a high-level, pattern-based parallelization approach developed in the ParaPhrase project, show how it can easily be applied to parallelize this optimization algorithm without introducing the additional complexity of traditional low-level approaches, and describe why and how parallelization improves the result of the optimization process.
Keywords: parallel patterns, discrete optimization, ParaPhrase, stochastic-local-search algorithms
1. Introduction
In order to increase the competitiveness of any industry, effective utilization of all kinds of resources, from machines and raw material to energy and human resources, and highly optimized process cycles are crucial. Examples are waste minimization in roll cutting in the clothing [5] or paper [6] industries, optimization of control loops in chemical plants or oil refineries, or optimization in supply chain management and logistics, to name just a few. The case study in this paper deals with an optimization problem encountered in the slitting of metal sheets used in the production of electrical transformers. The problem is a generalized version of the so-called 1/V/V/R cutting-stock problem, since the objective corresponds to appropriately placing a set of metal stripes (bands) into a set of available metal coils, so that the overall metal waste is minimized. A detailed description of the case study will be given later in this paper.
The research reported in this paper has been partly supported by the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center SCCH.
Several methodologies have been developed to address cutting-stock problems or bin-packing problems, including linear-programming (LP) based approaches [8, 9] and heuristic-based approaches based on dynamic programming [10]. However, the complexity of the optimization problem as well as the large number of (potentially nonlinear) constraints cannot effectively be addressed through LP-based approximations. At the same time, expert knowledge is required to effectively implement heuristic-based approaches as in [10]. To this end, we focus on stochastic-local-search algorithms [11] for addressing a generalized class of such cutting-stock problems, since a) no explicit assumptions are imposed regarding the form of the constraints, and b) the design of such algorithms does not necessarily require expert knowledge.
The performance of stochastic-local-search algorithms may not necessarily be robust to the specifics of the optimization problem, and convergence to local minima cannot easily be excluded. To this end, several forms of diversification strategies (cf. [11]) usually need to be implemented, including a) reprocessing candidate solutions from earlier optimization stages and b) experimenting with alternative processing paths.
In this paper, we investigate the utility of parallelization in the execution of such diversification strategies. However, parallelization adds additional complexity to the system and makes it harder to develop and maintain and more error-prone, especially when low-level parallelization techniques are used. Expertise in concurrent programming is therefore essential.
Applying a high-level pattern-based parallelization approach, as developed in the ParaPhrase project, reduces this additional complexity to a minimum and reduces the expertise needed in concurrent programming.
The remainder of the paper is structured as follows. After a description of the use case and the basic idea of the optimization algorithm, an overview of the ParaPhrase technology is given, with a focus on high-level and domain-specific patterns. After that, the application of the pattern used for parallelization is explained. An evaluation of the parallelization approach and the resulting performance is given at the end of the paper.
2. Use Case Description & Objective
In this paper, we are particularly concerned with a generalized version of the so-called 1/V/V/R cutting-stock problem [7]. Cutting-stock problems are encountered often in industrial environments and the ability to address them efficiently usually results in large economic benefits.
Such cutting-stock optimization is encountered in the electrical transformer industry as described in detail in [10]. In particular, in the production of the core of an electrical transformer, great quantities of silicon-steel sheets are required, which can weigh up to 300 tonnes for large transformers. The silicon-steel
sheets required vary in width and need to be slit from available rolls of material. The problem can be translated in a straightforward manner into a classical cutting-stock problem [10].
### 2.1 Cutting-stock problem (background)
In this section, we briefly describe the one-dimensional cutting-stock optimization problem. There exists a set of available objects (or rolls) of material, denoted by \( \mathcal{I} \cong \{1, 2, \ldots, m\} \). Each of these objects \( i \in \mathcal{I} \) is characterized by its width \( b_i \), its length \( \ell_i \) and its density \( d_i \). Let also \( w_i \) denote the overall weight of roll \( i \). In Figure 1 one such roll is depicted.
We are also provided with a set of items (or bands) of certain types \( \mathcal{J} \cong \{1, 2, \ldots, n\} \), each of which is characterized by its width \( b_j \) and its desired weight \( w_j \). The purpose of the optimization is to compute an assignment of the desired bands into a set of available rolls so that the total weight of the rolls used is minimized. Formally, let us denote \( x_{ij} \in \mathbb{Z}_+ \) as the number of bands of type \( j \) assigned to item \( i \). The objective is to find an assignment \( X = \{x_{ij}\}_{i,j} \) of the desired bands into the set of available rolls, so that:
1. the overall weight of each band type \( j \) exceeds its desired weight \( w_j \), i.e.,
\[
b_j \sum_{i=1}^{m} x_{ij} \ell_i \geq w_j, \tag{1}
\]
2. the sum of the bands assigned to each roll \( i \) does not exceed the width of the roll, \( b_i \), while at the same time the residual band, denoted
\[
r_i(X) \equiv b_i - \sum_{j=1}^{n} x_{ij} b_j, \tag{2}
\]
should always be within a finite set \( \mathcal{R} \) of allowable residual widths.
Both of the above constraints are hard constraints and need to be satisfied for any assignment. Unfortunately, in all but trivial cases, there might be a multiplicity of admissible assignments, each of which might be utilizing a different subset of rolls \( \mathcal{I} \). We wish to minimize the overall weight of the rolls utilized by an assignment, i.e., we wish to address the following optimization problem:
\[
\min_{X \in \mathbb{Z}_+^{m \times n}} \sum_{i \in \mathcal{I}} w_i \, \mathbb{I}_{\{\sum_{j \in \mathcal{J}} x_{ij} > 0\}}, \tag{3}
\]
where
\[
\mathbb{I}_A = \begin{cases}
1 & \text{if } A = \text{true}, \\
0 & \text{else.}
\end{cases} \tag{4}
\]
In other words, we would like to penalize the weight of the rolls which are slit.
The objective function (3), subject to the job-admissibility constraint (1) and the rest-width-admissibility constraint (2), formulates the so-called *(one-dimensional) cutting-stock problem*.
Alternative objective criteria may be considered (e.g., minimization of trim loss, as considered in [7], or minimization of rolls needed by the assignment when the objects are identical, as considered by [9]).
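For concreteness, consider a small toy instance of our own (not taken from the case study), with unit density so that a roll’s weight equals its area \( b_i \ell_i \). Take two identical rolls with \( b_i = 100 \) and \( \ell_i = 50 \) (hence \( w_i = 5000 \)), and a single band type with width \( b_1 = 30 \) and desired weight \( w_1 = 4000 \). Constraint (1) reads \( 30 \cdot 50 \cdot (x_{11} + x_{21}) \geq 4000 \), forcing \( x_{11} + x_{21} \geq 3 \). With allowable residuals \( \mathcal{R} = \{10, 40, 70, 100\} \), constraint (2) admits up to three bands per roll (\( 100 - 3 \cdot 30 = 10 \in \mathcal{R} \)). Placing all three bands on one roll then costs \( 5000 \) in (3), whereas spreading them over both rolls costs \( 10000 \); the optimum therefore slits a single roll.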
### 2.2 Objective
In practical scenarios, as in the case of the electrical transformer industry [10], additional constraints may also be imposed (due, e.g., to cutter specifications, final transformer specifications, etc.). Given that most such constraints may be nonlinear in nature, traditional methods based on linear programming (as described in [9]) are not appropriate for such problems. Furthermore, a large number of constraints may dramatically reduce the set of feasible solutions, making the search for optimal solutions even harder.
To this end, stochastic-based approaches have been considered to address such complex optimization problems (e.g., the stochastic-local-search algorithms discussed in [11]). Such approaches consist of a sequence of (local) modification steps onto the current candidate solution so that the overall objective function is reduced. The advantage of such methods is the ability to provide suboptimal solutions within reasonable execution times. On the other hand, due to their stochastic nature, the resulting performance may vary, depending on the details of the optimization problem, the running time and the details of the stochastic search method.
Due to the performance variability, it is often required that we provide a set of *diversification strategies* that decrease the probability of converging to a local optimum, or, equivalently, increase the probability of converging to the global optimum. One example of such diversification strategies is the ability to periodically reprocess candidate solutions starting from earlier stages of the optimization. Another diversification strategy is the ability to *experiment with* alternative processing paths starting from the same candidate solution.
The objective of this paper is to explore the utility of *parallelization* for improving the performance of stochastic-local-search algorithms in the context of hard combinatorial problems such as the cutting-stock problem presented above. In particular, we wish to explore the utility of parallelization in improving the effect of the diversification strategies, a) reprocessing candidate solutions and b) experimenting with alternative processing paths, on the optimization performance.
### 3. Optimization Algorithm
In this section, we propose a framework for implementing a stochastic local-search approach for addressing such cutting-stock problems.
More specifically, the proposed framework consists of the following building blocks:
- \( \mathcal{P} = \text{init}(\pi) \);
- \( X = \text{optimize}(\mathcal{P}) \);
- \( \mathcal{P} = \text{filter}(X, \mathcal{P}) \);
- \( \text{terminate}(\mathcal{P}, \text{timer}()) \).
The role of the \( \text{init}(\pi) \) function is the establishment of candidate solutions satisfying (at least) constraint (1), but not necessarily all constraints. The large number of constraints in a cutting-stock problem imposes difficulties in finding even feasible solutions. In such cases, this initialization phase is rather important for a) computing feasible solutions, b) reducing the overall optimization time, or, equivalently, c) improving the overall performance. This initialization phase may correspond to standard First-Fit-Decreasing (cf. [12, Section 3.3]) type of algorithms, whose goal is to simply allocate the required items/bands onto the available objects so that the job-admissibility constraint (1) is satisfied. However, in most cases, satisfying even constraint (1) may be rather challenging; thus more sophisticated initialization algorithms may be required. An initial set \( \mathcal{P} \) of candidate solutions is defined as the output of this initialization phase.
The *optimize(\( \mathcal{P} \))* function constitutes the core of the overall optimization framework. Its role is the execution of appropriate (local) modification steps (called *operations*) on the candidate solutions \( X \in \mathcal{P} \), accompanied by appropriate random perturbations. The role of these operations is to search for a) candidate solutions which satisfy all imposed constraints, and b) candidate solutions which improve the cost of (3). Responsible for the execution of these improvement steps and/or perturbations of the existing candidate solutions are the working units, briefly called *workers*. Each worker serves a distinctive role (e.g., creating admissible solutions with respect to a constraint, reducing the cost function, or locally perturbing the solution). Designing the operation of such workers requires a careful design of stochastic-local-search strategies.
One of the main drawbacks of stochastic-local-search algorithms is the fact that convergence to local minima may occur with high probability (see, e.g., [11, Section 5.2]). This is particularly evident as the number of constraints increases, requiring additional treatment. To this end, a careful design of so-called *diversification strategies* (cf. [11, Section 5.2]) is required. An example of such diversification strategies is the ability to reprocess candidate solutions from earlier processing stages, which increases the probability of escaping from local optima. Such diversification strategies can be designed within the *filter(\( X \), \( \mathcal{P} \))* function. Its primary goal is to determine the population of the pool \( \mathcal{P} \) which will feed back into the *optimize()* function. Its impact on the overall behavior of the optimization algorithm is equally important to the role of the *optimize()* operations.
Since the goal of this paper is to present a generalized framework that may accommodate stochastic-local-search algorithms for cutting-stock problems, the details of the functions *optimize(\( \mathcal{P} \))* and *filter(\( X \), \( \mathcal{P} \))* are beyond the scope of this paper and will not be presented here.
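As a minimal sketch of this control flow (not the authors' implementation; all names are hypothetical stand-ins), the four building blocks can be wired together as follows:

```cpp
#include <functional>
#include <vector>

// A minimal sketch of the framework's control flow. All names here are
// hypothetical stand-ins, not the authors' actual code; the terminate
// predicate is assumed to encapsulate the timer()-based stopping logic.
template <typename Solution>
std::vector<Solution> run_local_search(
    std::function<std::vector<Solution>()> init,
    std::function<Solution(const Solution&)> optimize,
    std::function<std::vector<Solution>(std::vector<Solution>&&,
                                        const std::vector<Solution>&)> filter,
    std::function<bool(const std::vector<Solution>&)> terminate)
{
    std::vector<Solution> pool = init();          // feasible w.r.t. constraint (1)
    while (!terminate(pool)) {                    // time- and quality-based stop
        std::vector<Solution> results;
        results.reserve(pool.size());
        for (const Solution& x : pool)            // candidates are independent,
            results.push_back(optimize(x));       // so this loop is parallelizable
        pool = filter(std::move(results), pool);  // diversification happens here
    }
    return pool;
}
```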
4. Parallel Patterns
The *ParaPhrase* [17] project introduced a new approach to parallelism in software development for heterogeneous architectures, using high-level parallel design patterns in conjunction with advanced refactoring technologies.
Parallel design patterns describe solutions to recurring problems in parallel programming. They are abstract entities that provide no implementation that can be directly used in an application. Design patterns are independent of the actual software implementation and hardware architecture; parallel design patterns are comparable to traditional design patterns in software development [13]. Patterns are designed to be simple and generic, and it is not intended to provide a pattern for every parallelization problem that might occur. To achieve the desired parallel behavior, patterns can be combined or nested to solve a particular problem.
The patterns themselves are implemented in terms of skeletons, which may be higher-order functions or template classes. In ParaPhrase these skeletons are in turn implemented in C++ based on the FastFlow [18, 19] library and in Erlang based on the Skel [20] library. They encapsulate all the low-level parallelization details, like thread creation, communication or data access, and hide them from the application programmer. The developer can therefore focus on the design of the parallel behavior and just choose the appropriate patterns or a combination of them.
The patterns introduced in ParaPhrase are classified in the following levels of abstraction [14]:
- **Core patterns**: basic building blocks of a parallel computation, that is, a minimal set of quite simple, basic parallel exploitation patterns supporting composition, such that they can be used, alone or in combination, to describe a wide range of complex patterns. This set includes: pipe, farm, seq, map, and reduce.
- **High-level patterns**: general parallel patterns appearing in domain-specific contexts, closer to the application programmer's viewpoint and programming habits than the core skeletons/patterns. This set includes: divide and conquer, search, sort, pool evolution, and work-flow graph interpreter.
4.1 Pool Evolution Pattern
This section describes the pool evolution pattern in more detail, since this pattern is finally applied to parallelize the optimization algorithm and is in use at our customer's site.
The pool evolution pattern is inspired by the idea of evolutionary algorithms: generic population-based optimization algorithms that in turn are based on mechanisms from biological evolution, like reproduction, mutation, or selection.
The workflow of the pool evolution pattern can be summarized as follows [14]:
A “candidate selection” function \( s \) selects a subset of objects belonging to an unstructured object pool \( P \). The selected objects are processed by means of an “evolution” function \( e \). The evolution function can produce any number of new/modified objects out of the input ones. The set of objects computed by the evolution function on the selected objects is filtered through a “filter” function \( f \) and eventually inserted into the object pool. At any insertion/extraction into/from the object pool, a “termination” function \( t \) is evaluated on the object pool to determine whether the evolution process has to be stopped or iterated.
A pool evolution pattern therefore computes \( P \) as result of the following algorithm:
1. **while** \( \text{not}(t(P)) \) **do**
2. \( N = e(s(P)) \)
3. \( P = P \cup f(N, P) \)
4. **end while**
Although initially designed to solve problems from the soft-computing application domain, especially the evolutionary-computation field, due to its generic implementation the pattern can easily be used to solve problems from other domains [11].
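A direct, sequential transcription of this four-line algorithm as a C++ higher-order function might look as follows; this is a reference sketch of the pattern's semantics, not the FastFlow or Skel implementation, and the callable names are ours:

```cpp
#include <vector>

// Sequential reference semantics of the pool evolution pattern, transcribing
// the four-line algorithm above. s, e, f, t are the selection, evolution,
// filter and termination callables.
template <typename Item, typename S, typename E, typename F, typename T>
void pool_evolution(std::vector<Item>& P, S s, E e, F f, T t)
{
    while (!t(P)) {                                // 1. while not(t(P)) do
        std::vector<Item> N = e(s(P));             // 2. N = e(s(P))
        std::vector<Item> kept = f(N, P);          // 3. P = P ∪ f(N, P)
        P.insert(P.end(), kept.begin(), kept.end());
    }                                              // 4. end while
}
```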
5. Algorithm Parallelization
This section discusses the parallelization of the previously described optimization algorithm. To achieve a basic parallelization, often more than one pattern is applicable, but not all suitable patterns give the same performance gain and are equally easy to deploy. Therefore we describe the possibilities for parallelizing the optimization algorithm using different core as well as high-level patterns.
The basis for the parallelization was an already working sequential implementation of the whole optimization program, including the algorithm described above. The main construct of the core optimization part, as shown in Figure 3, is the loop that iterates over the candidate solutions in a pool. In every iteration, one candidate solution is picked from the pool of solutions and passed first to the optimize function, where the demanded constraints are applied and local optimization steps are executed, and afterwards to the filter function, where on the one hand the current candidate solution is inspected, and on the other hand the whole optimization process is observed and it is decided whether to continue or terminate.
Investigation of the static code structure and the runtime behavior showed that this loop is the part of the code where the vast bulk of execution time is spent. Furthermore, it turned out that this part also has the greatest potential for parallelization, since the optimize function can be applied to all candidate solutions simultaneously: the solutions are independent from each other and do not share any state that could lead to race conditions.
Within the optimize function, the constraints_worker and optimization_worker would bear further potential for parallelization. But exploiting them too would lead to a huge number of threads running on the system, and to achieve a good speedup a correspondingly large number of cores would be needed, which is not the case for our deployment system. Therefore we focus on parallelizing the main loop construct only.
Different parallelization technologies have been investigated. The decision about which one to use is based on several criteria.
- Since the code is already very complex, the additional complexity introduced by parallelization should be as small as possible.
- Further development and maintenance will be done by developers who are not parallelization experts, which is why code changes should be easy to understand and restricted to a small code area.
- Nevertheless, the parallelization model has to provide flexibility: although the algorithm is iterative, the number of iterations is not known at the beginning. Termination of the optimization process is based on different conditions that might be fulfilled at any time during the process.
Obviously a simple for-loop parallelization, using e.g. OpenMP or the par-for pattern, is not possible, since the iteration space is not known a priori. The high-level pattern-based approach of ParaPhrase perfectly meets the criteria stated above. Possible core patterns that can be applied are farm and pipe, and a combination of them (details about these patterns can be found in [14, 15, 16]). Furthermore the pool evolution pattern, as a high-level pattern, can be used.
5.1 Core Pattern Application
The following describes the application of different core patterns.
**Pipe** The pipe pattern fits the basic idea of the optimization algorithm: every optimization and constraint-enforcing step to solve the optimization problem can be mapped directly to one stage of the pipeline. The effort of applying this pattern is comparable to the farm pattern; applying the par-for pattern or OpenMP would be less effort, but they do not provide the demanded flexibility. The existing code has to be changed in several places: some classes have to derive from FF_node, and additional classes have to be implemented that serve as emitter and collector. The degree of parallelization is bound to the number of steps in the optimization chain; e.g., if the chain consists of 10 steps, 11 cores (including 1 for the emitter thread) will be used even if more are available. So at least 11 cores have to be available to achieve a good speedup. An advantage of the pipeline approach is that each optimization step is accessed only once at a time, since only one instance of each step exists.
**Farm** In the farm pattern, the emitter of the farm takes the partial solutions from the pool and passes them to the workers. A worker wraps the loop that iterates over all the optimization steps. In a first implementation a collector was also used, to keep changes in the code as small as possible. The collector was responsible for calculating the objective value and pushing the processed solution back into the pool. In a second step, the calculation of the objective value was refactored so that it could be included in the worker of the farm and therefore calculated in parallel. Furthermore, the collector could be omitted and replaced by a feedback channel that sends the solutions back to the emitter. The emitter then decides whether to dismiss the solution, to process it further or to terminate the whole algorithm. The implementation effort of the first version is similar to the pipe approach; refactoring the implementation to omit the collector increased the effort, but decreased execution time. Since there are shared resources whose access needs to be synchronized, a linear speedup will not be possible.
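The following rough sketch illustrates this second farm variant (no collector, feedback channel to the emitter). It assumes FastFlow's classic ff_node/ff_farm interface [18, 19]; the Solution type, the stub functions and the emitter logic are our own placeholders, and construction details vary across FastFlow versions:

```cpp
#include <vector>
#include <ff/farm.hpp>
using namespace ff;

// Illustrative placeholders; the real worker runs the full optimization chain.
struct Solution { /* candidate assignment */ };
bool should_terminate(const Solution&) { return false; }  // stub
void run_optimization_steps(Solution&) {}                  // stub

struct Worker : ff_node {
    void* svc(void* task) {
        auto* s = static_cast<Solution*>(task);
        run_optimization_steps(*s);  // constraint + optimization steps, objective value
        return task;                 // goes back to the emitter via the feedback channel
    }
};

struct Emitter : ff_node {
    std::vector<Solution*> pool;
    void* svc(void* task) {
        if (task == nullptr) {                     // start-up: emit the whole pool
            for (Solution* s : pool) ff_send_out(s);
            return GO_ON;
        }
        auto* s = static_cast<Solution*>(task);    // a solution fed back by a worker
        return should_terminate(*s) ? EOS : task;  // dismiss/terminate/re-emit decision
    }
};

int main() {
    ff_farm<> farm;
    std::vector<ff_node*> workers = { new Worker, new Worker };
    farm.add_emitter(new Emitter);
    farm.add_workers(workers);
    farm.remove_collector();  // second version: no collector ...
    farm.wrap_around();       // ... but a feedback channel back to the emitter
    farm.run_and_wait_end();
}
```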
**Farm with nested pipes** In this implementation the farm pattern and the pipeline pattern have been combined by nesting a pipe in each farm worker. This idea has the greatest potential for speedup, but even with the smallest reasonable number of farm workers, which is two, and an average length of the optimization chain of ten, 23 threads are running. So if the system does not have at least 23 cores (which was the case for our evaluation platform; it had 20 cores), there will be no speedup, since the overhead for scheduling between the threads would be too high. Additionally, the higher the number of threads that have to be synchronized when accessing the shared resources, the less efficient it is.
Common to all core patterns is that, although the effort is much less compared to traditional low-level approaches, the existing legacy code has to be changed in several places. Applying high-level patterns solved this problem for our use case.
5.2 High-Level Pattern Application
All of the above-mentioned criteria are perfectly fulfilled by the pool evolution pattern, which was the pattern of choice in the end. The semantics behind the pool pattern fits perfectly the basic idea of the optimization algorithm, which is also based on a pool of possible solutions that are going to be optimized.
The effort of applying this pattern is limited to mapping the existing code to the functions for selection, evolution, filter and termination.
• The selection function simply returns all items of the pool, since in every iteration all candidate solutions are processed.
• The evolution function can be directly mapped to the optimize function.
• To filter the solutions after the evolution phase, the already existing filter function can be used.
• Also the implemented termination function can be directly mapped to the needed terminate function of the pattern.
The following code snippet illustrates the instantiation and use of the pool evolution pattern:
```cpp
poolEvolution<shared_ptr<Workpiece<T>>, Env_t<T>>
    pool(_numWorker, _pool, selection, evolution,
         filter, termination, Env_t<T>(this));

pool.run_and_wait_end();
```
The template class of the pool pattern needs two template arguments for instantiation:
• Type of the items in the pool
• Type of an optional structure that holds additional data needed during pool execution
Constructor parameters:
• _numWorker: the number of workers used for executing the pool pattern; equal to the parallelization degree
• _pool: iterable structure that holds the pool items
• selection, evolution, filter, termination: pointers to functions
• Env_t<T>(this): environment data needed during the optimization process
The command pool.run_and_wait_end() synchronously executes the pool pattern and returns once the execution terminates.
In the original implementation of the pool evolution pattern, only the evolution of the pool items is executed in parallel. For our particular case that means the optimize function is applied to all selected candidate solutions simultaneously. Further loop parallelization within the optimize function is possible by simply nesting, e.g., a par-for pattern; but as already stated, this is only meaningful on systems with a very high number of cores. Additional speedup can be achieved by parallelizing the filter function, for example by nesting another pattern like par-for or pipe.
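To make the phrase "executed in parallel" concrete, the following sketch reproduces the parallel evolution phase with plain std::async instead of FastFlow's internal worker threads; Item and the evolve callable (assumed to return an Item) are placeholders:

```cpp
#include <functional>
#include <future>
#include <vector>

// The parallel evolution phase, sketched with std::async. Candidate solutions
// share no state (see above), so evolving them concurrently introduces no races.
template <typename Item, typename E>
std::vector<Item> parallel_evolve(const std::vector<Item>& selected, E evolve)
{
    std::vector<std::future<Item>> futures;
    futures.reserve(selected.size());
    for (const Item& x : selected)                        // one task per candidate
        futures.push_back(std::async(std::launch::async,
                                     evolve, std::cref(x)));
    std::vector<Item> out;
    out.reserve(futures.size());
    for (auto& f : futures)
        out.push_back(f.get());                           // collect evolved items
    return out;
}
```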
6. Evaluation
This section discusses the performance of the parallelized application, concerning both the performance of the parallelization itself and its effect on the result of the optimization.
Performance evaluation of the parallel version using the pool evolution pattern has been performed on a 64-bit Windows system (Windows 7) with two processors of 6 cores each (dual hexa-core Intel Xeon X5690, 3.47 GHz) and 24 GB of memory. HyperThreading support has been disabled, as is usual in HPC.
To evaluate the performance, two scenarios have been considered.
• How does the execution time of the optimization algorithm change with an increasing number of workers (constantly increasing the parallelization degree) while the result of the optimization is kept constant? Figure 4 shows a significant speedup and reduction of execution time with an increasing number of workers, while the result of the optimization stays constant.
• How does the result of the optimization improve when the execution time is kept constant and the number of workers is constantly increased?
Figure 5 shows the evolution of the transformer weight with an increasing number of workers. For a medium-sized transformer, as in this example, the weight is reduced by about 2 tons, which reduces the overall costs significantly.

**Figure 4.** Speedup and Execution Time.

**Figure 5.** Weight decrease.
7. Conclusion
In this paper we addressed the parallelization of an optimization problem currently encountered in the slitting of metal sheets used in the production of electrical transformers. Due to the large number of constraints, convergence to local optima is rather common. This can be addressed by a larger number of initial admissible solutions and by reprocessing from stages further away from the currently best solution. To counteract the resulting increase in processing time, we decided to parallelize the optimization algorithms using high-level parallel patterns developed in the ParaPhrase project.
As already remarked, compared to low-level parallelization approaches with direct usage of the threading API, applying high-level methodologies has several advantages, like the ability to focus on parallel behavior rather than on low-level details, better maintainability due to the encapsulation of parallelization code within the parallel patterns, and increased portability between different operating systems.
The evaluation showed a significant decrease of execution time and an improvement of the optimization process as a direct result of parallelization. But we also found that a parallelization degree higher than ten does not lead to a further decrease of execution time (Figure 4). On the other hand, using a parallelization degree of twelve still leads to a further weight decrease (Figure 5).
We are still investigating the reason for this phenomenon. One possibility is that the still-sequential implementation of the filter function causes a bottleneck: a higher parallelization degree leads to a higher number of processed candidate solutions in an equal time period, and the filter may not be able to handle this increasing number, so candidate solutions queue up before it. Therefore we are currently working on parallelizing the sequential implementation of the filter part, to potentially resolve the stagnating speedup and decrease the processing time even further. Even more important than the reduced processing time, however, is the improvement of the overall result of the optimization process.
References
Abstract
In this paper, we present a compiler extension for applications targeting high-performance embedded systems. It analyzes the graph of a dataflow application in order to adapt its parallelism degree. Our approach consists in the detection and substitution of built-in patterns in the dataflow. Modifications applied to the graph do not alter the semantics of the application. A parallelism reduction engine is also described that performs an exhaustive search for the best reduction. Our proposition has been implemented within an industry-grade compiler for the Sigma-C dataflow language. It shows that, for dataflow applications, the parallelism reduction extension helps the user focus on the algorithm by hiding all parallelism-tuning considerations. Experimentations demonstrate the accuracy and the performance of the reduction engine for both synthetic and real applications.
Keywords: Parallelism, Dataflow Programming, Pattern Detection and Substitution, Reduction Engine, Sigma-C
1. Introduction
Nowadays, the quest for computational power leads, once again, to an important increase in the number of processing units [1]. Computing grids and many-cores are examples of systems that rely on massive scaling, up to thousands of units connected over continents or within a single chip. There is a real challenge in making parallel programming efficient on large-scale systems while staying appealing to developers. Most modern approaches still rely on explicit parallel programming, in which an emphasis has to be put on managing communications and synchronizations between tasks. This is particularly true with programming languages like MPI [2], OpenMP [3] and OpenCL [4], all widely used on distributed systems. Some paradigms have been introduced to limit these drawbacks: for example, agent-based and dataflow programming languages offer implicit mechanisms intended to hide low-level communications, as well as inter-task synchronizations.
However, in the field of high-performance computing, applications have to be finely tuned in order to benefit from the underlying execution infrastructure. This step largely modifies the application design by adding constraints that are not related to solving the original problem, as well as portability issues when targeting different architectures. One relevant example is the sizing of the application regarding the number of concurrent tasks. This sizing has to ensure the best execution speedup: too few concurrent tasks result in an idle system, while too much concurrency implies an overhead due to task communications and context switches. It also implies the allocation of enough space to store all task contexts, and memory is a particularly precious resource in embedded high-performance computing.
In both cases, execution speedup will not be optimal. Furthermore, the sizing problem closely depends on parallelism granularity. For example, a video processing application can be designed creating concurrent tasks per frame, per row, per macro-block, or even per pixel. In most cases, the smaller the granularity is, the better the parallelism degree can be tuned, which helps in reaching the best speedup. As a counterpart, this also requires a tight understanding of the application design and may involve a very time-consuming iterative process to find the best solution. This process is often based on experimentation and know-how.
In order to deal with application sizing, several approaches have been proposed: from the widely used hand-made configuration files specifically designed for a given execution platform, to the use of pragma instructions indicating that the compiler or the runtime can modify the code to fit the host target. Pragma instructions are part of the CUDA [5], OpenMP and the upcoming MP Designer [6] languages. In these situations, applications still have to be written with explicit parallelism instructions. One approach to ease parallel development is to rely on automatic parallelism extraction. In this scenario, a static analysis is performed on the source code in order to detect loops that meet all the requirements allowing inner instructions to be executed concurrently. This approach has several drawbacks: static analysis is still a very complex process that makes loop detection difficult to perform; some loops and other code statements may not be detected even if they are eligible for the treatment; and, last but not least, this approach encourages developers to keep writing applications in the single-threaded model instead of moving to the distributed model.
With the democratization of parallel programming, application developers cannot be expected to master architecture-related intricacies and should be allowed to focus on designing algorithms with only two goals in mind: 1) solve the problem and 2) get the smallest parallelism granularity in order to keep enough tuning possibilities for speedup. The overall optimizations and parallelism tuning should be transparently handled by the compiler, without any pragma hints or user text decorations. In this paper, we focus on the Sigma-C dataflow programming language [7], a language which has been specifically designed for programming high-performance computing applications on massively parallel architectures. One of the key aspects of this language, among all the aspects offered by dataflow programming, is the ability to specify the productions and consumptions of each task. This crucial information is used at compile time for checks such as buffer sizing, placement, routing, deadlock detection and, as presented in this paper, parallelism tuning.
In this paper, we propose a compiler extension that analyzes the application dataflow graph and the tasks' communication behavior in order to tune the parallelism degree of a dataflow program. This extension is called parallelism reduction. Our approach is based on the detection and substitution of patterns within the application task instantiation graph. These patterns are part of a built-in library. On top of that, we designed a parallelism reduction engine that is able to select relevant patterns and to calculate a substitution order by making an exhaustive or heuristic search. This engine ensures that, if possible, the desired parallelism degree is reached. This paper is organized as follows. Section 2 introduces the Sigma-C dataflow programming language. Sections 3 and 4 respectively present the parallelism reduction patterns and engine. Section 5 gives some insights on the implementation and the preliminary evaluation. Section 6 discusses the related works. Finally, section 7 concludes and gives some perspectives.
2. Sigma-C: a Dataflow Programming language for Large-Scale Infrastructures
Sigma-C [7] is an agent-based dataflow programming model and language designed for efficient parallel programming on large-scale infrastructures such as many-core processors and computing grids. The model is based on process networks with process behavior specifications. The language is an extension of ANSI C and provides keywords to define and connect agents. A Sigma-C application is described as a static instantiation graph that does not change during execution. Agents communicate through point-to-point, unidirectional and typed links. They are defined using three main sections, as shown in Listing 1. The interface section is used to declare input and output communication ports, as well as a specification of the consumptions and productions. This specification allows formal analyses to enforce properties such as absence of deadlocks and memory-bounded execution. In Listing 1, the specification section of agent Filter specifies the consumption of width integers on the input port and 1 float on the random port, and the production of width floats on the output port. The map section is used to instantiate agents and connect ports. The last section is dedicated to user functions: the start function is the entry point of the agent and is executed repeatedly. The Sigma-C language also provides system agents that ease data reorganization. The three main ones are Split and Join (both for round-robin distribution of data) and Dup (data duplication).
Listing 1: A simple filter agent definition in Sigma-C
```c
agent Filter (int width) {
  interface {
    in<int> input;
    in<float> random;
    out<float> output;
    spec {input[width]; random; output[width]};
  }
  map {
    agent myRandom = new Random();
    connect(myRandom.output, random);
  }
  void start() exchange(input i[width], random r, output o[width]) {
    int k = 0;
    for (k = 0; k < width; k++)
      o[k] = i[k] + r;
  }
}
```
One leitmotiv of the Sigma-C language is to let the programmer determine the finest granularity level according to his a priori knowledge of the application, without regard to the final execution performance. This generally leads to rather highly parallel applications. For example, a matrix multiplication application can intuitively be written using one instance per resulting cell, leading to millions of agents. The parallelism speedup is nonetheless a trade-off between the number of parallel tasks and their management overhead: too few concurrent tasks lead to idle processing units, too many concurrent tasks lead to significant memory use in order to store contexts. The optimal number of concurrent tasks depends on several parameters: from the tasks' execution time, physical memory use and data communication intensity, to the final hardware's number of processing units and network topology. These parameters are either hard to calculate or target-dependent, making it close to impossible to reuse the code as-is. The developer should focus on the algorithmic parts of the program, and the compiler should take care of optimizations to ensure the best (or at least acceptable) execution performance. Our approach consists in giving a target number of instances per processing unit (ratio) and the number of processing units, and letting the compiler adapt the application to these parameters. We propose to add a compiler extension to the Sigma-C toolchain, dedicated to parallelism tuning. This extension takes the dataflow instantiation graph and applies modifications using pattern detection and substitution.
3. Parametrized Dataflow Patterns
The application instantiation graph describes all the instances and connections declared in the Sigma-C source file. In order to fit a given execution infrastructure with a given degree of parallelism, we choose to apply a set of modifications to this instantiation graph. These modifications are not subject to limitations: instance and port creation, deletion or modification can be achieved, as long as these operations strictly respect the following rule: no alteration of the semantics of the application, nor of the user code. Our approach relies on the use of pattern detection and substitution, in which some parts of the graph, called subgraphs in the remainder of the paper, are recognized and replaced by another subgraph. A parametrized subgraph is a subgraph description in which some of the structural aspects or instance properties are set by parameters. Therefore, patterns are defined as two parametrized subgraphs and a set of substitution rules. These rules are applied to modify the first parametrized subgraph in order to build the second one.
We motivate the need for parametrized subgraphs by the possibility of identifying several classes of subgraphs. For example, the Split-*-Join pattern is defined by a Split system agent, connected to a set of equivalent subgraphs (the star in the pattern name) that are in turn connected to a Join system agent. It is parametrized by the number of subgraphs sitting in the middle of the Split-Join scheme. This pattern, widely used in dataflow applications, appears twice in the simple version of the Laplacian image processing application shown in Figure 1. In this application, images are loaded as a sequence of lines in a contiguous memory block. A first set of filters processes the image line-by-line, while a second set processes it column-by-column. Split and Join system agents are used to access, reorganize and distribute data to feed the filters. Consumptions and productions of system agents and filters are given by the \( k \) parameters. Using this level of granularity, the application size directly depends on the image size: a \( W \times H \) image instantiates \( H \) line filters and \( W \) column filters. This intuitive implementation of the Laplacian filter can generate large applications if applied to large images: as an example, a 1080p HDTV frame would require 3000 filters, far more processes than today's regular embedded encoder-decoder platforms are able to run.
In some particular conditions regarding productions and consumptions, the Split-*-Join pattern can safely be modified by removing or adding some of the subgraphs connected between the Split and Join system agents. Such modifications are used to adapt the number of instances started at runtime. Figure 2 shows how this can be applied to the simple Laplacian image filter: some filters are removed in both the line and column filter sets. According to the round-robin data distribution, the remaining filters have to process more data per frame. The reduction works fine for line filters, which may simply receive more than one line to process. However, data reorganization does not work for column filters: the resulting input columns are not well-formed and the filters do not read the expected data. As a matter of fact, the reduction of parallelism preserves data reorganizations if and only if the equivalence of pointer is preserved within the whole pattern. It is also a desired behavior that simplifies the programmer's work.
A connection between two agents preserves the equivalence of pointer if all reads and writes are made into contiguous blocks of memory. In Figure 2, line filters read exactly the same number of data (\( W \)) as written by the split agent. In this first configuration, the round-robin distribution ensures that the equivalence of pointer is preserved. In the second configuration, column filters read \( H \) data while the split agent writes them one by one. The \( H \) data written in the input block of the column filter come from non-contiguous addresses in the original input of the split. Here, the equivalence of pointer property is not preserved and the reduction cannot be applied. Therefore, the detection of the Split-*-Join pattern must check the following condition: \( remaining(S_k/F_k) = 0 \), where \( S_k \) refers to the Split productions and \( F_k \) to the filters' consumptions. This example illustrates that some pattern detections have to account for data reorganizations, which is possible to analyze if the language provides a specification of the consumptions and productions of the instances. Finally, it is only possible to add or remove subgraphs between the Split and Join system agents if these subgraphs are stateless, that is, if they do not keep local information between two invocations. This property is determined thanks to a static analysis of the source code during the parse phase.
---
1 Some unfriendly data reorganizations can be directly fixed in the application design. As an example, a trivial work-around for the Laplacian application consists in applying two matrix transposition operators, before and after the column filters, hence falling back into the advantageous line-filter case. A matrix transposition operator can be expressed using a Split and a Join, directly connected through their \( W \) pins, with respectively 1 production and \( H \) consumptions.
Other often-met patterns based on the Split-Join scheme include the Cascade pattern, the Butterfly pattern and the Matrix Multiplication pattern, as shown in Figures 4 and 3. The Cascade pattern (Figure 4(a)) is a recursive case of the Split-*-Join pattern, each stage of which is another Split-*-Join pattern. This pattern is used in image processing, for macro-block decomposition and regular strided memory access patterns. By definition, all patterns of the same stage share the same properties. Therefore, the reduction of parallelism can be processed on each level of the tree. Applying the reduction on the root node removes a large number of instances but also offers little control over the tuning of parallelism compared to applying it on the leaves. The Butterfly pattern (Figure 4(b)) is another example of a recursive Split-*-Join pattern. It is used in the Deriche image processing application [9] and differs from the Cascade pattern by interlacing the leaves' outputs to spread data to all filters. Finally, the Matrix Multiplication pattern (Figure 3) consists in a double Split-Dup cascade and a Join cascade. It splits the rows of matrix A and the columns of matrix B in order to feed each multiplication cell and calculate the resulting matrix. This pattern can be reduced by removing rows and columns.
All patterns include both detection and substitution functions. These functions are specific to each pattern. The detection function takes a root instance and returns true if there is a match between the given pattern and the instantiation graph starting at the root instance. The detection of all pattern instances is therefore done by applying this function to all eligible root instances. A pattern may also specify the type of its root instance; for example, the patterns presented in this paper are detected starting from a Split root instance. This speeds the process up by allowing the function to be applied only to a small list of instances instead of processing the entire graph. The detection is programmatically done by navigating through the application instance graph and using built-in matching functions offered by the compiler.
Some of these functions rely on Floyd-Warshall routing tables [10] to determine the shortest paths between two agents, as well as on graph isomorphism algorithms to determine whether two subgraphs are equivalent. In case of a positive pattern match, the removable parts of the subgraphs are identified and a reduction capacity indicator is calculated by counting all the removable user instances. This indicator will later be used by the reduction engine.
The substitution function takes a root instance and replaces the detected subgraph by a semantically equivalent subgraph. It is parametrized by a factor of reduction that is used to control the level of parallelism reduction. This factor is a floating-point number ranging between 0 and 1, where 0 means no reduction and 1 applies the full reduction, based on the reduction capacity indicator. The substitution is programmatically done using a graph-altering API that allows removing agent and subgraph instances with automatic port reconnection. The function then returns the number of user instances that have been removed. This result will later be used by the reduction engine to evaluate a solution.
Patterns presented in this section are built-in patterns, but one can imagine that some specific patterns, tightly related to an application field, can be added to the engine.
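The description above suggests a pattern interface along the following lines; this is our hypothetical reconstruction in C++, as the compiler's actual API is not shown in the paper:

```cpp
#include <cstddef>

// Hypothetical interface mirroring the description above; all names are ours.
struct Instance;  // a node of the application instantiation graph

struct Pattern {
    virtual ~Pattern() = default;
    // True if the subgraph rooted at `root` matches this pattern.
    virtual bool detect(const Instance* root) const = 0;
    // Reduction capacity indicator: removable user instances under full reduction.
    virtual std::size_t capacity(const Instance* root) const = 0;
    // Apply the reduction with strength `factor` in [0, 1];
    // returns the number of user instances actually removed.
    virtual std::size_t substitute(Instance* root, double factor) = 0;
};
```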
4. Parallelism Reduction Engine
The parallelism reduction engine is in charge of performing the detection of patterns and applying them to the application graph. Figure 5 illustrates the two main steps of the engine. The first step takes an application graph and a pattern list, and returns the detected patterns as a set of instantiated patterns. Instantiated patterns are defined by a pattern, a root instance and a capacity. The second step takes the application graph and a set of instantiated patterns, and returns a sequence of instantiated patterns. This sequence gives three pieces of information: 1) the list of instantiated patterns (some of them can be discarded), 2) the order of application and 3) the factors of reduction attached to each pattern. Many sequences can be built by ordering instantiated patterns and modifying reduction factors. Therefore, the reduction engine has to select the best solution, according to the application and execution platform configurations.
A factor of reduction is attached to each instantiated pattern. This allows the engine to apply patterns with different strengths, going further with parallelism tuning. This is particularly relevant in the context of iterative compilation, in which information gathered on the target can be fed back to save or modify patterns. Other considerations such as energy savings, thermal dissipation and load balancing can be part of the decision. However, for the sake of simplicity, we choose to calculate a global reduction factor that will be applied to all instantiated patterns. This global factor is given by the following formula:
$$\text{factor}_{\text{global}} = \frac{\text{nbinstances} - (\text{nbunits} \times \text{ratio})}{\text{capacity}}$$
where nbinstances is the number of user instances in the application graph, nbunits is the number of processing units in the targeted execution platform, ratio is the reduction ratio expressed in number of user instances per processing unit, and capacity is the number of user instances that can be removed if a full reduction is applied to all instantiated patterns. The nbunits and ratio constants are given as parameters to the compiler. For example, let us consider a 386-user-instance application built for a 64-processing-unit target. The current ratio is 6 instances per processing unit and the user targets a 2.4 ratio. If pattern detection returns an overall reduction capacity indicator of up to 381 removable instances, the resulting global reduction factor will be set to 0.61.
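A direct transcription of this formula, clamped to the valid range \([0, 1]\) (setting the factor to 1 when the goal cannot be reached matches the behavior reported in Section 5), reproduces the example above; the function and parameter names are ours:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>

// Global reduction factor, as defined by the formula above.
double global_factor(std::size_t nbinstances, std::size_t nbunits,
                     double ratio, std::size_t capacity)
{
    double f = (nbinstances - nbunits * ratio) / static_cast<double>(capacity);
    return std::clamp(f, 0.0, 1.0);  // factors are defined on [0, 1]
}

int main() {
    // The paper's example: 386 instances, 64 units, target ratio 2.4,
    // capacity 381 -> factor of about 0.61.
    std::printf("%.2f\n", global_factor(386, 64, 2.4, 381));
}
```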
Using this global reduction factor makes it possible to perform an exhaustive search to find the best pattern sequence. This exhaustive search consists in testing all pattern permutations and keeping the one that is closest to the desired parallelism ratio. At this point, calculating the number of user instances that will be removed for each permutation cannot be done by just summing all pattern capacities of the sequence: some patterns, once applied, may disable or modify patterns that follow. For example, the Cascade pattern (Figure 4(a)) is made of recursively-defined patterns. If the first instantiated pattern of the sequence starts at the Split root instance of the tree, the included patterns that follow may be removed once this first reduction is applied. Therefore, permutations have to be applied in real conditions on the application instance graph in order to evaluate each solution. This process ensures finding the best solution but obviously makes the algorithm more complex and slower than just guessing the final result.
One drawback of the exhaustive approach is the number of permutations. With only 10 instantiated patterns, the engine has to explore $10! = 3628800$ sequences, which would take far too long on a regular workstation. Most applications we are aware of, in fields such as video encoding or motion detection, use very few patterns (up to 3 or 4), allowing all possibilities to be evaluated. Nevertheless, we designed the reduction engine to restrict the exhaustive search when the number of patterns exceeds a given value $P_{max}$. In this case, the engine sorts patterns by decreasing capacity, keeps the first $P_{max}$ and discards the others. The engine then evaluates the $P_{max}!$ possibilities, where $P_{max}$ is set according to the hardware that hosts the compiler and the time the user is ready to pay. A more elaborate approach is to build the pattern intersection matrix, which gives, for each pair of patterns, the number of removable user instances shared by both. This matrix is then used to select, from the sorted list, the $P_{max}$ patterns with the fewest shared instances. This keeps instance-independent patterns while preserving large capacities. The engine also evaluates the full sequence including all patterns in an arbitrary order, for example the order of detection.
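The bounded search can be sketched in a few lines of Python; the pattern objects and the `evaluate` callback stand in for the engine's real graph operations and are therefore only illustrative:

```python
from itertools import permutations

def best_sequence(patterns, evaluate, target_removed, p_max):
    """Bounded exhaustive search over pattern orderings.

    patterns:       instantiated patterns, each with a .capacity attribute
    evaluate:       callable(sequence) -> instances actually removed when
                    the sequence is applied on a copy of the instance graph
    target_removed: number of instances the reduction should remove
    p_max:          maximum number of patterns explored exhaustively
    """
    # Keep only the p_max patterns with the largest capacities.
    kept = sorted(patterns, key=lambda p: p.capacity, reverse=True)[:p_max]
    best, best_gap = None, float("inf")
    for seq in permutations(kept):
        gap = abs(target_removed - evaluate(seq))
        if gap < best_gap:
            best, best_gap = seq, gap
    return best
```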
The parallelism reduction engine uses a recursive algorithm that performs, for each sequence evaluation, a deep copy of the instance graph. It applies patterns sequentially and compares each step with the best solution found so far. Further developments and optimizations of the engine include code parallelization, iterative compilation in order to benefit from performance feedback, as well as greedy randomized adaptive search procedures [11].
5. Experimental Evaluation
The following results are given to demonstrate that the reduction engine is accurate and that the inner algorithms are efficient. In this paper, we do not evaluate the influence of parallelism reduction on application execution performance. We only focus on the modifications applied to the application, comparing the initial and final states and measuring how long the process takes. All experiments were run on an Intel Core 2 Duo CPU P8600 at 2.40GHz running Linux kernel 3.0.0. Only one of the two cores was used. We chose to run the experiments on this mobile device in order to demonstrate that the compiler extension can be used on a regular laptop. Experiments follow the same methodology: for each application, we stress the reduction engine with 64 configurations. These configurations differ in the number of processing units and the desired parallelism ratio. By multiplying these two parameters, we obtain the targeted number of user instances in the application. This number ranges from about 35 to about 2456 instances (the ratio is a floating-point number). It corresponds to different reduction goals, from hard to very light; sometimes no reduction is required. Each configuration is run 21 times and we average the results over the last 20 runs, even though we did not notice any startup effect on the first run, everything being properly loaded into memory. Five applications have been selected: three of them are synthetic applications made of a single pattern, while the other two are real applications. Figure 6 lists all five applications with their respective statistics.
Figures 7(a) and 7(b) illustrate the accuracy of the reduction engine. In the first figure, two simple patterns are evaluated with different reduction goals. The accuracy is given as the percentage deviation of the resulting ratio \( R_r \) (user instances per processing unit) from the expected one \( R_e \), using the formula \( (R_r - R_e) \times 100 / R_e \). Therefore, 0 means a perfect match. A negative percentage means that the engine has removed more instances than needed. The engine calculates a near-perfect reduction with the Split-*-Join pattern in all configurations. This is due to the simplicity of the pattern, which only contains one user instance per split branch, making removal possible at instance granularity. The engine is somewhat less accurate with the Matrix pattern, where reduction happens at row and column granularity. Figure 7(b) shows the number of removed instances and the reduction factor applied to the Deriche application in all configurations. The number of removed instances also features error bars showing the number of instances that should have been removed to reach the goal. In the first part of the graph, up to a target of 400 instances, the reduction goal cannot be reached: the number of removable instances (only 958 of the 1461 instances are part of a pattern) is less than required, and the reduction factor is set to 1. Then, the factor decreases and the error bars disappear as the pressure on the reduction engine decreases.
Figure 8(a) shows the benefit of running the exhaustive search on as many patterns as possible. The experiment is based on a Cascade pattern with 63 nested patterns. Two configurations are highlighted: one with 32 processing units and one with 64, both using a 1.2 ratio. The number of instances that could not be removed is given as a function of the number of patterns selected by the exhaustive engine. This shows that the more patterns we keep in the process, the more accurate the tuning. Figure 8(b) gives the reduction factor and the processing time for the first configuration of this experiment. Adding patterns to the engine increases the removal capacity (from 189 instances to 327) and decreases the reduction factor. In return, the processing time to find the best solution explodes, from 0.03 seconds for 1 pattern to 107 seconds for 7 patterns. This time corresponds to the evaluation of \( 7! = 5040 \) pattern sequences.
The experiments presented in this paper show that the exhaustive reduction engine returns very accurate results. However, beyond a given number of patterns, the factorial growth of the processing time makes it impractical to run the exhaustive engine on the full pattern list. Discarding patterns also decreases the removal capacity and may, in turn, eliminate good solutions.
6. Related Work
The parallelism degree has been well studied in parallel and distributed systems, from onboard graphical processing units to cluster and grid infrastructures. Most applications that benefit from parallelism tuning are designed as independent processing codes. These processing codes are duplicated either by the compiler [4, 5] or dynamically by the runtime [12, 13] during execution. These systems are similar to a simple dataflow application made of a Split-*-Join pattern. Our approach is to provide a set of relevant, more complex patterns. Most dataflow programming languages for embedded systems [14, 15, 16, 17, 18] do not address the problem of parallelism reduction. Developers have to take care of their application sizes themselves, otherwise the runtime will not start executing. Some instrumentation languages [19] help in auto-tuning the application. However, they rely on the use of pragmas and therefore add design work for developers. Our approach is most closely related to StreamIt [20]. StreamIt provides a high-level stream abstraction in which filters are connected within a directed graph. Its compiler performs stream-specific optimizations. In [21], several graph transformations are described: fusion, fission and re-ordering operations are applied to adjacent filters in order to modify the granularity of the program. These operations are applied either on a filter pipeline, that is, a sequence of filters with one input and one output, or on a split-join scheme in which the inner branches are pipelines. Unlike StreamIt, our compiler extension is based on a pattern approach that allows more complex subgraphs to be modified, as shown with the cascade, butterfly and matrix multiplication patterns.
<table>
<thead>
<tr>
<th>Application</th>
<th>Instances Nr</th>
<th>Capacity</th>
<th>Links Nr</th>
<th>WFI (s)</th>
<th>Patterns Nr</th>
<th>D_avg (s)</th>
<th>S_min (s)</th>
<th>S_max (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Simple Split-*-Join</td>
<td>1925</td>
<td>1919</td>
<td>3842</td>
<td>18.7</td>
<td>1</td>
<td>0.07</td>
<td>0.12</td>
<td>0.27</td>
</tr>
<tr>
<td>Simple Cascade</td>
<td>1074</td>
<td>963</td>
<td>1198</td>
<td>3.3</td>
<td>63/4</td>
<td>0.06</td>
<td>0.54</td>
<td>0.61</td>
</tr>
<tr>
<td>Simple Matrix</td>
<td>1867</td>
<td>1721</td>
<td>5319</td>
<td>17</td>
<td>1</td>
<td>0.006</td>
<td>0.10</td>
<td>0.13</td>
</tr>
<tr>
<td>Bitonic Sort</td>
<td>1132</td>
<td>903</td>
<td>1624</td>
<td>3.8</td>
<td>150/5</td>
<td>0.12</td>
<td>4.8</td>
<td>5.2</td>
</tr>
<tr>
<td>Deriche</td>
<td>1461</td>
<td>958</td>
<td>3561</td>
<td>8.2</td>
<td>2</td>
<td>14.8</td>
<td>0.24</td>
<td>0.26</td>
</tr>
</tbody>
</table>
Figure 6: Application stats: Number of instances, number of removable instances (capacity), number of links, time to build the Floyd-Warshall routing table (WFI), number of patterns (/ number of selected patterns), average time for detecting all patterns, minimum and maximum times for applying all substitutions.
Figure 7: (7(a)) Deviation percentage of the resulting ratio comparing to the expected ratio, for both simple Split-*-Join and Matrix patterns. (7(b)) Number of removed instances, number of instances that should have been removed (error bars) and the factor of reduction applied for each configuration (the targeted number of instances) on the Deriche application [9], with two Split-*-Join patterns and 1461 instances.
Figure 8: The influence of selecting different numbers of patterns: experimentations on a synthetic Split-*-Join Cascade including 63-imbricated patterns, 5 levels of deepness with 1074 instances. (8(a)) Number of instances that should have been removed, for two configurations of processing units and ratio, and (8(b)) factor of reduction and processing time given for the first configuration (32 * 1.2).
7. Conclusions
This paper presents a dataflow compiler extension for parallelism tuning. It processes the graph of a dataflow application to find and substitute patterns without modifying the semantics or the user code. We believe this approach lets developers focus on the algorithmic part while encouraging the maximum degree of parallelism when designing applications. We have identified four patterns that are widely used in dataflow programming. These patterns are tuned by the reduction engine to reach a given goal. The extension has been implemented within an industry-grade compiler [22] for the Sigma-C language. It has proven very precise at tuning and efficient when used with a reasonable number of patterns. We have a number of improvements scheduled for this work. As a first step, we plan to go further with the experiments. We have to keep in mind that the final goal of such work is to improve application execution performance. Therefore, the next experiments should focus on the benefit that adapting the parallelism degree brings to execution performance. We also plan to add more reduction patterns, both generic and specific to industrial applications. Further improvements of the reduction engine can be obtained by parallelizing the exhaustive search, allowing it to take advantage of distributed computing systems. We should then be able to save computing time and calculate solutions with more patterns. As a second step, we plan to integrate the reduction extension within a performance feedback loop. Iterative compilation would make it possible to auto-adapt the parallelism ratio and calculate an appropriate reduction factor for each pattern. As a longer-term perspective, we plan to benefit from the SJD intermediate representation [23] for data reorganization.
References
Parallel Systems Course: Chapter VIII
Sorting Algorithms
Jan Lemeire
ETRO Dept.
November 2014
Kumar Chapter 9
Vrije Universiteit Brussel
Overview
1. Parallel sort – distributed memory
2. Parallel sort – shared memory
3. Sorting Networks
A. Odd-even
B. Bitonic
4. Parallel sort - GPU
Mission
Sort an array as fast as possible by exploiting a parallel system with distributed memory.
*Idea*: base the algorithm on quicksort.
*Does this lead to the optimal parallel algorithm?*
Quicksort
Quicksort is one of the most common sorting algorithms for sequential computers because of its simplicity, low overhead, and optimal average complexity.
Quicksort selects one of the entries in the sequence to be the pivot and divides the sequence into two parts - one with all elements less than the pivot and the other with all elements greater.
The process is recursively applied to each of the sublists.
1. procedure QUICKSORT (A, q, r)
2. begin
3.    if q < r then
4.    begin
5.       x := A[q];
6.       s := q;
7.       for i := q + 1 to r do
8.          if A[i] ≤ x then
9.          begin
10.            s := s + 1;
11.            swap(A[s], A[i]);
12.         end if
13.      swap(A[q], A[s]);
14.      QUICKSORT (A, q, s);
15.      QUICKSORT (A, s + 1, r);
16.   end if
17. end QUICKSORT
The sequential quicksort algorithm.
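For readers who prefer runnable code, a direct Python transcription follows; we recurse on (q, s − 1) rather than (q, s), since the pivot is already in its final position (this also guarantees termination on duplicate keys):

```python
def quicksort(a, q, r):
    """In-place quicksort over a[q..r], mirroring the listing above."""
    if q < r:
        x = a[q]                       # pivot: first element of the range
        s = q
        for i in range(q + 1, r + 1):
            if a[i] <= x:
                s += 1
                a[s], a[i] = a[i], a[s]
        a[q], a[s] = a[s], a[q]        # place the pivot at its final spot
        quicksort(a, q, s - 1)
        quicksort(a, s + 1, r)

data = [3, 2, 1, 5, 8, 4, 3, 7]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 3, 4, 5, 7, 8]
```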
Quicksort
Example of the quicksort algorithm sorting a sequence of size \( n = 8 \).
The performance of quicksort depends critically on the quality of the pivot.
In the best case, the pivot divides the list in such a way that the larger of the two lists does not have more than $\alpha n$ elements (for some constant $\alpha$).
In this case, the complexity of quicksort is $O(n \log n)$.
v1. Parallel Quicksort
- Let's start with recursive decomposition: the list is partitioned serially and each of the subproblems is handled by a different processor.
- The time for this algorithm is lower-bounded by $\Omega(n)$!
- Since the partitioning is done on a single processor
- Can we parallelize the partitioning step - in particular, if we can use $n$ processors to partition a list of length $n$ around a pivot in $O(1)$ time, we have a winner.
- Then we obtain a runtime of $O(\log n)$!!
- This is difficult to do on real machines, though.
Parallel Quicksort
**Step 1**
- n operations
- n/2 words to send over network
**Step 2**
- n/2 operations
- n/4 words to send over network
**Step 3**
- n/4 operations
- n/8 words to send over network
**Step 4**
- $n/8 \cdot \log_2(n/8)$ operations
- $7 \cdot n/8$ words to send over network
Execution Profile
Execution profile of the master and CPUs 1-8 over the parallel time $T_p$, distinguishing communication, blocking (idling) and computing phases.
Can we resolve the load imbalances?
- Make sure that each processor has the same number of elements locally.
- Merge results
- Merge sort!
- Actually better than quicksort
- Disadvantage: not in place (needs a copy of the array)
- Use quicksort for local sort
v2. based on merge sort
Similar communication overhead, but without load imbalances!
v3. Can we overcome the limited parallelism in the beginning?
- A simple message passing formulation is based on the recursive halving of the machine.
- Assume that each processor in the lower half of a $p$ processor ensemble is paired with a corresponding processor in the upper half.
- A designated processor selects and broadcasts the pivot.
- Each processor splits its local list into two lists: one with the elements smaller than the pivot ($L_i$), the other with the elements greater ($U_i$).
- A processor in the low half of the machine sends its list $U_i$ to the paired processor in the other half. The paired processor sends its list $L_i$.
After this step:
- all elements < pivot in the low half of the machine
- all elements > pivot in the high half.
The above process is recursed until each processor has its own local list, which is sorted locally.
The time for a single reorganization is $\Theta(\log p)$ for broadcasting the pivot element, $\Theta(n/p)$ for splitting the locally assigned portion of the array, $\Theta(n/p)$ for exchange and local reorganization.
Note that this time is identical to that of the corresponding shared address space formulation.
However, it is important to remember that the reorganization of elements is a bandwidth sensitive operation.
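As a rough illustration, the recursive halving scheme can be simulated sequentially in Python; lists stand in for per-processor memory, and the pairwise exchange is modelled by moving sublists (a sketch of the idea, not message-passing code):

```python
def recursive_halving_sort(blocks):
    """Simulate quicksort by recursive halving of the machine.

    blocks: list of p lists, one per 'processor' (p a power of two).
    Afterwards every element on processor i is <= those on processor i+1.
    """
    p = len(blocks)
    if p == 1:
        blocks[0].sort()                  # final local sort
        return blocks
    # A designated processor (here: 0) selects and broadcasts the pivot.
    pivot = blocks[0][len(blocks[0]) // 2] if blocks[0] else 0
    half = p // 2
    for lo in range(half):
        hi = lo + half                    # paired processor, upper half
        keep_lo = [x for x in blocks[lo] if x <= pivot]
        send_lo = [x for x in blocks[lo] if x > pivot]
        keep_hi = [x for x in blocks[hi] if x > pivot]
        send_hi = [x for x in blocks[hi] if x <= pivot]
        blocks[lo] = keep_lo + send_hi    # low half collects the L_i lists
        blocks[hi] = keep_hi + send_lo    # high half collects the U_i lists
    return (recursive_halving_sort(blocks[:half])
            + recursive_halving_sort(blocks[half:]))

print(recursive_halving_sort([[5, 1], [7, 3], [2, 8], [6, 4]]))
```

Running the example shows how a poorly chosen pivot leaves some "processors" with far more elements than others, which is exactly the load-imbalance issue discussed above.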
2. Parallel sort – shared memory
A list of size $n$ equally divided across $p$ processors.
A pivot is selected by one of the processors and made known to all processors.
Each processor partitions its list into two, say $L_i$ and $U_i$, based on the selected pivot.
All of the $L_i$ lists are merged and all of the $U_i$ lists are merged separately.
The set of processors is partitioned into two (in proportion of the size of lists $L$ and $U$). The process is recursively applied to each of the lists.
Remaining problem: global reorganization (merging) of local lists to form $L$ and $U$.
- The problem is one of determining the right location for each element in the merged list.
- Each processor computes the number of elements locally less than and greater than pivot.
- It computes two sum-scans (also called prefix sum) to determine the starting location for its elements in the merged $L$ and $U$ lists.
- Once it knows the starting locations, it can write its elements safely.
Scan operation
**Parallel prefix sum**: every node gets the sum of all previous nodes plus itself
Efficient global rearrangement of the array.
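A minimal Python sketch of how the prefix sum produces conflict-free write offsets for the merged $L$ list (the $U$ list is handled symmetrically; the per-processor counts are made up for illustration):

```python
from itertools import accumulate

# Number of elements smaller than the pivot on each of 4 processors.
less_counts = [3, 1, 4, 2]

# Exclusive prefix sum: the starting write offset of each processor's
# chunk in the merged L list.
offsets = [0] + list(accumulate(less_counts))[:-1]
print(offsets)  # [0, 3, 4, 8] -> every processor writes without conflicts
```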
The parallel time depends on the split and merge time, and the quality of the pivot.
The latter is an issue independent of parallelism, so we focus on the first aspect, assuming ideal pivot selection.
One iteration has four steps: (i) determine and broadcast the pivot; (ii) locally rearrange the array assigned to each process; (iii) determine the locations in the globally rearranged array that the local elements will go to; and (iv) perform the global rearrangement.
The first step takes time $\Theta(\log p)$, the second, $\Theta(n/p)$, the third, $\Theta(\log p)$, and the fourth, $\Theta(n/p)$.
The overall complexity of splitting an $n$-element array is $\Theta(n/p) + \Theta(\log p)$.
The process recurses until there are $p$ lists, at which point, the lists are sorted locally.
Therefore, the total parallel time is:
$$T_P = \Theta\left(\frac{n}{p} \log \frac{n}{p}\right) + \Theta\left(\frac{n}{p} \log p\right) + \Theta(\log^2 p).$$ \hspace{1cm} (4)
Alternative: PRAM Formulation
- We assume a **CRCW** (concurrent read, concurrent write) PRAM with concurrent writes resulting in an *arbitrary write succeeding* (!!).
- The formulation works by creating pools of processors. Every processor is assigned to the same pool initially and has one element.
- Each processor attempts to write its element to a common location (for the pool).
- Each processor tries to read back the location. If the value read back is greater than the processor's value, it assigns itself to the 'left' pool; otherwise, it assigns itself to the 'right' pool.
- Each pool performs this operation recursively, *in lockstep*.
- Note that the algorithm generates a tree of pivots. The depth of the tree is the expected parallel runtime. The average value is $O(\log n)$.
while (true) {
write value to pool
read pivot from pool
if (pivot == value)
break
else if (pivot < value)
pool = pool->left
else
pool = pool->right
}
Performed by all threads in lock-step
⇒ GPU:
Within warps OK, otherwise barrier
Parallel Quicksort: PRAM Formulation
A binary tree generated by the execution of the quicksort algorithm. Each level of the tree represents a different array-partitioning iteration.
If pivot selection is optimal, then the height of the tree is $\Theta(\log n)$, which is also the number of iterations. **Which is almost the ideal speedup!** Overhead = pivot selection.
The execution of the PRAM algorithm on the array shown in (a).
3. Sorting Networks
Mission
- Digital circuit that transforms an unsorted list (input) into a sorted list (output)
- Idea: parallel processing! By putting components in parallel *(width)!!*
- So: runtime is determined by *depth*
- Goal: minimal depth
A typical sorting network. Every sorting network is made up of a series of columns, and each column contains a number of comparators connected in parallel.
Sorting Networks
- Networks of comparators designed specifically for sorting (time $< \Theta(n \log n)$).
- Specific-designed parallel system.
- A comparator is a device with two inputs $x$ and $y$ and two outputs $x'$ and $y'$. For an increasing comparator, $x' = \min\{x, y\}$ and $y' = \max\{x, y\}$; and vice-versa for a decreasing comparator.
- We denote an increasing comparator by $\oplus$ and a decreasing comparator by $\ominus$.
- The speed of the network is proportional to its depth.
Basic component: Comparators
A schematic representation of comparators: (a) an increasing comparator, and (b) a decreasing comparator.
Best algorithm to hardwire?
- Can we sort $n$ elements in time $O(\log n)$?
- This would match the $O(n \log n)$ cost of quicksort
- Quicksort not possible: communication paths are not fixed
- Best known result: $O(n \log n)$ comparators, but with a very large constant factor (many thousands)
- Not practical
- **Bitonic sort** and **odd-even sort**: sort $n$ elements in time $O(\log^2 n)$
Sorting: Overview
- One of the most commonly used and well-studied kernels.
- Sorting can be *comparison-based* or *non-comparison-based*.
- Non-comparison-based: determine the rank (index) of each element in the list
- We focus here on comparison-based sorting algorithms.
- The fundamental operation of comparison-based sorting is *compare-exchange*.
- The lower bound on any comparison-based sort of $n$ numbers is $\Theta(n \log n)$, the quicksort performance.
A parallel compare-exchange operation. Processes $P_i$ and $P_j$ send their elements to each other. Process $P_i$ keeps $\min\{a_i, a_j\}$, and $P_j$ keeps $\max\{a_i, a_j\}$.
What is the parallel counterpart to a sequential comparator?
- If each processor has one element, the compare exchange operation can be done in $t_s + t_w$ time (startup latency and per-word time).
- If we have more than one element per processor, we call this operation a **compare split**. Assume each of the two processors has $n/p$ elements.
- After the compare-split operation, the smaller $n/p$ elements are at processor $P_i$ and the larger $n/p$ elements at $P_j$, where $i < j$.
- The time for a compare-split operation is $(t_s + t_w n/p)$, assuming that the two partial lists were initially sorted.
A compare-split operation. Each process sends its block of size $n/p$ to the other process. Each process merges the received block with its own block and retains only the appropriate half of the merged block. In this example, process $P_i$ retains the smaller elements and process $P_j$ retains the larger elements.
There are alternatives! With more communication, however…
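A compare-split step is easy to express in Python, assuming both blocks are already sorted as noted above:

```python
import heapq

def compare_split(block_i, block_j):
    """Compare-split between P_i and P_j (both blocks sorted, size n/p).

    P_i keeps the smaller half of the merged blocks, P_j the larger half.
    """
    merged = list(heapq.merge(block_i, block_j))     # linear-time merge
    k = len(block_i)
    return merged[:k], merged[k:]   # (new block of P_i, new block of P_j)

print(compare_split([1, 6, 8, 11], [2, 9, 12, 13]))
# ([1, 2, 6, 8], [9, 11, 12, 13])
```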
Sorting Networks: Odd-Even
Bubble Sort and its Variants
The sequential bubble sort algorithm compares and exchanges adjacent elements in the sequence to be sorted:
1. procedure BUBBLE_SORT(n)
2. begin
3.    for i := n − 1 downto 1 do
4.       for j := 1 to i do
5.          compare-exchange($a_j$, $a_{j+1}$);
6. end BUBBLE_SORT
Sequential bubble sort algorithm.
Bubble Sort and its Variants
- The complexity of bubble sort is $\Theta(n^2)$.
- Bubble sort is difficult to parallelize since the algorithm has no concurrency.
- A simple variant, though, uncovers the concurrency.
- Its complexity is worse than quicksort's, but its parallelization is more efficient.
Odd-Even Transposition
1. procedure ODD-EVEN(n)
2. begin
3. for $i := 1$ to $n$ do
4. begin
5. if $i$ is odd then
6. for $j := 0$ to $n/2 - 1$ do
7. compare-exchange($a_{2j+1}, a_{2j+2}$);
8. if $i$ is even then
9. for $j := 1$ to $n/2 - 1$ do
10. compare-exchange($a_{2j}, a_{2j+1}$);
11. end for
12. end ODD-EVEN
Sequential odd-even transposition sort algorithm.
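A direct, 0-indexed Python rendering of the same algorithm:

```python
def odd_even_sort(a):
    """Odd-even transposition sort: n phases of adjacent compare-exchanges."""
    n = len(a)
    for phase in range(1, n + 1):
        start = 0 if phase % 2 == 1 else 1  # odd phase: pairs (0,1),(2,3)...
        for j in range(start, n - 1, 2):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(odd_even_sort([3, 2, 3, 8, 5, 6, 4, 1]))  # [1, 2, 3, 3, 4, 5, 6, 8]
```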
Sorting \( n = 8 \) elements, using the odd-even transposition sort algorithm. During each phase, \( n = 8 \) elements are compared.
Odd-Even Transposition
- After \( n \) phases of odd-even exchanges, the sequence is sorted.
- Each phase of the algorithm (either odd or even) requires \( \Theta(n) \) comparisons.
- Serial complexity is \( \Theta(n^2) \).
- Parallel version can be implemented by 1 network which is used iteratively!
- **Conclusion:** very simple, but not the fastest
Consider the one item per processor case.
There are $n$ iterations, in each iteration, each processor does one compare-exchange.
The parallel run time of this formulation is $\Theta(n)$.
This is cost optimal with respect to the base serial algorithm but not to the optimal one ($\Theta(n \log n)$).
Parallel formulation of odd-even transposition.
1. procedure ODD-EVEN_PAR(n)
2. begin
3. id := process’s label
4. for i := 1 to n do
5. begin
6. if i is odd then
7. if id is odd then
8. compare-exchange_min(id + 1);
9. else
10. compare-exchange_max(id - 1);
11. if i is even then
12. if id is even then
13. compare-exchange_min(id + 1);
14. else
15. compare-exchange_max(id - 1);
16. end for
17. end ODD-EVEN_PAR
Parallel Odd-Even Transposition
- Consider a block of \( \frac{n}{p} \) elements per processor.
- The first step is a local sort.
- In each of the \( p \) subsequent steps, the compare-exchange operation is replaced by a compare-split operation (\( n/p \) comparisons).
- The parallel run time of the formulation is
\[
T_P = \Theta \left( \frac{n}{p} \log \frac{n}{p} \right) + \Theta(n) + \Theta(n).
\]
Sorting Networks: Bitonic
A bitonic sorting network sorts $n$ elements in $\Theta(\log^2 n)$ time.
A **bitonic sequence** has two tones - increasing and decreasing, or vice versa.
$\langle 1, 2, 4, 7, 6, 0 \rangle$ is a bitonic sequence, because it first increases and then decreases.
*Not important here*: Any cyclic rotation of a two-tone sequence is also considered bitonic. $\langle 8, 9, 2, 1, 0, 4 \rangle$ is another bitonic sequence, because it is a cyclic shift of $\langle 0, 4, 8, 9, 2, 1 \rangle$.
The kernel of the network is the rearrangement of a bitonic sequence into a sorted sequence.
Let $s = \langle a_0, a_1, ..., a_{n-1} \rangle$ be a bitonic sequence such that $a_0 \leq a_1 \leq \cdots \leq a_{n/2-1}$ and $a_{n/2} \geq a_{n/2+1} \geq \cdots \geq a_{n-1}$.
Consider the following subsequences of $s$:
$s_1 = \langle \min\{a_0, a_{n/2}\}, \min\{a_1, a_{n/2+1}\}, \ldots, \min\{a_{n/2-1}, a_{n-1}\} \rangle$
$s_2 = \langle \max\{a_0, a_{n/2}\}, \max\{a_1, a_{n/2+1}\}, \ldots, \max\{a_{n/2-1}, a_{n-1}\} \rangle$
$s_1$ and $s_2$ are both bitonic and each element of $s_1$ is less than every element in $s_2$.
We can apply the procedure recursively on $s_1$ and $s_2$ to get the sorted sequence.
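This split-and-recurse procedure translates directly into Python (a minimal sketch for power-of-two lengths):

```python
def bitonic_merge(s, ascending=True):
    """Turn a bitonic sequence s (length a power of two) into a sorted one."""
    n = len(s)
    if n == 1:
        return s
    half = n // 2
    for i in range(half):                    # one column of comparators
        if (s[i] > s[i + half]) == ascending:
            s[i], s[i + half] = s[i + half], s[i]
    # Both halves are now bitonic; for ascending order, every element of
    # the first half is <= every element of the second half.
    return (bitonic_merge(s[:half], ascending)
            + bitonic_merge(s[half:], ascending))

print(bitonic_merge([3, 5, 8, 9, 7, 4, 2, 1]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```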
Bitonic sort’s basic merge component
**Basic operation:** change a bitonic array into a sorted array.
For 16 elements this can be done in 4 steps.
<table>
<thead>
<tr>
<th>Original sequence</th>
<th>3</th>
<th>5</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>12</th>
<th>14</th>
<th>20</th>
<th>95</th>
<th>90</th>
<th>60</th>
<th>40</th>
<th>35</th>
<th>23</th>
<th>18</th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<td>1st Split</td>
<td>3</td>
<td>5</td>
<td>8</td>
<td>9</td>
<td>10</td>
<td>12</td>
<td>14</td>
<td>0</td>
<td>95</td>
<td>90</td>
<td>60</td>
<td>40</td>
<td>35</td>
<td>23</td>
<td>18</td>
<td>20</td>
</tr>
<tr>
<td>2nd Split</td>
<td>3</td>
<td>5</td>
<td>8</td>
<td>0</td>
<td>10</td>
<td>12</td>
<td>14</td>
<td>9</td>
<td>35</td>
<td>23</td>
<td>18</td>
<td>20</td>
<td>95</td>
<td>90</td>
<td>60</td>
<td>40</td>
</tr>
<tr>
<td>3rd Split</td>
<td>3</td>
<td>0</td>
<td>8</td>
<td>5</td>
<td>10</td>
<td>9</td>
<td>14</td>
<td>12</td>
<td>18</td>
<td>20</td>
<td>35</td>
<td>23</td>
<td>60</td>
<td>40</td>
<td>95</td>
<td>90</td>
</tr>
<tr>
<td>4th Split</td>
<td>0</td>
<td>3</td>
<td>5</td>
<td>8</td>
<td>9</td>
<td>10</td>
<td>12</td>
<td>14</td>
<td>18</td>
<td>20</td>
<td>23</td>
<td>35</td>
<td>40</td>
<td>60</td>
<td>90</td>
<td>95</td>
</tr>
</tbody>
</table>
Merging a 16-element bitonic sequence through a series of \( \log 16 = 4 \) bitonic splits.
The complete network will be based on this component.
We can easily build a sorting network to implement this bitonic merge algorithm.
Such a network is called a **bitonic merging network**.
The network contains $\log n$ columns. Each column contains $n/2$ comparators and performs one step of the bitonic merge.
We denote a bitonic merging network with $n$ inputs by $\oplus BM[n]$.
Replacing the $\oplus$ comparators by $\ominus$ comparators results in a decreasing output sequence; such a network is denoted by $\ominus BM[n]$.
A bitonic merging network for \( n = 16 \). The input wires are numbered \( 0, 1, \ldots, n - 1 \), and the binary representation of these numbers is shown. Each column of comparators is drawn separately; the entire figure represents a \( \oplus \text{BM}[16] \) bitonic merging network. The network takes a bitonic sequence and outputs it in sorted order.
How do we sort an unsorted sequence using a bitonic merge?
- We must first build a single bitonic sequence from the given sequence.
- A sequence of length 2 is a bitonic sequence.
- A bitonic sequence of length 4 can be built by sorting the first two elements using $\oplus BM[2]$ and the next two using $\ominus BM[2]$.
- This process can be repeated to generate larger bitonic sequences.
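Reusing `bitonic_merge` from the earlier sketch, the full construction fits in a few lines of Python:

```python
def bitonic_sort(s, ascending=True):
    """Sort s (length a power of two) by first building a bitonic sequence."""
    n = len(s)
    if n == 1:
        return s
    half = n // 2
    first = bitonic_sort(s[:half], True)     # sort first half increasing
    second = bitonic_sort(s[half:], False)   # sort second half decreasing
    return bitonic_merge(first + second, ascending)

print(bitonic_sort([10, 20, 5, 9, 3, 8, 12, 14]))
# [3, 5, 8, 9, 10, 12, 14, 20]
```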
A schematic representation of a network that converts an input sequence into a bitonic sequence. In this example, $\oplus BM[k]$ and $\ominus BM[k]$ denote bitonic merging networks of input size $k$ that use $\oplus$ and $\ominus$ comparators, respectively. The last merging network ($\oplus BM[16]$) sorts the input. In this example, $n = 16$.
The comparator network that transforms an input sequence of 16 unordered numbers into a bitonic sequence.
Sorting Networks: Bitonic Sort
- The depth of the network is $\Theta(\log^2 n)$.
- Each stage of the network contains $n/2$ comparators. A serial implementation of the network would have complexity $\Theta(n\log^2 n)$.
Mapping Bitonic Sort to Hypercubes
- Map on a general-purpose parallel computer.
- Consider the case of one item per processor. The question becomes one of how the wires in the bitonic network should be mapped to the hypercube interconnect.
- Note from our earlier examples that the compare-exchange operation is performed between two wires only if their labels differ in exactly one bit!
- A direct mapping of wires to processors; all communication is nearest neighbor!
Communication during the last stage of bitonic sort. Each wire is mapped to a hypercube process; each connection represents a compare-exchange between processes.
Mapping Bitonic Sort to Hypercubes
Communication characteristics of bitonic sort on a hypercube. During each stage of the algorithm, processes communicate along the dimensions shown.
Mapping Bitonic Sort to Hypercubes
1. procedure BITONIC_SORT(label, d)
2. begin
3.    for i := 0 to d − 1 do
4.       for j := i downto 0 do
5.          if the (i + 1)st bit of label ≠ the jth bit of label then
6.             comp_exchange_max(j);
7.          else
8.             comp_exchange_min(j);
9. end BITONIC_SORT
Parallel formulation of bitonic sort on a hypercube with $n = 2^d$ processes.
During each step of the algorithm, every process performs a compare-exchange operation (single nearest neighbor communication of one word).
Since each step takes $\Theta(1)$ time, the parallel time is
$$T_p = \Theta(\log^2 n)$$
This algorithm is cost optimal w.r.t. its serial counterpart, but not w.r.t. the best sorting algorithm ($\Theta(n \log n)$).
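The hypercube formulation can be simulated sequentially: each array index plays the role of a process label, and the partner of process `label` in step `j` is `label` with bit `j` flipped (a sketch of the communication pattern, not real message passing):

```python
def bitonic_sort_hypercube(a):
    """Simulate hypercube bitonic sort: len(a) = 2^d, one element per process."""
    n = len(a)
    d = n.bit_length() - 1
    for i in range(d):
        for j in range(i, -1, -1):
            for label in range(n):
                partner = label ^ (1 << j)   # neighbor across dimension j
                if partner > label:          # handle each pair once
                    # Bit (i+1) of the label selects the sort direction.
                    descending = (label >> (i + 1)) & 1
                    if (a[label] > a[partner]) != bool(descending):
                        a[label], a[partner] = a[partner], a[label]
    return a

print(bitonic_sort_hypercube([10, 20, 5, 9, 3, 8, 12, 14]))
# [3, 5, 8, 9, 10, 12, 14, 20]
```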
The connectivity of a mesh is lower than that of a hypercube, so we must expect some overhead in this mapping.
Consider the row-major shuffled mapping of wires to processors.
Different ways of mapping the input wires of the bitonic sorting network to a mesh of processes: (a) row-major mapping, (b) row-major snakelike mapping, and (c) row-major shuffled mapping.
The last stage of the bitonic sort algorithm for $n = 16$ on a mesh, using the *row-major shuffled mapping*. During each step, process pairs compare-exchange their elements. Arrows indicate the pairs of processes that perform compare-exchange operations.
Mapping Bitonic Sort to Meshes
- In the row-major shuffled mapping, wires that differ at the \( i^{th} \) least-significant bit are mapped onto mesh processes that are \( 2^{\lfloor (i-1)/2 \rfloor} \) communication links away.
- The total amount of communication performed by each process is:
\[
\sum_{i=1}^{\log n} \sum_{j=1}^{i} 2^{\lfloor (j-1)/2 \rfloor} \approx 7\sqrt{n}, \text{ or } \Theta(\sqrt{n})
\]
- The total computation performed by each process is \( \Theta(\log^2 n) \).
- The parallel runtime is:
\[
T_P = \Theta(\log^2 n) + \Theta(\sqrt{n}).
\]
- This is optimal for the mesh, but not cost optimal.
Each process is assigned a block of $n/p$ elements.
The first step is a local sort of the local block.
Each subsequent compare-exchange operation is replaced by a compare-split operation.
We can effectively view the bitonic network as having $(1 + \log p)(\log p)/2$ steps $\Rightarrow \Theta(\log^2 p)$.
Block of Elements Per Processor: Hypercube
- Initially the processes sort their \( n/p \) elements (using merge sort) in time \( \Theta((n/p)\log(n/p)) \) and then perform \( \Theta(\log^2 p) \) compare-split steps.
- The parallel run time of this formulation is
\[
T_P = \Theta\left(\frac{n}{p} \log \frac{n}{p}\right) + \Theta\left(\frac{n}{p} \log^2 p\right) + \Theta\left(\frac{n}{p} \log^2 p\right).
\]
Block of Elements Per Processor: Mesh
The parallel runtime in this case is given by:
\[ T_P = \Theta \left( \frac{n}{p} \log \frac{n}{p} \right) + \Theta \left( \frac{n}{p} \log^2 p \right) + \Theta \left( \frac{n}{\sqrt{p}} \right) \]
4. Parallel sort – GPU
Which algorithms on GPU?
- Quicksort: shared-memory formulation?
- Mergesort?
- PRAM formulation
- Odd-even transposition
- Bitonic sort
Introduction
Soft errors occur when high-energy charged particles alter the stored charge in a memory cell in an electronic circuit. The phenomenon first became an issue in DRAM, requiring error detection and correction for large memory systems in high-reliability applications. As device geometries have continued to shrink, the probability of soft errors in SRAM is becoming significant for some systems. Designers are using a variety of approaches to minimize the effects of soft errors on system behavior.
SRAM-based FPGAs store logic configuration data in SRAM cells. As the number and density of SRAM cells in an FPGA increase, the probability that a soft error will alter the programmed logical behavior of the system increases. The module described in this document provides a mechanism for detecting errors in configuration memory for the FPGA.
Core Description
The core consists of an access point to FPGA configuration memory, a controller circuit, and a personality ROM. As serial data is read from configuration memory, CRC is calculated over the bitstream. After roughly 21 data frames, the current CRC calculation is compared with the expected CRC result stored in the personality ROM. If the CRC values match, it indicates that there has been no configuration memory corruption. If the values differ, an error signal is generated along with the row number where the error occurred. This process is repeated until every bit of configuration memory is analyzed. In addition, CRC is checked over the contents of the personality ROM.
The Soft Error Detection Core utilizes a 17-bit CRC algorithm with the following attributes:
- 100% detection of single-bit errors
- 100% detection of two-bit errors within 10 adjacent frames
- 100% detection of multiple-bit errors within a 17-bit span
- \((1-2^{-17})\times100\%\) detection of random multiple-bit errors
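For illustration, a generic bit-serial CRC in Python is shown below; the 17-bit polynomial used here is a stand-in chosen for the example, since the core's actual generator polynomial is not documented in this text:

```python
def crc17(bits, poly=0x1685B):
    """Bit-serial CRC over an iterable of 0/1 bits.

    poly holds the low 17 coefficients of a degree-17 generator
    polynomial (illustrative only, not the SED core's polynomial).
    """
    reg = 0
    for bit in bits:
        msb = (reg >> 16) & 1
        reg = ((reg << 1) | bit) & 0x1FFFF   # keep a 17-bit register
        if msb:
            reg ^= poly
    return reg

# Any single-bit flip in the stream changes the CRC register, which is
# the property the core exploits to detect configuration corruption.
stream = [1, 0, 1, 1, 0, 0, 1, 0] * 4
flipped = stream[:]
flipped[5] ^= 1
assert crc17(stream) != crc17(flipped)
```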
Note that the expected CRC calculations are based upon the particular arrangement of configuration memory for a particular design. Consequently, the expected results cannot be specified until after the design is placed and routed. ispLEVER® bit generation software analyzes the configuration of a placed and routed design, and updates the personality ROM contents during bitstream generation.
Block Diagram
*Figure 1. Soft Error Detection IP Core Block Diagram*
## Signal Descriptions
<table>
<thead>
<tr>
<th>Name</th>
<th>I/O</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>reset_n</td>
<td>In</td>
<td>Active low reset. When asserted, resets all internal registers and state machines.</td>
</tr>
<tr>
<td>sed_clk</td>
<td>In</td>
<td>System clock. Maximum frequency is 200MHz. All inputs and outputs are referenced to the rising edge of the clock.</td>
</tr>
<tr>
<td>start</td>
<td>In</td>
<td>Active high control signal. When asserted, triggers the start of a soft error detect analysis cycle. Once the SED analysis starts, the state of the start signal is ignored. After the SED analysis completes and sed_done asserts, the core executes a 14 clock cycle wait state - and afterwards, the start signal state is monitored. Note that by tying start permanently high, the SED core executes continuous back-to-back SED analysis cycles.</td>
</tr>
<tr>
<td>sed_frcerr</td>
<td>In</td>
<td>Active high control signal. It provides a diagnostic capability that forces an error to occur in the analyzed FPGA array bitstream. When sed_frcerr is asserted while the SED analysis is in progress, a datastream error is inserted in the row specified by the sed_frcrow signal. In response to the forced error, the SED core should detect the error. Errors are reported on the signals: sed_err, sed_erow, sed_ercnt.</td>
</tr>
<tr>
<td>sed_frcrow[7:0]</td>
<td>In</td>
<td>This signal specifies an 8-bit numeric value that indicates the FPGA row on which a forced error is inserted (MSB=bit7). This signal works in concert with the sed_frcerr signal. Note that the maximum row number is different for each of the FPGA array sizes in the LatticeSC™ family (see table below). Specifying a sed_frcrow number beyond the range of the FPGA array size causes the SED core to fail to insert the requested forced error.</td>
</tr>
<tr>
<td>sed_data_out</td>
<td>Out</td>
<td>This signal displays the serial FPGA datastream analyzed by the SED core.</td>
</tr>
<tr>
<td>sed_start_out</td>
<td>Out</td>
<td>Active high signal that remains asserted throughout the entire SED analysis. The leading edge of the assertion begins 11 clock cycles after assertion of the start signal.</td>
</tr>
<tr>
<td>sed_inprog</td>
<td>Out</td>
<td>Active high signal that remains asserted throughout the entire SED analysis. The leading edge of the assertion begins when the core first detects the start sequence embedded in the serial FPGA datastream. The duration of the SED analysis is different for each of the FPGA array sizes in the LatticeSC family. See the table below for approximate duration intervals.</td>
</tr>
<tr>
<td>sed_done</td>
<td>Out</td>
<td>Active high signal that asserts when a SED analysis cycle ends. The signal deasserts when another SED analysis is started.</td>
</tr>
<tr>
<td>sed_err</td>
<td>Out</td>
<td>Active high signal that asserts when an error is detected during the SED analysis. sed_err is valid only when sed_done is asserted.</td>
</tr>
<tr>
<td>sed_erow[7:0]</td>
<td>Out</td>
<td>This signal specifies an 8-bit numeric value that indicates the FPGA row number associated with any detected SED errors. If errors are detected in multiple rows, this signal specifies the first row.</td>
</tr>
<tr>
<td>sed_ercnt[7:0]</td>
<td>Out</td>
<td>This signal specifies an 8-bit numeric value that indicates the number of FPGA rows that experienced errors during the SED analysis.</td>
</tr>
</tbody>
</table>
### LatticeSC Device Valid Row Numbers
<table>
<thead>
<tr>
<th>LatticeSC Device</th>
<th>Valid Row Numbers</th>
</tr>
</thead>
<tbody>
<tr>
<td>S15</td>
<td>0-45</td>
</tr>
<tr>
<td>S25</td>
<td>0-57</td>
</tr>
<tr>
<td>S40</td>
<td>0-71</td>
</tr>
<tr>
<td>S80</td>
<td>0-95</td>
</tr>
<tr>
<td>S115</td>
<td>0-117</td>
</tr>
</tbody>
</table>
### LatticeSC Device In Progress Duration (number of sed_clk cycles)
<table>
<thead>
<tr>
<th>LatticeSC Device</th>
<th>In Progress Duration</th>
</tr>
</thead>
<tbody>
<tr>
<td>S15</td>
<td>4,423,344</td>
</tr>
<tr>
<td>S25</td>
<td>7,686,464</td>
</tr>
<tr>
<td>S40</td>
<td>11,763,264</td>
</tr>
<tr>
<td>S80</td>
<td>22,250,944</td>
</tr>
<tr>
<td>S115</td>
<td>30,613,424</td>
</tr>
</tbody>
</table>
Core Operation
Overview
A typical SED analysis cycle is as follows:
- Trigger start of SED analysis
- Verify that SED analysis starts
- Wait for SED analysis to complete
- Check for errors at end of cycle
A typical SED timing diagram is shown below.
**Figure 2. SED Timing Diagram**

Starting a SED Analysis Cycle
A SED analysis is started by driving the start signal high. However start is not monitored while the analysis is in progress (sed_inprog asserted). It is also not monitored during the first 14 clock cycles after sed_done asserts. Therefore, an analysis cannot be started during these blind periods. A valid start signal can be as short as a single clock cycle, as long as it falls outside one of the blind periods.
Verifying that SED Analysis is Running
Eleven clock cycles after a valid start assertion, the sed_start_out signal asserts. Afterwards, the sed_inprog signal asserts when the readback datastream start sequence is detected. Both sed_start_out and sed_inprog remain asserted through the remainder of the SED analysis cycle.
Detecting the End of SED Analysis
The SED analysis is finished when the sed_done signal asserts. sed_done remains asserted until another analysis is started.
Detecting SED Errors
If a soft error is detected during the analysis, the sed_err signal is asserted. The sed_err signal is only valid while sed_done is also asserted. In addition, the sed_erow signal indicates the first array row that contains a soft error; the sed_ercnt signal indicates the total number of array rows that contained soft errors.
Forcing Errors
The sed_frcerr and sed_frcrow signals control forced soft error insertion. Driving sed_frcerr high enables error insertion. The sed_frcrow signal specifies the row on which the forced soft error is inserted. Both signals must remain asserted throughout the entire period that the SED analysis is in progress. Note that forcing a soft error does not actually modify the contents of the FPGA array. Instead, the error is only inserted in the analyzer circuit. Therefore, forcing soft errors will not cause a malfunction in your FPGA design.
Design Flow Overview
A typical design flow using the SED IP core is illustrated below. The following few paragraphs explain the steps in detail.
Figure 3. Design Flow
Core Generation
The SED IP core is available for download from the Lattice IP Server tab in the IPexpress main window. The IP files are automatically installed using ispUPDATE technology in any customer-specified directory.
The ispLEVER IPexpress GUI window for the SED IP core is shown in Figure 4. The procedure for generating the SED core follows:
- Launch IPexpress by clicking Start-> Programs-> LatticeSemiconductor-> Accessories-> IPExpress. The window shown in Figure 4 will be displayed
- Choose the SED IP Core by clicking **IP-> Soft Error Detect Core** in the left-hand pane of the IPexpress window.
- Choose the directory in which you want your IP core files to be generated by entering the appropriate path in the **Project Path** text box.
- Enter the name **sed_core** in the **File Name** text box. You must enter the name "sed_core"; otherwise, the IP core generation will fail.
- Select the desired HDL format (Verilog or VHDL) for your generated files by choosing the appropriate option in the **Design Entry** text box.
- Select the **Device Family**.
- Select the **Part Name** to be used for your design. It is important to choose the correct device type during core generation because operation of the SED core is tied to the array size. The SED core will not detect soft errors if it is downloaded into an array of a different size than the one for which the core was originally generated. Note that while array size is critical (for example, S25 vs. S40), package type is not: a core generated for one package type and downloaded into a different package type will still operate correctly as long as the array size is correct. If IPexpress is called from within an existing project, Project Path, Design Entry, Device Family and Part Name default to the specified project parameters. Please refer to the IPexpress on-line help for further information.
- Hit the **Customize** button. The window shown in Figure 5 will be displayed.
- Hit the **Generate** button in the new window. This will generate the IP core files. Close all windows when finished.
After clicking the Generate button, the configuration-specific IP core and supporting files are generated in the user's project directory. The directory structure of the generated files is shown in Figure 6.
**Figure 6. SED IP Core Generated Directory Structure**
The table below shows the basic files generated in Verilog format. Similar files are generated for VHDL format. These are all of the files necessary to implement and verify the SED IP core in a top-level design.
Table 1. SED IP Core Generated Files
<table>
<thead>
<tr>
<th>File Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Design Files</strong></td>
<td></td>
</tr>
<tr>
<td>sed_core_sc_1_001.ngo</td>
<td>Synthesized and mapped form of the SED IP core. This file must be included in the directory in which your overall top-level application design is mapped with ispLEVER.</td>
</tr>
<tr>
<td>sed_core_bb.v</td>
<td>Black box description of SED IP core. This file should be used to describe the SED core when synthesizing your top level application.</td>
</tr>
<tr>
<td>sed_core_inst.v</td>
<td>Example of SED IP core instantiation.</td>
</tr>
<tr>
<td>sed_core_sim.v</td>
<td>Behavioral simulation model for SED IP core.</td>
</tr>
<tr>
<td><strong>Miscellaneous Files</strong></td>
<td></td>
</tr>
<tr>
<td>sed_core.lpc</td>
<td>Parameters associated with your configuration choices for SED IP core.</td>
</tr>
<tr>
<td>sed_core_generate.log</td>
<td>Lists IPexpress messages generated during SED core file generation.</td>
</tr>
<tr>
<td>sed_core_filelist.log</td>
<td>List of generated SED IP core files.</td>
</tr>
</tbody>
</table>
The \sed_core_eval and subtending directories provide files supporting SED core evaluation. The \sed_core_eval directory shown in Figure 6 contains files/folders with content that is constant for all configurations of the SED core. The \sed_core_eval directory is created by IPexpress the first time the core is generated and is updated each time the core is regenerated.
**Instantiating the Core**
The generated SED IP core package includes black-box (sed_core_bb.v/sed_core_component.vhd) and instance (sed_core_inst.v/vhd) templates (Verilog or VHDL) that can be used to instantiate the core in a top-level design. An example RTL top-level reference file that can be used as an instantiation template for the IP core is provided in \sed_core_eval\src. Users may also use this top-level reference as the starting template for the top-level for their complete design.
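A minimal instantiation sketch follows. The port names are the signals described in this guide, but the widths, directions, and any clock or control ports are assumptions; the generated sed_core_inst.v template is the authoritative reference.

```verilog
// Hypothetical top-level wrapper around the generated SED core.
// Ports and widths are illustrative assumptions only.
module sed_top (
  input         sed_frcerr,  // force a soft error (test mode)
  input  [10:0] sed_frcrow,  // row on which the error is forced
  output        sed_done,    // analysis-complete flag
  output        sed_err,     // soft error detected (valid with sed_done)
  output [10:0] sed_erow,    // first row containing a soft error
  output [10:0] sed_ecnt     // number of rows containing soft errors
);

  sed_core u_sed_core (
    .sed_frcerr (sed_frcerr),
    .sed_frcrow (sed_frcrow),
    .sed_done   (sed_done),
    .sed_err    (sed_err),
    .sed_erow   (sed_erow),
    .sed_ecnt   (sed_ecnt)
  );

endmodule
```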
**Running Functional Simulation**
The generated IP core package includes a configuration-specific behavior model (sed_core_sim.v/vhd) for functional simulation. A top-level file supporting ModelSim® eval simulation is provided in \sed_core_eval\sim\modelsim.
ModelSim simulation is supported via testbench files provided in \sed_core_eval\testbench. Models required for simulation are provided in the corresponding \models folder.
Users may run the eval simulation by doing the following:
1. Open ModelSim
2. Under the **File** tab, select **New**, then **Project**
3. Set the project location to \sed_core_eval\sim\modelsim\verilog (or \vhdl)
4. Specify a **Project Name** (any name may be used)
5. Select **OK** to create the project
6. Close the **Add items to the Project** box without selecting anything
7. Execute the simulation do script sed_core_eval.do located in \sed_core_eval\sim\modelsim\verilog (or \vhdl)
**Synthesizing and Implementing the Core in a Top-Level Design**
The SED core itself is provided in NGO format when the core is generated. Users may synthesize the core in their own top-level design by instantiating the core in their top-level as described previously and then synthesizing the entire design with either Synplicity® or Precision® RTL Synthesis.
An example RTL top-level reference source file supporting SED core top-level synthesis and implementation is provided with the SED IP core in `sed_core_eval\src`. This reference design supports the ability to synthesize and map just the SED core itself. This design is intended to provide an example of how to instantiate the core in a top-level design and an accurate indication of the device utilization associated with the SED core itself.
Push-button implementation of the reference design is supported via the ispLEVER project file `sed_core_eval.syn` located in `sed_core_eval\impl`. To use this project file:
1. Select **Open Project** under the **File** tab in ispLEVER
2. Browse to `\sed_core_eval\impl` in the **Open Project** dialog box
3. Select and open `sed_core_eval.syn`. At this point, all of the files needed to support top-level synthesis and implementation will be imported to the project.
4. Implement the complete design via the standard ispLEVER GUI flow
**Hardware Evaluation**
Lattice’s IP hardware evaluation capability makes it possible to create versions of IP cores that operate in hardware for a limited period of time (approximately one hour) without requiring the purchase of an IP license. The hardware evaluation capability is enabled by including a specific attribute string in the IP top-level design.
For Verilog flows, the attribute string is included in the top-level module declaration as follows:
```verilog
module <top_design> (
  signal_1,
  signal_2,
  ...
  signal_n
)
/* <attribute string> */
;
```
The specific attribute string to be inserted is a function of the Lattice FPGA device family being targeted and the synthesis tool being used. When synthesizing the top-level with Synplify, the following attribute string should be included for LatticeSC devices:
```verilog
/* synthesis LSC_IP_SC_HT_SED="LSC_IP_sed_core_sc_ipe" */
```
When synthesizing with Precision RTL Synthesis, the following attribute string should be included:
```verilog
/* pragma attribute <top_design> LSC_IP_SC_HT_SED LSC_IP_sed_core_sc_ipe */
```
For VHDL flows, the attribute string is included in the architecture declaration as follows:
```vhdl
architecture <arch> of <top_design> is
attribute LSC_IP_SC_HT_SED : string;
attribute LSC_IP_SC_HT_SED of <arch> : architecture is "LSC_IP_sed_core_sc_ipe";
```
For VHDL flows the same attribute string is used for both Synplify and Precision RTL Synthesis.
The same string may be included in customer top-level designs to enable hardware evaluation of the SED IP core in customer-defined applications. When the attribute string is included in the top level of the design, it is possible to generate a programming file that may be downloaded into the device. After initialization, the IP core will be operational for approximately one hour. After one hour, the IP core will stop working and it will be necessary to reprogram the device to re-enable operation. This hardware evaluation capability is only enabled if the core has not been licensed. During implementation, a license check is performed. If a license is not detected, a pop-up window will be displayed indicating a license failure. Click **OK** in the window and generation will proceed to completion with hardware evaluation enabled. If a license is detected, no pop-up window is displayed and core generation is completed with no restrictions.
**References**
The following documents provide more information on implementing this core.
- ispLEVER Software User Manual
- ispLeverCORE™ IP Module Evaluation Tutorial available on the Lattice website at [www.latticesemi.com](http://www.latticesemi.com)
**Technical Support Assistance**
Hotline: 1-800-LATTICE (North America)
+1-503-268-8001 (Outside North America)
e-mail: techsupport@latticesemi.com
Internet: [www.latticesemi.com](http://www.latticesemi.com)
**Appendix for LatticeSC FPGAs**
Table 2. Performance and Resource Utilization
<table>
<thead>
<tr>
<th>Device</th>
<th>SLICEs</th>
<th>LUTs</th>
<th>Registers</th>
<th>External Pins</th>
<th>sysMEM™ EBRs</th>
<th>f_MAX (MHz)</th>
</tr>
</thead>
<tbody>
<tr>
<td>SC25</td>
<td>283</td>
<td>478</td>
<td>313</td>
<td>0</td>
<td>1</td>
<td>200</td>
</tr>
</tbody>
</table>
1. Performance and utilization characteristics were generated using Lattice’s ispLEVER 5.1 SP2 software with Synplify synthesis, targeting a LatticeSC LFSC3GA25EP1-5FF1020CES device. When using this IP core in a different density, speed, or grade within the LatticeSC family, or in a different software version, performance may vary.
Supplied Netlist Configurations
The Ordering Part Number (OPN) for the Soft Error Detection IP core targeting LatticeSC devices is SED-CORE-SC-U1. You can use the IPexpress software tool to help generate new configurations of this IP core. IPexpress is the Lattice IP configuration utility, and is included as a standard feature of the ispLEVER design tools. Details regarding the usage of IPexpress can be found in the IPexpress and ispLEVER help system. For more information on the ispLEVER design tools, visit the Lattice web site at: www.latticesemi.com/software.
SDfR - Service discovery for multi-robot systems
Stefan-Gabriel Chitic, Julien Ponge, Olivier Simonin
Abstract: Multi-robot systems require dedicated tools and models for their design and deployment. Our approach proposes a service-oriented architecture that can simplify development and deployment. To solve the problem of neighbor and service discovery in an ad-hoc network, each robot in the fleet needs a protocol that can constantly discover new robots in its coverage area. To this end we propose a robotic middleware, SDfR, that provides service discovery. The protocol is an extension of the Simple Service Discovery Protocol (SSDP) used in Universal Plug and Play (UPnP) to the dynamic networks generated by the mobility of the robots. Although SDfR is platform independent, we provide a ROS (ROS, 2014) integration to facilitate its use. We evaluate overhead through a series of benchmarks across static and dynamic scenarios, and we present use-cases where our proposal was successfully tested.
1 INTRODUCTION
Nowadays more and more robotic systems tend to be composed of several robots moving and cooperating, generally called fleets of robots. They are able to perform one or multiple tasks together and to share information about complex missions.
One of the problems in developing multi-robot applications is the communication inside the fleet. The robots need to know the reachable peers in any type of communication infrastructure. In an ad-hoc network, this problem becomes challenging because the robots are highly mobile. A fleet can easily partition or merge with another fleet. Also single robots can become isolated or join an existing fleet.
There is a great need for a suitable middleware that can abstract the neighbor and service discovery layer and offer high-level application programming interfaces (APIs) to robotic applications. The challenge is twofold: first, to provide an adapted system that offers a suitable mechanism for network configuration; second, to maintain a list of connected peers and their services in a highly mobile scenario.
In this paper we propose a new service discovery middleware, called SDfR (Service Discovery for Robots), an extension of the Simple Service Discovery Protocol (SSDP) used in Universal Plug and Play (UPnP). SDfR is able to make an auto-configuration for the connectivity to the fleet ad-hoc network, discover reachable neighbors and their services. We also provide a ROS integration node, simplifying the usage of SDfR.
The paper is structured as follows: Section 2 presents the background of service orientated architecture and discovery in multi-robot systems, Section 3 discusses the service discovery approaches in middleware for robotics, Section 4 presents our proposal for service discovery in a fleet context, Section 5 presents the architecture and the implementation of SDfR, Section 6 presents the main results of overhead benchmarking and Section 7 describes a couple of use-cases. Section 8 concludes the paper.
2 MOTIVATION
One problem of robotic fleets is how functionalities can be applied to make the collaboration between peers easier. Robots in fleet system that perform a mission together need to communicate with other peers. The communication can be done using a centralized node or directly using an ad-hoc network.
**Robotic applications as services** In our vision, robots need to advertise their functionalities as services in order to allow other members of the fleet to interact with them. In network-based applications, service-oriented programming is now a largely accepted principle (Issarny et al., 2011).
Service-oriented architecture greatly simplifies the implementation of highly-adaptive, constantly-evolving applications (Frénot et al., 2010). It also reduces the process of developing and deploying new robotic applications. This architecture is very suitable to quickly cope with new developing models and requirements.
In order to increase the mobility of the robots and to distribute the communication without a central node, the communication needs to be decentralized using ad-hoc networks. In this case, the robots do not have any image of their neighbors. Furthermore, the communication across peers is susceptible to route changes, and different peers can be used to relay a data package. The ad-hoc network becomes the sum of peer-to-peer networks of at least two robots each.
**Fleet service discovery** In order to solve the problem of neighbor and service discovery in an ad-hoc network, a fleet robot needs a protocol that is able to constantly discover new robots in its coverage area, while maintaining a neighbor connectivity quality indicator. Since there is no central node that can manage IP address allocation, the protocol should be able to negotiate an IP address inside the network and to have a conflict management mechanism in case of an IP collision. Our work focuses on ad-hoc multi-robot systems communication, especially on a family of middleware that accelerates the development and the deployment of new robotic applications.
3 RELATED WORK
In this section we present some of the major middlewares in robotics as well as some of existing service discovery protocols used in the field.
3.1 Middleware for robotics
All aspects of communication, application deployment and configuration can be facilitated using a proper middleware. The biggest difference between a classical middleware that runs in a cloud centralized infrastructure and a robotic one, is the mobility of the fleet and the decentralization of its components. An exhaustive survey of the existing middlewares for robot contexts is clearly impossible because of the large number of existing middleware and frequent releases of new ones. A detailed survey can be found in (Chitic et al., 2014). Most of the middlewares are designed for mono-robot with little applicability in multi-robot systems.
One of the first middlewares that emerged for robotics is the Player/Stage project (Kranz et al., 2006), which is designed to provide an infrastructure, drivers and a collection of dynamically loaded device-shared libraries for robotic applications. It offers neither a service-oriented architecture nor a discovery mechanism for neighbors' services.
Another highly used middleware for robotics is ROS (Robot operating system). It is a recent flexible middleware for robot applications (Quigley et al., 2009; Cousins et al., 2010; ROS, 2014). It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. It provides hardware abstraction, device drivers, visualizers, message-passing, package management.
ROS (ROS, 2014) seems to be the emerging mono-robot middleware with the most potential to become the most used framework for robotic fleets. It has no multi-robot coordination system and no automated testing environment, but it already has the advantage of a large community that develops new packages for it.
To the best of our knowledge, none of the existing middleware for robots provides a service discovery mechanism. ROS internal repository is represented as an internal node designed for internal service discovery, but it does not support neighbors discovery.
3.2 Service discovery and robotics
Classical protocols and middlewares for service discovery in distributed environments like data-grids, clouds or even smart environments have a centralized or decentralized registry that manages service description. Decentralized systems (e.g UPnP (Ahn et al., 2005), Jini (Pereira et al., 2011) or SLP (Romero et al., 2010)) can be a purely distributed solution where each node stores its own service repositories or a hybrid solution that includes super-nodes that aggregate information from other peers.
Another way to see a fleet of robots is as a service-oriented multi-agent system. Environments like Peer-to-Peer (P2P), Multi-Agent Systems (MAS) or Service-Oriented Environments (SOE) tend to approach the problem of service discovery in a centralized, distributed or decentralized way. Centralized mechanisms like super-peers (Gummadi et al., 2002), middle-agents (Klusch et al., 2006) or central registries (Rompothong and Senivongse, 2003) are limited in the number of agents in the system and in the number of requests. They also use a centralized node, which can have a serious impact if the central point becomes unreachable. Distributed approaches such as Distributed Hash Tables (DHT) (Maymounkov and Mazieres, 2002) offer more scalability and robustness by having multiple specific nodes that can manage the resources. Decentralized systems consider all the nodes to be equal. This approach provides more flexibility, but it has its downsides, since each node has only a partial view of the entire system. As mentioned in (del Val et al., 2014), an interesting way to discover services inside a decentralized and self-organized multi-agent system is to use homogeneity between agents.
In the robotic world, an approach for service discovery in centralized networks is to use classical Universal Plug and Play (UPnP) protocol. Since the concept of having the robotic tasks and processes as services is not mature yet, the main focus of research on service discovery in robotics is oriented towards the integration with the environment, like smart homes or smart cities. Reference (Borja et al., 2013) provides a case study of integration of service robots and smart-homes via UPnP. In these cases, the authors are not referring to a robot as part of a specific fleet, but as part of an environment, in which the robot is considered as an entity that can offer services.
The solutions presented above have their downsides when applied to ad-hoc multi-robot systems. Firstly, due to the mobility of the robots, the network connection is highly unstable and robots can disconnect and reconnect very often. Existing protocols do not perform the same way in a highly dynamic environment and in a static one. As mentioned in (Issarny et al., 2011), the challenge is to set the trade-off between physical mobility and scalability. Secondly, existing protocols are not very adaptive. The discovery protocol should be ready to be used at any time and track its usage and failures. Existing protocols like UPnP have a limited memory factor and when the connection is timed-out, the discovery process reinitializes itself at reconnection.
The next sections present our extension of SSDP, used in UPnP, in order to overcome the limitations of existing service discovery approaches.
4 SERVICE DISCOVERY FOR ROBOTICS
Our main goal is to propose a mechanism that allows highly mobile robots to keep track of the reachable peers inside a fleet while using an ad-hoc infrastructure. This mechanism should also be able to provide a list of services available on each peer. Another objective is to propose a network configuration negotiation protocol. Due to the mobility of robots, classical peer to peer network configuration techniques are not suitable.
In this section we present the general description of our service discovery protocol for robotic applications, called SDfR (Service Discovery for Robots).
4.1 SDfR vs SSDP
We propose a protocol that does not flood the network and provides a built-in already-seen memory feature. The SDfR protocol aims to be a dynamic, adaptive and scalable protocol. For this purpose we based our approach on the Simple Service Discovery Protocol (SSDP) (Goland et al.) from UPnP. SDfR also aims to provide service discovery integrated with the smart environment in which the robots are deployed.
In order to limit the network usage of the service discovery process, SDfR sends most of the internal messages in multicast, avoiding the overhead generated by unicast transmission of the same message. In addition, to avoid failure in case of a disconnection due to the movement of the robots outside the coverage area, all the communications are done using UDP (User Datagram Protocol). Furthermore, to limit network flooding when the protocol needs information from just one robot, a second transmission mode is enabled in unicast. SDfR does not need to reinitialize the entire discovery protocol when the connection is lost, because it disposes of a history map of all the already seen robots and their services.
In order to avoid services that are out of reach (e.g. service of robots that are present in the history map but are not present in the covered communication area), a connection indicator is computed for each robot represented by the success rate of pinging the connected peers.
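A sketch of such an indicator follows; the sliding-window size and the API are assumptions, since the paper only specifies that the indicator is the success rate of pinging connected peers.

```python
# Hypothetical per-peer connection-quality indicator: the fraction of
# recent pings to the peer that succeeded, over a sliding window.
from collections import deque

class PeerQuality:
    def __init__(self, window: int = 20):
        self.results = deque(maxlen=window)  # recent ping outcomes

    def record(self, success: bool) -> None:
        self.results.append(success)

    def quality(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0
```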
The main similarities between SSDP and SDfR are:
- **Multicast transmissions** In order to avoid the overhead of retransmitting the same unicast message, most of the internal messages are multicast.
- **HTTP-style messages** The messages that are being sent use an HTTP-style structure composed of headers and a body.
The main differences between SSDP and SDfR are:
- **Limited multicast transmissions** To avoid failure in case of a disconnection due to the movement of the robots outside the coverage area, all the communications in SDfR are done using UDP (User Datagram Protocol) and only in request mode.
- **Unicast transmissions** To limit the network flooding when the protocol needs information from just one robot, a second transmission mechanism is enabled in unicast mode in SDfR.
- **History map** SDfR does not need to reinitialize the entire discovery protocol when the connection is lost, because it disposes of a history map of all the already seen robots and their services. In order to avoid services that are out of reach (e.g., services of robots that are present in the history map but not in the covered communication area), a connection indicator is computed for each robot.
4.2 Protocol Description
We designed our protocol as a finite state timed automaton. A detailed description of the design process and the model validation is out of the scope of this paper.
Each protocol method has at least one type of message that resides inside the request payload. A sample execution of the communications is illustrated in Figure 1. Two methods representing the desired action of a request are used in SDfR: **M-SEARCH** and **NOTIFY**.
The **M-SEARCH** method is used for discovery requests to get the list of nearby members and their services. The message type associated with this method is **Discovery**.
The other method, **NOTIFY** is used to respond to a **Discovery** request or to inform the others about changes in the current state of the robot. The message types associated with this method are: **Update**, **Alive** and **Byebye**.
- **The Update** message is sent as a response to a **Discovery** request or when the current services or capacities of the robot change.
- **The Alive** message is sent recurrently, as a beacon, in order to inform the others about the presence of the robot. The beacon sending rate can be set depending on the services need. The default value is 10s.
- **The Byebye** message is sent when the robot stops gracefully, in order to inform the others about its disappearance.
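To make the HTTP-style layout concrete, a Discovery request could look like the following sketch. The first three lines follow the standard SSDP M-SEARCH format; the ST search-target value is a hypothetical placeholder, since the paper does not reproduce its exact headers.

```
M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
ST: sdfr:all
```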
5 ARCHITECTURE OF SDfR
In this section we zoom in on the architecture and features embedded in SDfR by presenting its implementation as a service.
5.1 SDfR Architecture
Service Discovery for Robots is developed as a service itself. The service-oriented architecture approach for robotic software development is not very widely used in the robotic community. The practice in this community is to develop built-in libraries in order to extend software features.
**Figure 1: Sample execution of SDfR communications**
The main advantage of having a service-oriented architecture is the compatibility with other robotic services developed in different programming languages and running over different operating systems. This is a critical feature for a heterogeneous robotic fleet.
Furthermore, the SDfR service can run separately from the other processes on the robot, and all the messages are consumed by instances of the service on multiple robots. If it fails, it does not affect the other services running on the robot. This sand-boxing also ensures that the information sent by the protocol is not corrupted by any third parties.
SDfR service is composed of two layers as shown in Figure 2: an API layer that communicates with other services and a Discovery Protocol layer. Each layer has an independent life-cycle and communicates internally via a shared memory. The API layer responds to requests independently from the lower layer, using the information from the shared memory. The lower discovery protocol layer is in charge of communication with the other SDfR nodes on an elastic number of robots in order to discover the reachable peers and their services.
**RESTful communication API** A Representational State Transfer (Fielding, 2000) (RESTful) web service was chosen for the API. It lets other services communicate with the SDfR service because it is based on normal HTTP requests, which enables intent to be inferred from the type of request being made, and it is completely stateless. All the responses are JSON messages.
**Ad-hoc configuration management** Since the fleet is operating in an ad-hoc infrastructure, the peers need to be able to negotiate and auto-configure their network configuration. A robotic fleet ad-hoc network is different from a classic ad-hoc hot-spot because the robots are highly dynamic and the network can be partitioned or merged. The mobility of the peers needs to be taken into consideration in the negotiation protocol of the configuration. The SDfR service, based on a simple configuration file, is able to automatically connect to an ad-hoc network. The secured WiFi SSID is composed using the fleet id. This mechanism makes it possible to have multiple fleets of robots in the same network space. Moreover, the robots can auto-assign IP addresses. The standard network space is 10.<fleet id>.<x>.<y>, where x and y are computed by each robot from its internal MAC address in order to avoid IP conflicts (Thomson et al., 2007). Furthermore, if an IP conflict happens, the service has a mechanism to trigger an IP change on the robots. This mechanism is available at all times, since an IP conflict can be triggered by a merge of two subnetworks.
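As an illustration, here is a minimal sketch of this address scheme; which MAC bytes feed x and y is an assumption, since the paper only states that they are derived from the MAC address.

```python
# Hypothetical sketch of SDfR-style IP auto-assignment in the
# 10.<fleet id>.<x>.<y> space. The choice of the last two MAC bytes
# for the host part is an illustrative assumption.
def auto_ip(fleet_id: int, mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    x, y = octets[-2], octets[-1]  # host part from the last two MAC bytes
    return f"10.{fleet_id}.{x}.{y}"

print(auto_ip(7, "08:00:27:4e:66:a1"))  # -> 10.7.102.161
```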
5.2 ROS integration
In order to make the SDfR service user friendly, we have created a ROS node that communicates with the SDfR service and can be used by other nodes via topics and services. When the node starts, it launches an instance of SDfR if one is not running, and then it provides support for other ROS nodes to publish or unpublish their services and their capacities. Furthermore, our ROS node provides the neighbors' list of services and allows other ROS applications to search for a specific service with a specific configuration.
A producer node can publish its services or capacities in an asynchronous way using ROS topics, because we consider that registration is not tightly time-bound. The same concept applies to the unpublish commands and to getting the list of neighbors. On the other hand, the search command for a specific neighbor and its services needs to be done in a synchronous way using ROS services, because the behavior of the consumer node depends on it.
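A minimal sketch of a producer node in this style is shown below; the topic name and message payload are hypothetical, since the paper does not list the actual topic and service names.

```python
#!/usr/bin/env python
# Hypothetical rospy producer registering a service with the SDfR node
# via an asynchronous topic. Names are illustrative assumptions only.
import rospy
from std_msgs.msg import String

rospy.init_node("camera_service")
pub = rospy.Publisher("/sdfr/publish", String, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect
pub.publish(String(data="camera:v4l2:/dev/video0"))  # advertise the service
rospy.spin()
```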
6 EVALUATION
In the evaluation we want to measure the overhead generated by the usage of SDfR in a robotic fleet context. Another objective is to see the impact of using a plain-text protocol in the upper and lower layers of the SDfR service. In this section we present the main results of the resource overhead of SDfR.
SDfR was benchmarked in two types of contexts:
- a static scenario where the robots do not move to evaluate the overhead in an ideal Wi-Fi communication scheme
- a real dynamic scenario (see Figure 3) where robots are moving and transmissions can drop.
In both scenarios all peers should discover their neighbors; in the second one, neighbor discovery depends on the distance between peers.
The benchmarks were performed on Turtlebot 2 robots equipped with an Intel Core 2 Duo 2.1 GHz CPU and 4 GB of RAM, running Ubuntu 13.04.
Each test run was given 5 minutes to collect the data. In our test runs we used simulated services that try to register/subscribe into SDfR. We simulate three type of actions:
1. **Publish.** We simulate new service providers that try to publish with a delay time of 10 seconds. In order to simulate publishers, we used an Apache server on each robot that responds to the auto-discovery URL of each publisher.
2. **Unpublish.** Each of the already published service providers could be unpublished with a random delay of up to 5 seconds.
3. **Subscribe.** We generate separate threads for each consumer. Each thread constantly requests the table of neighbors from SDfR, in order to stress our protocol as much as possible.
In the static scenario we considered different numbers of robots: 2, 4, 6. Each robot had a total number of service providers and service consumers equal to 100 simulated services. For each number of robots we used different ratios between providers and consumers: 30%, 50% and 70% publishers.
In the dynamic test-case we used, for each variant of SDfR, 6 robots with 100 simulated services per robot. 70% of the services on each robot were publishers. The robots moved randomly in a 200 square meter room with poles and other obstacles. The room (see Figure 3) was exposed to Wi-Fi interference from other networks occupying all of the 2.4 GHz channels.

**Figure 3:** The dynamic scenario: robots moving in a room with poles and other obstacles.

**Figure 4:** Average request time in SDfR.
6.1 Overhead evaluation
A robotic application that provides a service for the fleet needs to register with SDfR. This must be done as fast as possible in order to avoid blocking the service when it starts. Another important overhead measure is the time of unregister request. This happens whenever a producer wants to remove itself from the SDfR registry. One of the most important metric for robotics application from a latency point of view is the time to request the list of neighbors and their services. In a real scenario, a producer registers once for its life-time cycle, but a consumer may request multiple times the list of reachable neighbors and services. The subscription response time can have an impact on the time to complete a fleet task.
Figure 4 reports the time consumed by a service to publish or unpublish its service, or to get the neighbors' services, in the static and dynamic scenarios. In the dynamic scenario (Figure 4), the latency is higher, but the difference from the static scenario is less than 30%. This can be explained by the increase in computation load on the robots generated by the mobility of the fleet. All the request times remain within a certain variation interval even as the number of robots increases. The publish request takes from 95 ms to 130 ms, the unpublish request varies between 98 ms and 122 ms, and the subscribe request takes less than 5 ms.

**Figure 5:** CPU and Memory usage.
6.2 CPU and Memory usage evaluation
In robotic applications, the computation power is critical. Fleets of robots are heterogeneous and can include different types of robots with different computational capacities. Having a discovery service with low CPU consumption benefits the other processes involved in performing the fleet mission. Besides the CPU usage, another critical resource in fleets of robots is memory. As an example, robotic fleets may include visual sensors like 3D cameras, which are in high demand of memory. A service discovery protocol needs to have a low usage of the robot's memory.
Figure 5 presents our results for the percentage of CPU and memory used. CPU consumption varies between 2.2% and 2.8%. We observe that the CPU usage is higher in the dynamic scenario because CPU time is consumed by mobility management, leaving fewer CPU slots for other processes. We can also say that the CPU usage increases with the number of robots in the fleet, at a rate of at most 1% per robot. The memory usage is less than 0.3% (out of 4 GB) and seems to remain constant even as the number of robots increases.
6.3 Latency evaluation
In a fleet context the communications between peers in an ad-hoc network are very sensitive. The transmissions can be unreliable due to the mobility of the robots. This is why the network overhead needs to be as limited as possible in order to allow services to exchange information. Our measurements include the average of transmitted and received bytes per robot in a test run of 5 minutes. To increase the quality of our measurements we use intermediary check-points for each metric every 10 seconds.

Figure 6: Transmitted and Received bytes per robot in 5 min.
Figure 6 presents the quantity of transmitted (TX) and received (RX) kilobytes per robot. The number of bytes varies from 220 kilobytes to 320 kilobytes for the transmitted bytes and from 540 kilobytes to 850 kilobytes for received bytes. This remains very limited considering the time of 5 minutes.
**Conclusion** Our results show that SDfR is a low-latency brick, with a small overhead on CPU and memory and low network consumption. Moreover, increasing the number of robots in a fleet does not have a big impact on SDfR's overhead. Being integrated with ROS makes SDfR a good asset for managing neighbors and services in a multi-robot context.
7 USE-CASES
This section presents use-cases where SDfR and a service-oriented architecture were applied. These two use-cases are being studied in an ongoing project: ‘CROME - Multi-view multi-robot scene understanding and fleet coordination’\(^1\). But the applicability of SOA and SDfR is larger than the examples presented below.
In general, we can use services in any robotic application that needs to perform one or multiple tasks at once by having multiple nodes that manage different parts of the hardware layers. In a multi-robot context, SDfR can be used to simply discover the peers in our communication zone and avoid overloading the network with broadcasts or messages to unreachable robots. Furthermore, SDfR provides a list of available services on each peer, so a robot can decide whether it can rely on a peer to perform a task together.
7.1 Peer-to-Peer monitoring in a fleet
This scenario consists in having a fleet of robots performing a mission in a decentralized network. The environment is big enough to make direct communication between all peers impossible.
In these conditions, a monitoring station cannot have a global image of what is happening in the fleet. In order to provide the monitor with information about each robot's status, status messages and alerts need to be routed in a peer-to-peer fashion to the monitoring station.
In this case, on each robot inside the fleet, services are represented by the nodes that manage the hardware elements of each robot and that expose their status. In addition, a peer to peer monitoring service is present. It collects the data from other nodes and analyzes it in order to trigger alarms if necessary.
SDfR is used by all the hardware nodes to register in order to be discovered by the monitoring service. The monitoring service first registers itself into SDfR and then constantly gets the list of available services on the robot. In addition, this service uses SDfR to discover the peers that are in the communication area.
The peer-to-peer monitoring service searches for neighbors running an instance of this same service. If it finds one, it routes all the data from its nodes to the neighbor over a stateless connection. It also tries to build a stateful communication route so that alerts reach the monitoring station.
7.2 Complex scene observation by robotic fleet
This scenario consists in having a fleet of robots observing a complex scene using multiple sensors in order to obtain a better observation (Matignon et al., 2015). Combining the perception of each robot, the fleet has a better vision of the scene. The robots need to move in order to maximize the observation of the scene. In a particular case, the fleet observes the pose of a human and his joints.
In this case, on each robot we have a node that gets the sensor perception, a node that manages the movement of the robot, and an upper AI-layer service that decides if the robot needs to move in order to maximize the vision.

\(^1\)http://youtu.be/5kmxIgLeerQ
SDfR is used by the AI layer to discover the movement node and the perception node. Furthermore, SDfR is used to search for the other reachable peers in the fleet. Using SDfR allows an elastic number of robots to participate in the fleet in this type of mission.
8 CONCLUSION
In this paper we presented the challenges of making a service discovery protocol for robot fleet systems. We discussed the limited applicability of existing service discovery protocols in the context of robot fleets, and then we proposed a new protocol called SDfR that is suitable for service discovery inside an ad-hoc networked fleet.
The evaluation results show that the SDfR API generates small latency and the SDfR service has a small impact on the CPU and memory used. Furthermore, SDfR has low bandwidth consumption in both static and dynamic scenarios. Based on our benchmarking, we believe that SDfR can provide neighbor and service discovery for multi-robot applications without stressing the system. The ROS integration and the ad-hoc network auto-connect feature facilitate the usage of SDfR.

We intend to further use SDfR in our ongoing effort to improve the families of middleware that facilitate the development of multi-robot systems. SDfR can be used as part of bigger systems, such as a remote application deployment environment in an ad-hoc robotic network where robots can be dynamically provisioned with new applications while remaining operational. SDfR can also be used to monitor the running services on each robot. We are also interested in opportunities for SDfR not just in robotics but also in other fields like mobile wireless sensor networks.
REFERENCES
Goland, Y. Y., Cai, T., Leach, P., and Albright, S. Simple Service Discovery Protocol. IETF Internet-Draft.
On the Logical Foundations of Staged Computation
Frank Pfenning
PEPM’00, Boston, MA
January 22, 2000
1. Introduction
2. Judgments and Propositions
3. Intensional Types
4. Run-Time Code Generation
5. The PML Compiler
6. Conclusion
Terminology
• **Staged Computation**: explicit division of a computation into stages. Used in algorithm derivation and program optimization.
• **Partial Evaluation**: (static) specialization of a program based on partial input data.
• **Run-Time Code Generation**: dynamic generation of code during the evaluation of a program.
Intensionality
- Staged computation is concerned with how a value is computed.
- Staging is an intensional property of a program.
- Most research has been motivated operationally.
- This talk: a logical way to understand staging which is consistent with the operational intuition.
[Davies & Pf. POPL’96] [Davies & Pf.’99]
Logical Foundations for Computation
- Specifications as Propositions as Types
- Implementations as Proofs as Programs
- Computations as Reductions as Evaluations
- Augmented by recursion, exceptions, effects, …
Judgments and Propositions [Martin-Löf]
- A *judgment* is an object of knowledge.
- An *evident judgment* is something we know.
- The meaning of a *proposition* \( A \) is given by what counts as a verification of \( A \).
- \( A \) is *true* if there is a proof \( M \) of \( A \).
- Basic judgment: \( M : A \).
Parametric and Hypothetical Judgments
- Parametric and hypothetical judgments
\[
\underbrace{x_1{:}A_1, \ldots, x_n{:}A_n}_{\Gamma} \vdash M : A
\]
- Meaning given by substitution
\[
\text{If } \Gamma, x:A \vdash N : C \\
\text{and } \Gamma \vdash M : A \\
\text{then } \Gamma \vdash [M/x]N : C
\]
- Order in \( \Gamma \) irrelevant, satisfies weakening and contraction.
- Hypothesis or variable rule
\[
\frac{}{\Gamma, x:A \vdash x : A} \text{ var}
\]
Implication and Function Types
- Reflecting a hypothetical judgment as a proposition.
\[
\frac{\Gamma, x{:}A \vdash M : B}{\Gamma \vdash \lambda x{:}A.\ M : A \rightarrow B}\ {\rightarrow}I
\qquad
\frac{\Gamma \vdash M : A \rightarrow B \qquad \Gamma \vdash N : A}{\Gamma \vdash M\ N : B}\ {\rightarrow}E
\]
- How do we know these rules are consistent?
- Martin-Löf’s meaning explanation.
- Summarize as local soundness and completeness.
Local Soundness
- **Local soundness**: the elimination rules are not too strong.
- An introduction rule followed by any elimination rule does not lead to new knowledge.
- Witnessed by *local reduction*
\[
\frac{\dfrac{\dfrac{\mathcal{D}}{\Gamma, x{:}A \vdash M : B}}{\Gamma \vdash (\lambda x{:}A.\ M) : A \rightarrow B}\ {\rightarrow}I \qquad \dfrac{\mathcal{E}}{\Gamma \vdash N : A}}{\Gamma \vdash (\lambda x{:}A.\ M)\ N : B}\ {\rightarrow}E
\quad \Longrightarrow_R \quad
\dfrac{\mathcal{D}'}{\Gamma \vdash [N/x]M : B}
\]
- \( \mathcal{D}' \) exists by the substitution property of hypothetical judgments.
Local Completeness
- **Local completeness**: the elimination rules are not too weak.
- We can apply the elimination rules in such a way that a derivation of the original judgment can be reconstituted from the results.
- Witnessed by *local expansion*
\[
\dfrac{\mathcal{D}}{\Gamma \vdash M : A \rightarrow B}
\quad \Longrightarrow_E \quad
\dfrac{\dfrac{\dfrac{\mathcal{D}'}{\Gamma, x{:}A \vdash M : A \rightarrow B} \qquad \dfrac{}{\Gamma, x{:}A \vdash x : A}\ \text{var}}{\Gamma, x{:}A \vdash M\ x : B}\ {\rightarrow}E}{\Gamma \vdash (\lambda x{:}A.\ M\ x) : A \rightarrow B}\ {\rightarrow}I
\]

- \( \mathcal{D}' \) exists by weakening.
Reduction and Evaluation
- Reduction: \((\lambda x:A. M) N \overset{R}{\Rightarrow} [N/x]M\) at any subterm.
- Local soundness means reduction preserves types.
- Evaluation = reduction + strategy (here: call-by-value)
\[
\text{Values } V ::= \lambda x{:}A.\ M \mid \ldots
\]
\[
\dfrac{}{\lambda x{:}A.\ M \hookrightarrow \lambda x{:}A.\ M}
\qquad
\dfrac{M \hookrightarrow \lambda x{:}A.\ M' \qquad N \hookrightarrow V' \qquad [V'/x]M' \hookrightarrow V}{M\ N \hookrightarrow V}
\]
Towards Functional Programming
- Decide on *observable types*.
- Functions are not observable: allows us to compile and optimize.
- Functions are extensional: we can determine their behavior on arguments, but not their definition.
- Evaluate $M$ only if $\vdash M : A$.
- If $x_1:A_1, \ldots, x_n:A_n \vdash M : A$ then we may evaluate $[V_1/x_1, \ldots, V_n/x_n]M$.
Logical Foundations for *Staged* Computation
- *Staging* Specifications (as Propositions as Types)
- *Staged* Implementations (as Proofs as Programs)
- *Staged* Computations (as Reductions as Evaluations)
- Augmented by recursion, exceptions, effects, ...
Desirable Properties
- Local soundness and completeness.
- Evaluation preserves types.
- Conservative extension (orthogonality).
- Captures staging.
Some Design Principles
- Explicit: put the power of staging in the hands of the programmer, not the compiler.
- Static: staging errors should be type errors.
- Implementable: can achieve expected efficiency improvements.
Focus: Run-Time Code Generation
- Generate code for portions of the program at run-time to take advantage of information only available then.
- Examples: sparse matrix multiplication, regular expression matchers, ...
- Implementation via code generators or templates.
Requirements
- To “compile” at run-time we need a source expression.
- Enable optimizations, but do not force them.
- Distinguish *terms* from *source expressions*.
- The structure of (functional) terms is **not** observable: *extensional*.
- The structure of source expressions may be observable: *intensional*.
Categorical Judgments
- \( M :: A \) — \( M \) is a *source expression* of type \( A \).
- Do not duplicate constructors or types.
- Instead define: \( M \) is a source expression if it does not depend on any (extensional) terms.
\[
\vdash M :: A \quad \text{if} \quad \vdash M : A
\]
- \( A \) is *valid* (categorically true) if \( A \) has a proof which does not depend on hypotheses.
Generalized Hypothetical Judgments
- Generalize to permit hypotheses $u :: B$.
\[
\underbrace{u_1{::}B_1, \ldots, u_m{::}B_m}_{\Delta};\ \underbrace{x_1{:}A_1, \ldots, x_n{:}A_n}_{\Gamma} \vdash M : A
\]
- Meaning given by substitution
\[
\begin{aligned}
&\text{If } (\Delta, u{::}B); \Gamma \vdash N : C \\
&\text{and } \Delta; \cdot \vdash M : B \quad (\text{i.e., } \Delta \vdash M :: B) \\
&\text{then } \Delta; \Gamma \vdash [M/u]N : C
\end{aligned}
\]
- New hypothesis rule
\[
\dfrac{}{(\Delta, u{::}B); \Gamma \vdash u : B}\ \text{var}^*
\]
Reflection
• \( \square A \) — proposition expressing that \( A \) is valid.
• \( M : \square A \) — \( M \) is a term which stands for (evaluates to) a source expression of type \( A \).
• Introduction rule.
\[
\frac{\Delta; \cdot \vdash M : A}{\Delta; \Gamma \vdash \text{box}\ M : \square A}\ \square I
\]
• Premise expresses: \( A \) is valid, or equivalently, \( M \) is a source expression of type \( A \).
Elimination Rule
- Attempt:
\[
\frac{\Delta; \Gamma \vdash M : \square A}{\Delta; \Gamma \vdash \text{unbox}\ M : A}\ \square E?
\]
- Locally sound (by weakening):
\[
\dfrac{\dfrac{\Delta; \cdot \vdash M : A}{\Delta; \Gamma \vdash \text{box}\ M : \square A}\ \square I}{\Delta; \Gamma \vdash \text{unbox}\ (\text{box}\ M) : A}\ \square E?
\quad \Longrightarrow_R \quad
\Delta; \Gamma \vdash M : A
\]
- Definable later: \( \text{eval} : (\square A) \rightarrow A \).
Failure of Local Completeness
- Elimination rule is too weak.
- **Not** locally complete: \( M : \square A \;\Longrightarrow_E\; \text{box}\,(\text{unbox}\ M) \) fails.
\[
\dfrac{\dfrac{\Delta; \Gamma \vdash M : \square A}{\Delta; \Gamma \vdash \text{unbox}\ M : A}\ \square E?}{\Delta; \Gamma \vdash \text{box}\ (\text{unbox}\ M) : \square A}\ \square I?
\]
The final \( \square I \) step is illegal because \( \text{unbox}\ M \) may depend on the hypotheses in \( \Gamma \).
- Also cannot prove: \( \vdash \square (A \rightarrow B) \rightarrow \square A \rightarrow \square B \).
Elimination Rule Revisited
- Elimination rule
\[
\frac{\Delta; \Gamma \vdash M : \square A \qquad (\Delta, u{::}A); \Gamma \vdash N : C}{\Delta; \Gamma \vdash \text{let box}\ u = M\ \text{in}\ N : C}\ \square E
\]
- Locally sound
\[
\dfrac{\dfrac{\Delta; \cdot \vdash M : A}{\Delta; \Gamma \vdash \text{box}\ M : \square A}\ \square I \qquad (\Delta, u{::}A); \Gamma \vdash N : C}{\Delta; \Gamma \vdash \text{let box}\ u = \text{box}\ M\ \text{in}\ N : C}\ \square E
\quad \Longrightarrow_R \quad
\dfrac{\mathcal{E}'}{\Delta; \Gamma \vdash [M/u]N : C}
\]

- \( \mathcal{E}' \) exists by the substitution property for valid hypotheses.
Local Completeness
- Local expansion
\[
\dfrac{\mathcal{D}}{\Delta; \Gamma \vdash M : \square A}
\quad \Longrightarrow_E \quad
\dfrac{\dfrac{\mathcal{D}}{\Delta; \Gamma \vdash M : \square A} \qquad \dfrac{\dfrac{}{(\Delta, u{::}A); \cdot \vdash u : A}\ \text{var}^*}{(\Delta, u{::}A); \Gamma \vdash \text{box}\ u : \square A}\ \square I}{\Delta; \Gamma \vdash (\text{let box}\ u = M\ \text{in box}\ u) : \square A}\ \square E
\]

- On terms: \( M : \square A \;\Longrightarrow_E\; \text{let box}\ u = M\ \text{in box}\ u \)
Summary of Reductions
- Reductions as basis for operational semantics:
\[
(\lambda x{:}A.\ M)\ N \rightsquigarrow_R [N/x]M
\qquad
\text{let box}\ u = \text{box}\ M\ \text{in}\ N \rightsquigarrow_R [M/u]N
\]
- Expansions as extensionality principles:
\[
M : A \rightarrow B \rightsquigarrow_E \lambda x{:}A.\ M\ x
\qquad
M : \square A \rightsquigarrow_E \text{let box}\ u = M\ \text{in box}\ u
\]
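As a small worked instance of the **let box** reduction (the arithmetic operands are assumed purely for illustration):
\[
\text{let box}\ u = \text{box}\,(1+2)\ \text{in box}\,(u * u) \;\rightsquigarrow_R\; \text{box}\,((1+2) * (1+2))
\]
Note that the source expression \(1+2\) is duplicated, not its value: exactly the intensional behavior that distinguishes staged code from ordinary evaluation.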
Some Examples
- Application
\[
\vdash \lambda x{:}\square(A \rightarrow B).\ \lambda y{:}\square A.\ \text{let box}\ u = x\ \text{in let box}\ w = y\ \text{in box}\ (u\ w) \;:\; \square(A \rightarrow B) \rightarrow \square A \rightarrow \square B
\]
- Evaluation
\[
\vdash \lambda x{:}\square A.\ \text{let box}\ u = x\ \text{in}\ u \;:\; \square A \rightarrow A
\]
- Quotation
\[
\vdash \lambda x{:}\square A.\ \text{let box}\ u = x\ \text{in box}\ (\text{box}\ u) \;:\; \square A \rightarrow \square\square A
\]
Logical Assessment
- □ satisfies laws of intuitionistic S₄.
- Cleaner and simpler formulation through judgmental reconstruction.
- Can be extended to capture ◇.
- (An aside: model Moggi’s computational meta-language:
\[
\begin{aligned}
\square A &\quad \text{value of type } A \\
\Diamond\square A &\quad \text{computation of type } A \\
\Diamond\square A &= \bigcirc A \text{ of lax logic}\,)
\end{aligned}
\]
Operational Semantics
- Values $\lambda x : A. M$ and $\mathbf{box} M$.
- Rules
\[
\mathbf{box}\, M \rightarrow \mathbf{box}\, M
\qquad
\frac{M \rightarrow \mathbf{box}\, M' \qquad [M'/u]N \rightarrow V}{(\mathbf{let\ box}\ u = M\ \mathbf{in}\ N) \rightarrow V}
\]
- $\mathbf{box} M$ may or may not be observable since $M$ is guaranteed to be a source expression even if functions are compiled.
- Fully compatible with recursion, effects.
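To make these rules concrete, here is a minimal big-step evaluator for just this fragment, written as a Python sketch; the AST constructors, the `subst` helper, and the `evaluate` driver are our own illustrative assumptions, not part of PML.

```python
# Minimal big-step evaluator for the box fragment (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Var:
    name: str
@dataclass
class Lam:
    var: str; body: object                 # fn x => M
@dataclass
class App:
    fn: object; arg: object
@dataclass
class Box:
    body: object                           # box M (M stays unevaluated)
@dataclass
class LetBox:
    var: str; bound: object; body: object  # let box u = M in N

def subst(e, name, val):
    """[val/name]e; assumes all bound variables are distinct,
    so no capture-avoiding renaming is performed."""
    if isinstance(e, Var):
        return val if e.name == name else e
    if isinstance(e, Lam):
        return e if e.var == name else Lam(e.var, subst(e.body, name, val))
    if isinstance(e, App):
        return App(subst(e.fn, name, val), subst(e.arg, name, val))
    if isinstance(e, Box):
        # expression variables u may occur under box, so substitute inside
        return Box(subst(e.body, name, val))
    if isinstance(e, LetBox):
        bound = subst(e.bound, name, val)
        body = e.body if e.var == name else subst(e.body, name, val)
        return LetBox(e.var, bound, body)

def evaluate(e):
    if isinstance(e, (Lam, Box)):          # values: lambdas and box M
        return e
    if isinstance(e, App):                 # call-by-value application
        f = evaluate(e.fn)
        return evaluate(subst(f.body, f.var, evaluate(e.arg)))
    if isinstance(e, LetBox):              # M -> box M'; [M'/u]N -> V
        b = evaluate(e.bound)
        return evaluate(subst(e.body, e.var, b.body))
    raise ValueError("cannot evaluate an open term")

# eval : box A -> A, written as  fn x => let box u = x in u
ev = Lam("x", LetBox("u", Var("x"), Var("u")))
print(evaluate(App(ev, Box(Lam("y", Var("y"))))))  # Lam(var='y', body=Var(name='y'))
```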
Desirable Properties Revisited
- Local soundness and completeness: yes.
- Evaluation preserves types: yes.
- Conservative extension (orthogonality): yes.
- Captures staging: yes, captures intensional expressions reflectively.
- Enables, but does not force, optimizations.
Observable Intensional Types
- Source expressions must be manipulated explicitly during computation.
- Source expressions are evaluated in contexts
\[
\text{let box } \boxed{u} = M \text{ in } \ldots u \ldots
\]
where \( u \) is not inside a \textbf{box} constructor.
- Source expression could be interpreted, or compiled and then executed.
- A \textbf{case} construct for source expressions(!) which does not violate \( \alpha \)-conversion can be added safely.
[Despeyroux, Schürmann, Pf. TLCA’97] [Schürmann & Pf. CADE’98] [Pitts & Gabbay ’00]
Some Applications
- Type-safe macros
- Meta-programming
- Symbolic computation
- (An aside: Mathematica does not distinguish \( \text{box}(2^{2^{2^{2^2}}} - 1) \) and \( 2^{2^{2^{2^2}}} - 1 \), but should!)
Non-Observable Intensional Types
- Obtain a pure system of run-time code generation.
- We may compile \textbf{box} \( M \) to a code generator.
- This generator is a function of its free expression variables \( u_j \) (value variables \( x_i \) cannot occur free in \( M \)!).
- Implemented in the PML compiler (in progress).
The PML Language
- [Wickline, Lee, Pfenning PLDI’98] (in progress)
- Core ML (recursion, data types, mutable references) extended by types $\Box A$ (written $[A]$).
- Lift for observable types (similar to equality types).
- Staging errors are type errors (but ...).
- Memoization must be programmed explicitly.
Structure of the Compiler
- Standard parsing, type-checking.
- “Split” (2-environment) closure conversion.
- Standard ML-RISC code generator for unstaged code.
- Lightweight run-time code generation (Fabius [Lee & Leone'96]).
Closed Code Generators
- Compiling $\text{box } M$ where $M$ is closed.
- Compile $M$ obtaining binary $B$ (using ML-RISC).
- Write code $C$ to generate $B$.
- Generate binary for $\text{box } M$ from $C$ (using ML-RISC).
- Backpatching for forward jumps and branches at code generation time (run-time system).
Open Code Generators
- Compiling `let box u = N in ... box M ...`
- At run-time, $u$ will be bound to a code generator.
- The generator for $M$ will call the generator $u$.
- Planned: pass register information (right now: standard calling convention).
- Planned: type-based optimization at interface (Fabius).
Nested Code Generators
- Special treatment for nested code generators to avoid code explosion.
- Conceptually:
\[
\text{box } M \mapsto \lambda x : \text{unit}. M
\]
\[
\text{let box } u = M \text{ in } N \mapsto \text{let val } x = M \text{ in } [x() / u]N
\]
- Observationally equivalent, but prohibits any optimizations.
Invoking Generated Code
- Compiling `let box u = N in ... u ...`, where `u` is not “boxed”.
- Call code generator for `u`.
- Jump to generated code.
Example: Regular Expression Matcher
datatype regexp
  = Empty                      (* e, the empty string *)
  | Plus of regexp * regexp    (* r1 + r2, union *)
  | Times of regexp * regexp   (* r1 r2, concatenation *)
  | Star of regexp             (* r*, iteration *)
  | Const of string            (* a letter *)

(* auxiliary function
   val acc : regexp -> (string list -> bool)
                    -> (string list -> bool)
   acc r k s evaluates to true iff s = s1 @ s2 where s1 is in L(r)
   and k s2 evaluates to true, for some s1 and s2 *)
fun accept r s = acc r List.null s
Unstaged Implementation
fun acc (Empty) k s = k s
| acc (Plus(r1,r2)) k s =
acc r1 k s orelse acc r2 k s
| acc (Times(r1,r2)) k s =
acc r1 (fn ss => acc r2 k ss) s
| acc (Star(r)) k s =
k s orelse
acc r (fn ss => if s = ss then false
else acc (Star(r)) k ss) s
| acc (Const(str)) k (x::s) =
(x = str) andalso k s
| acc (Const(str)) k (nil) = false
Staged Version, Part I
(* val acc : regexp ->
[(string list -> bool) -> (string list -> bool)] *)
fun acc (Empty) = box (fn k => fn s => k s)
... | acc (Times(r1,r2)) =
let box a1 = acc r1
box a2 = acc r2
in
box (fn k => fn s => a1 (fn ss => a2 k ss) s)
end
  | acc (Star(r1)) =
    let box a1 = acc r1
        box rec aStar =
            box (fn k => fn s =>
                 k s orelse
                 a1 (fn ss => if s = ss then false
                              else aStar k ss) s)
    in
        box (fn k => fn s => aStar k s)
    end
Staged Version, Part II
  | acc (Const(c)) =
    let box c' = lift c    (* c : string *)
    in
        box (fn k => (fn (x::s) => (x = c') andalso k s
                       | nil => false))
    end

(* val accept3 : regexp -> (string list -> bool) *)
fun accept3 r =
    let box a = acc r
    in
        a List.null
    end
Example
Times (Const "a", Empty) =>
let box a1 =
box (fn k => (fn (x::s) => (x = "a") andalso k s |
nil => false))
box a2 = box (fn k => fn s => k s)
in
box (fn k => fn s => a1 (fn ss => a2 k ss) s)
end =>
box (fn k => fn s =>
(fn k => (fn (x::s) => (x = "a") andalso k s |
nil => false))
(fn ss => (fn k => fn s => k s) k ss) s)
A Sample Optimization
Substitute variable for variable, functional value for linear variable.
box (fn k => fn s =>
     (fn k => (fn (x::s) => (x = "a") andalso k s
                | nil => false))
     (fn ss => (fn k => fn s => k s) k ss) s)
=>
box (fn k => fn s =>
     (fn (x::s') => (x = "a") andalso
                    (fn ss => (fn k => fn s => k s) k ss) s'
       | nil => false) s)
=>
box (fn k => fn s =>
     (fn (x::s') => (x = "a") andalso k s'
       | nil => false) s)
Run-Time Code Generation Summary
- Logical reconstruction yields clean and simple type system for run-time code generation.
- Application of Curry-Howard isomorphism to intuitionistic $S_4$.
- Distinguish expressions from terms (valid from true propositions).
- Enables optimizations without prescribing them.
- (Partially) implemented in the PML compiler.
Some Issues
- Lift for functions? Top-level? Modules?
- Memoization? Garbage collection for generated code?
- Some inference?
- Empirical study (cf. Fabius).
Implicit Syntax
- Derived (logically) from Kripke semantics of $S_4$.
- Similar to quasi-quote in Lisp-like languages.
- Operational semantics defined by translation.
```sml
fun acc (Empty) = '(fn k => fn s => k s)
| acc (Times(r1,r2)) = '(fn k => fn s => ^(acc r1) (fn ss => ^(acc r2) k ss) s)
| acc (Star(r1)) = '(fn k => fn s =>
k s orelse
^(acc r1) (fn ss => if s = ss then false
else ^(acc (Star(r1))) k ss) s)
...
```
- Note the bug: the splice `^(acc (Star(r1)))` calls the generator recursively at code-generation time, so generation of the Star case does not terminate; the explicit version avoids this with box rec.
Relation to Two-Level Languages
- Conservative extension of Nielson & Nielson [book version].
- Evident from implicit syntax.
- Allows arbitrary stages [Glück & Jørgensen PLILP'95].
- Two-level languages are one-level languages with modal types.
Relation to Partial Evaluation
- Partial evaluation *prescribes* optimization.
- Computation proceeds in discrete transformation steps.
- No analogue of eval : $\Box A \rightarrow A$.
- Logical foundations through intuitionistic linear time temporal logic. [Davies LICS’96]
- Combination subject to current research [Moggi, Taha, Benaissa, Sheard ESOP’99] [Davies & Pf.]
- Soundness problems in the presence of effects.
Conclusion
- Cleaner, simpler systems through judgmental analysis and logical foundation.
- Two-level languages are one-level languages with modal types.
- Put the power of the staged computation into the hands of the programmer, not the compiler!
- Staging errors should be type errors.
Adaptive Huffman coding
If we want to code a sequence from an unknown source using Huffman coding, we need to know the probabilities of the different symbols.
Most straightforward is to make two passes over the sequence. First we calculate statistics of the different symbols and then we use the estimated probabilities to code the source.
Instead we would like to do everything in one pass. In addition we would like to have a method that automatically adapts if the statistics of the source change.
Adaptive Huffman coding, cont.
Simple method:
1. Start with a maximally “flat” code tree.
2. Code $N$ symbols from the source and at the same time gather statistics, ie count how many times each symbol appears. Build a new Huffman tree with the new estimated probabilities.
3. Repeat from 2.
No side information about the tree structure needs to be transmitted, since the decoder has access to the same data as the coder.
The smaller $N$ is chosen to be, the faster the coder adapts to a change in source statistics. On the other hand we have to construct a new Huffman tree more often which takes time.
Adaptive Huffman coding, cont.
Smarter method: Adjust the code tree after each coded symbol. We need to keep track of some extra information in each node of the tree.
A binary tree with $L$ leaves has $2L - 1$ nodes.
Give each node a number between 1 and $2L - 1$.
Each node has a weight. For a leaf (outer node) the weight is the number of times the corresponding symbol has appeared (cf. probability). For an inner node the weight is the sum of the weights of its children.
If node $j$ has the weight $w_j$ we need
$$w_1 \leq w_2 \leq \ldots \leq w_{2L-1}$$
Nodes with number $2j - 1$ and $2j$ should have the same parent and the parent should have a higher number than its children.
Trees with these properties are Huffman trees.
Adaptive Huffman coding, cont.
Start with a maximally flat code tree (corresponding to a fixed-length code if \( L = 2^k \)). The weight in each leaf is set to 1, and the weight in each inner node is set to the sum of its children’s weights. Enumerate the nodes so that the requirements are met.
For each symbol to be coded (a runnable sketch follows the list):
1. Send the codeword corresponding to the symbol.
2. Go to the symbol’s corresponding node.
3. Consider all nodes with the same weight as the current node. If the current node is not the node with highest number we switch places (ie move weight, pointers to children and possible symbol) between the current node and the node with highest number.
4. Increase the weight of the current node by 1.
5. If we are in the root node we are done, otherwise move to the parent of the current node and repeat from 3.
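Below is a Python sketch of this update procedure (the basic variant, without NYT). All names (`Node`, `flat_tree`, `update`, `swap`, `codeword`) are our own illustrative choices, and the alphabet size is assumed to be a power of two so that the start tree can be maximally flat.

```python
class Node:
    def __init__(self, number, weight, symbol=None):
        self.number, self.weight, self.symbol = number, weight, symbol
        self.parent = self.left = self.right = None

def flat_tree(symbols):
    """Maximally flat start tree for 2^k symbols; nodes are numbered
    so that weights are non-decreasing and siblings are adjacent."""
    nodes = [Node(i + 1, 1, s) for i, s in enumerate(symbols)]
    level, nxt = nodes[:], len(nodes) + 1
    while len(level) > 1:
        parents = []
        for a, b in zip(level[0::2], level[1::2]):
            p = Node(nxt, a.weight + b.weight)
            nxt += 1
            p.left, p.right, a.parent, b.parent = a, b, p, p
            nodes.append(p)
            parents.append(p)
        level = parents
    return {n.number: n for n in nodes}, level[0]   # index by number, root

def swap(nodes, a, b):
    """Exchange positions (numbers and parents) of two equal-weight nodes."""
    pa, pb = a.parent, b.parent
    if pa.left is a: pa.left = b
    else:            pa.right = b
    if pb.left is b: pb.left = a
    else:            pb.right = a
    a.parent, b.parent = pb, pa
    a.number, b.number = b.number, a.number
    nodes[a.number], nodes[b.number] = a, b

def update(nodes, leaf, root):
    """Steps 2-5 above: walk towards the root, swapping each node with
    the highest-numbered node of equal weight before incrementing."""
    cur = leaf
    while True:
        num = cur.number      # weights are non-decreasing in node number,
        while num + 1 <= root.number and nodes[num + 1].weight == cur.weight:
            num += 1          # so scan upward for the highest equal-weight node
        leader = nodes[num]
        if leader is not cur and leader is not cur.parent:
            swap(nodes, cur, leader)
        cur.weight += 1
        if cur is root:
            return
        cur = cur.parent

def codeword(leaf, root):
    bits, cur = [], leaf
    while cur is not root:
        bits.append("0" if cur.parent.left is cur else "1")
        cur = cur.parent
    return "".join(reversed(bits))

nodes, root = flat_tree("abcd")
leaves = {n.symbol: n for n in nodes.values() if n.symbol is not None}
for ch in "aabaa":
    print(ch, codeword(leaves[ch], root))   # send the codeword, then update
    update(nodes, leaves[ch], root)
```

The decoder maintains an identical tree and performs the same update after each decoded symbol, so no side information is needed.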
Adaptive Huffman coding, cont.
Since the update of the tree is done after coding a symbol, the decoder can do the same update of the code after decoding a symbol. No side information about the tree needs to be transmitted.
One variant is to not start with a full tree. Instead we introduce an extra symbol (NYT, Not Yet Transmitted) and start with a “tree” that only contains that symbol, with weight 0 and number \(2L - 1\).
When you code a symbol that hasn’t been seen before it is coded with the codeword for NYT, followed by a fixed-length codeword for the new symbol. The old NYT node is then split into two branches, one for the NYT symbol and one for the new symbol. The new NYT node keeps the weight 0, the new symbol node gets the weight 1. If the new symbol is the last symbol not yet coded in the alphabet we don’t need to split, we can just replace NYT with the new symbol.
**Modified algorithm**
1. If the symbol hasn’t been coded before, transmit the codeword for NYT followed by a fixed-length codeword for the new symbol, otherwise transmit the codeword corresponding to the symbol.
2. If we coded a NYT split the NYT node into two new leaves, one for NYT with weight 0 and one for the new symbol with weight 1. The node numbers for the new nodes should be the two largest unused numbers. If it was the last not yet coded symbol we don’t have to split, just replace NYT with the new symbol.
3. Go to the symbol’s corresponding node (the old NYT node if we split).
4. Consider all nodes with the same weight as the current node, except its parent. If the current node is not the node with highest number we switch places (ie move weight, pointers to children and possible symbol) between the current node and the node with highest number.
5. Increase the weight of the current node by 1.
6. If we are in the root node we are done, otherwise move to the parent of the current node and repeat from 4.
Forgetting factor
If we want the coding to depend more on recent symbols than on older symbols, we can use a forgetting factor.
When the weight of the root node is larger than \( N \) we divide the weights of all nodes by \( K \).
If we want to keep the weights as integers we have to divide the weights of all leaf nodes by \( K \) (round up) and then add up the weights from the children to the parents, all the way to the root node.
Because of the rounding, the rescaled tree might no longer be a proper Huffman tree. To solve this, the code tree has to be rebuilt after the rescaling.
Depending on how we choose \( N \) and \( K \) we can adjust the speed of adaptation. Large \( K \) and small \( N \) give fast adaptation and vice versa.
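A minimal sketch of the rescaling on raw symbol counts, assuming the code tree is rebuilt from the counts afterwards (names illustrative):

```python
import math

def rescale_counts(counts, K):
    """Divide all symbol counts by K, rounding up. Because of the
    rounding, the Huffman tree must be rebuilt from the new counts."""
    return {sym: math.ceil(w / K) for sym, w in counts.items()}
```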
Run-length coding
Sometimes we have sources that produce long stretches of the same symbol. It can then be practical to view the sequence as consisting of runs instead of symbols. A run is a tuple describing which symbol is in the run and how long the run is.
For example, the sequence
\[
\text{aaaabbbbbbbccbbbbaaaa}
\]
can be described as
\[
(a, 4)(b, 7)(c, 2)(b, 4)(a, 4)
\]
Basically we have switched to another alphabet than the original one. The gain is that it might be easier to find a good code for the new alphabet, and that it’s easier to take advantage of the memory of the source.
Note that if the original alphabet is binary, we only have to send the symbol for the first run.
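As an aside, this transformation is a one-liner with Python’s itertools (the `runs` name is our own):

```python
from itertools import groupby

def runs(seq):
    """View a sequence as (symbol, run length) tuples."""
    return [(sym, len(list(group))) for sym, group in groupby(seq)]

print(runs("aaaabbbbbbbccbbbbaaaa"))
# [('a', 4), ('b', 7), ('c', 2), ('b', 4), ('a', 4)]
```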
Fax coding
Fax coding is a typical example when run-length coding is used.
There are two digital fax standards: Group 3 (T.4) and Group 4 (T.6).
A fax machine scans a page one line at a time (1728 pixels per line for A4 paper). Each pixel is either white or black. Typically we get runs of alternating white and black.
A line can either be coded separately or with the help of the previous line.
Fax coding, cont.
When coding a line separately, run-length coding is used. Since the number of possible run-lengths is big it is impractical to have a Huffman code over all run-lengths. Instead, a run-length \( r \) is described as
\[
r = 64 \cdot m + t, \quad t = 0, \ldots, 63 \text{ and } m = 1, \ldots, 27
\]
An extra symbol to code the end of line (EOL) is also introduced, ie to be used when the rest of the line has the same colour.
The first run of each line is assumed to be white.
The alphabet with different \( m \), \( t \) and EOL is coded using static tree codes, one each for white and black runs.
This type of coding is called MH (*modified Huffman*).
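A small sketch of the run-length split used by MH (the `mh_split` name is our own):

```python
def mh_split(r):
    """Split run length r as r = 64*m + t; runs shorter than 64
    need only a terminating code (m = 0, no makeup code)."""
    m, t = divmod(r, 64)
    return m, t

print(mh_split(1728))   # (27, 0): a completely white A4 line
```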
Two consecutive lines are probably very similar. This can be exploited in the coding.
A few definitions:
- $a_0$: The last pixel on a line that is known by both sender and receiver, i.e., current position on the line. When coding starts, this is an imaginary white pixel to the left of the first pixel of the line.
- $a_1$: The first pixel to the right of $a_0$ with the opposite colour. Known only by the sender.
- $a_2$: The first pixel to the right of $a_1$ with the opposite colour. Known only by the sender.
- $b_1$: The first pixel to the right of $a_0$ on the previous line with the opposite colour. Known by both sender and receiver.
- $b_2$: The first pixel to the right of $b_1$ that has the opposite colour. Known by both sender and receiver.
When coding, there are three cases.
1. If both $b_1$ and $b_2$ are between $a_0$ and $a_1$ the codeword 0001 is transmitted. All pixels up to the location under $b_2$ have the same colour. This point will become our new $a_0$. New $b_1$ and $b_2$ are found.
2. $a_1$ is located before $b_2$ and the distance between $b_1$ and $a_1$ is no more than 3. The distance $a_1 - b_1$, $\{-3, -2, -1, 0, 1, 2, 3\}$ is coded using the codewords \{0000010, 000010, 010, 1, 011, 000011, 00000011\}. $a_1$ becomes the new $a_0$.
3. In all other cases 001 is transmitted and the run-lengths from $a_0$ to $a_1$ and from $a_1$ to $a_2$ are coded using the MH code.
Fax coding, cont.
In group 3 both methods are used. At regular intervals a line is coded using pure one-dimensional MH coding, so that any transmission errors will not propagate over the whole image. This coding method is called MR (*modified READ*).
In group 4 only the two-dimensional method is used. This is called MMR (*modified MR*).
Monotonically decreasing distributions
When coding waveform data, such as sound or images, we often have distributions where the alphabet consists of integers $\mathcal{A} = \{0, 1, 2, 3, \ldots\}$ (or $\mathcal{A} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$) and where the probabilities are monotonically decreasing with increasing (absolute) values.
Instead of counting statistics and constructing tree codes we can then often use codes where the codewords can easily be found directly from the symbols and where small values have short codewords and large values have long codewords.
The unary code (the Umbra code)
The codeword for a non-negative integer \( n \) consists of \( n \) ones followed by a zero.
<table>
<thead>
<tr>
<th>Symbol</th>
<th>codeword</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>110</td>
</tr>
<tr>
<td>3</td>
<td>1110</td>
</tr>
<tr>
<td>4</td>
<td>11110</td>
</tr>
<tr>
<td>\vdots</td>
<td>\vdots</td>
</tr>
</tbody>
</table>
The unary code achieves the entropy bound for the dyadic distribution
\[ p(i) = 2^{-(i+1)} \]
In some applications long sequences of ones are not desirable. Then you can use the reverse definition, where the codeword is \( n \) zeros followed by a one.
Golomb codes
\[ \mathcal{A} = \{0, 1, 2, \ldots\} \]
Choose the parameter \( m \). In practice, \( m \) is usually chosen to be an integer power of two, but it can be any positive integer. Golomb codes where \( m \) is an integer power of two are sometimes referred to as Rice codes.
Represent the integer \( n \) with \( q = \left\lfloor \frac{n}{m} \right\rfloor \) and \( r = n - qm \).
Code \( q \) with a unary code.
If \( m \) is an integer power of two, code \( r \) binary with \( \log m \) bits.
If \( m \) is not an integer power of two:
\[ 0 \leq r < 2^{\lceil \log m \rceil} - m : \quad \text{code } r \text{ binary with } \lceil \log m \rceil - 1 \text{ bits} \]
\[ 2^{\lceil \log m \rceil} - m \leq r \leq m - 1 : \quad \text{code } r + 2^{\lceil \log m \rceil} - m \text{ binary with } \lceil \log m \rceil \text{ bits} \]
(This type of code is called a truncated binary code.)
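A direct Python transcription of this definition (the `golomb` name is our own; a power-of-two m gives a Rice code):

```python
from math import ceil, log2

def golomb(n, m):
    """Golomb codeword for n >= 0 with parameter m."""
    q, r = divmod(n, m)
    prefix = "1" * q + "0"                 # unary code for q
    if m == 1:
        return prefix                      # no remainder bits needed
    b = ceil(log2(m))
    if m & (m - 1) == 0:                   # m a power of two: plain binary
        return prefix + format(r, f"0{b}b")
    cutoff = (1 << b) - m                  # truncated binary code for r
    if r < cutoff:
        return prefix + format(r, f"0{b - 1}b")
    return prefix + format(r + cutoff, f"0{b}b")

for n in range(7):                         # reproduces the table below
    print(n, golomb(n, 1), golomb(n, 2), golomb(n, 3), golomb(n, 4))
```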
Examples of Golomb codes
<table>
<thead>
<tr>
<th>Symbol</th>
<th>$m = 1$</th>
<th>$m = 2$</th>
<th>$m = 3$</th>
<th>$m = 4$</th>
</tr>
</thead>
<tbody>
<tr><td>0</td><td>0</td><td>00</td><td>00</td><td>000</td></tr>
<tr><td>1</td><td>10</td><td>01</td><td>010</td><td>001</td></tr>
<tr><td>2</td><td>110</td><td>100</td><td>011</td><td>010</td></tr>
<tr><td>3</td><td>1110</td><td>101</td><td>100</td><td>011</td></tr>
<tr><td>4</td><td>11110</td><td>1100</td><td>1010</td><td>1000</td></tr>
<tr><td>5</td><td>111110</td><td>1101</td><td>1011</td><td>1001</td></tr>
<tr><td>6</td><td>1111110</td><td>11100</td><td>1100</td><td>1010</td></tr>
<tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr>
</tbody>
</table>
Golomb codes are optimal for distributions of the type
$$p(i) = q^i \cdot (1 - q) ; \quad 0 < q < 1$$
if we choose $m = \lceil -\frac{1}{\log q} \rceil$
Golomb codes are for instance used in the image coding standard JPEG-LS and in the video coding standard H.264.
Exp-Golomb codes
\[ A = \{0, 1, 2, \ldots\} \]
Choose the parameter \( m = 2^k \), \( k \) non-negative integer.
Calculate \( s = \lfloor \log_2(n + m) \rfloor \).
Code \( s - k \) with a unary code.
Code \( n - 2^s + m \) binary with \( s \) bits.
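A direct Python transcription (the `exp_golomb` name is our own; the space only separates prefix and suffix, as in the table below):

```python
def exp_golomb(n, k):
    """Exp-Golomb codeword for n >= 0 with parameter m = 2^k."""
    m = 1 << k
    s = (n + m).bit_length() - 1           # s = floor(log2(n + m))
    prefix = "1" * (s - k) + "0"           # unary code for s - k
    suffix = format(n - (1 << s) + m, f"0{s}b") if s else ""
    return prefix + " " + suffix if suffix else prefix

print(exp_golomb(3, 0))   # 110 00
print(exp_golomb(4, 2))   # 10 000
```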
Examples of Exp-Golomb codes
<table>
<thead>
<tr>
<th>Symbol</th>
<th>$k = 0$</th>
<th>$k = 1$</th>
<th>$k = 2$</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>0 0</td>
<td>0 00</td>
</tr>
<tr>
<td>1</td>
<td>10 0</td>
<td>0 1</td>
<td>0 01</td>
</tr>
<tr>
<td>2</td>
<td>10 1</td>
<td>10 00</td>
<td>0 10</td>
</tr>
<tr>
<td>3</td>
<td>110 00</td>
<td>10 01</td>
<td>0 11</td>
</tr>
<tr>
<td>4</td>
<td>110 01</td>
<td>10 10</td>
<td>10 000</td>
</tr>
<tr>
<td>5</td>
<td>110 10</td>
<td>10 11</td>
<td>10 001</td>
</tr>
<tr>
<td>6</td>
<td>110 11</td>
<td>110 000</td>
<td>10 010</td>
</tr>
<tr>
<td>7</td>
<td>1110 000</td>
<td>110 001</td>
<td>10 011</td>
</tr>
<tr>
<td>8</td>
<td>1110 001</td>
<td>110 010</td>
<td>10 100</td>
</tr>
<tr>
<td>9</td>
<td>1110 010</td>
<td>110 011</td>
<td>10 101</td>
</tr>
<tr>
<td>10</td>
<td>1110 011</td>
<td>110 100</td>
<td>10 110</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Exp-Golomb codes are for instance used in H.264.
Test image Goldhill
512 × 512 pixels, 8 bits/pixel
Simple Huffman coding
Histogram for Goldhill:
Huffman coding gives an average data rate of 7.50 bits/pixel. The longest codeword is 16 bits, the shortest codeword is 7 bits. We haven't used any of the dependence between pixels.
Simple prediction
Instead of coding the pixels directly, we code the difference in pixel value between a pixel and the pixel above it. Imaginary pixels outside of the image are assumed to be medium gray, i.e. have the value 128. The smallest difference is -112, the largest difference is 107.
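A minimal sketch of this vertical prediction, with the image as a list of rows (names are illustrative):

```python
def predict_diffs(img, default=128):
    """Difference between each pixel and the pixel above it; imaginary
    pixels above the first row are assumed to be medium gray (128)."""
    return [[img[i][j] - (img[i - 1][j] if i > 0 else default)
             for j in range(len(img[0]))]
            for i in range(len(img))]
```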
Huffman coding of differences
Huffman coding of the differences gives an average data rate of 5.34 bits/pixel.
The longest codeword is 18 bits, the shortest codeword is 4 bits.
Golomb coding
We must first modify the values so that we only have non-negative values. This can for instance be done using the mapping
\[ F(x) = \begin{cases}
2x & ; x \geq 0 \\
-2x - 1 & ; x < 0
\end{cases} \]
ie the negative numbers are mapped to odd positive numbers and the positive numbers are mapped to even positive numbers.
\[ F^{-1}(x) = \begin{cases}
\frac{x}{2} & ; x \text{ even} \\
-\frac{x+1}{2} & ; x \text{ odd}
\end{cases} \]
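In Python, the mapping and its inverse (names illustrative):

```python
def to_nonneg(x):
    """Map signed values to non-negative integers:
    0, -1, 1, -2, 2, ...  ->  0, 1, 2, 3, 4, ..."""
    return 2 * x if x >= 0 else -2 * x - 1

def from_nonneg(x):
    return x // 2 if x % 2 == 0 else -(x + 1) // 2
```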
The best Golomb code is the one with parameter $m = 10$, which gives an average data rate of 5.37 bits/pixel.
Golomb coding II
Alternatively we can code the absolute value of the differences with a Golomb code and then send an extra sign bit for each non-zero value. Histogram for the absolute value of differences
The best Golomb code is the one with parameter $m = 5$, which gives an average data rate of 5.40 bits/pixel.
Lossless JPEG
JPEG is normally an image coding method that gives distortion, but there is also a lossless mode in the standard.
The pixels are coded row-wise from the top down.
The pixel $I_{ij}$ on position $(i,j)$ is predicted from neighbouring pixels. There are 7 predictors to choose from:
1. $\hat{I}_{ij} = I_{i-1,j}$
2. $\hat{I}_{ij} = I_{i,j-1}$
3. $\hat{I}_{ij} = I_{i-1,j-1}$
4. $\hat{I}_{ij} = I_{i,j-1} + I_{i-1,j} - I_{i-1,j-1}$
5. $\hat{I}_{ij} = I_{i,j-1} + \lfloor (I_{i-1,j} - I_{i-1,j-1})/2 \rfloor$
6. $\hat{I}_{ij} = I_{i-1,j} + \lfloor (I_{i,j-1} - I_{i-1,j-1})/2 \rfloor$
7. $\hat{I}_{ij} = \lfloor (I_{i,j-1} + I_{i-1,j})/2 \rfloor$
The difference $d_{ij} = I_{ij} - \hat{I}_{ij}$ is coded either by an adaptive arithmetic coder or using a Huffman code. Huffman coding is not performed directly on the differences. Instead, categories
$$k_{ij} = \lceil \log(|d_{ij}| + 1) \rceil$$
are formed. Statistics for the categories are calculated and a Huffman tree is constructed. The codeword for a difference $d_{ij}$ consists of the Huffman codeword for $k_{ij}$ plus $k_{ij}$ extra bits used to exactly specify $d_{ij}$.
<table>
<thead>
<tr>
<th>$k_{ij}$</th>
<th>$d_{ij}$</th>
<th>extra bits</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>–</td>
</tr>
<tr>
<td>1</td>
<td>–1, 1</td>
<td>0, 1</td>
</tr>
<tr>
<td>2</td>
<td>–3, –2, 2, 3</td>
<td>00, 01, 10, 11</td>
</tr>
<tr>
<td>3</td>
<td>–7, ..., –4, 4, ..., 7</td>
<td>000, ..., 011, 100, ..., 111</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
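A sketch of the category and extra-bits computation from the table above; the offset for negative differences (d + 2^k - 1) is our reading of the table rows, not a quote from the standard:

```python
def category(d):
    """k = ceil(log2(|d| + 1)), conveniently |d|.bit_length() in Python."""
    return abs(d).bit_length()

def extra_bits(d, k):
    """The k extra bits: d itself if positive, d + 2^k - 1 if negative."""
    if k == 0:
        return ""
    v = d if d > 0 else d + (1 << k) - 1
    return format(v, f"0{k}b")

d = -3
print(category(d), extra_bits(d, category(d)))   # 2 00
```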
Coding Goldhill using lossless JPEG:
<table>
<thead>
<tr>
<th>Predictor</th>
<th>Bits/pixel</th>
</tr>
</thead>
<tbody>
<tr>
<td>Predictor 1</td>
<td>5.39</td>
</tr>
<tr>
<td>Predictor 2</td>
<td>5.42</td>
</tr>
<tr>
<td>Predictor 3</td>
<td>5.80</td>
</tr>
<tr>
<td>Predictor 4</td>
<td>5.27</td>
</tr>
<tr>
<td>Predictor 5</td>
<td>5.16</td>
</tr>
<tr>
<td>Predictor 6</td>
<td>5.15</td>
</tr>
<tr>
<td>Predictor 7</td>
<td>5.13</td>
</tr>
</tbody>
</table>
For different images different predictors will work best. The standard supports coding different parts of an image with different predictors.
JPEG-LS
Standard for lossless and near lossless coding of images. Near lossless means that we allow the pixel values of the decoded image to be a little different from the original pixels.
The pixels are coded row-wise from the top down.
When pixel \((i, j)\) is to be coded you first look at the surrounding pixels in position \((i, j - 1)\), \((i - 1, j - 1)\), \((i - 1, j)\) and \((i - 1, j + 1)\). A context is formed by first calculating the gradients
\[
\begin{align*}
D_1 &= I_{i-1,j+1} - I_{i-1,j} \\
D_2 &= I_{i-1,j} - I_{i-1,j-1} \\
D_3 &= I_{i-1,j-1} - I_{i,j-1}
\end{align*}
\]
The gradients $D_k$ are quantized to three integers $Q_k$ such that $-4 \leq Q_k \leq 4$. The quantizer bounds can be chosen by the coder. Each $Q_k$ takes 9 possible values, which means that we have 729 possible combinations. A pair of combinations with inverted signs counts as the same context which finally gives us 365 different contexts.
A median edge detector (MED) prediction of $I_{ij}$ is done according to:
If $I_{i-1,j-1} \geq \max(I_{i,j-1}, I_{i-1,j}) \Rightarrow \hat{I}_{ij} = \min(I_{i,j-1}, I_{i-1,j})$
If $I_{i-1,j-1} \leq \min(I_{i,j-1}, I_{i-1,j}) \Rightarrow \hat{I}_{ij} = \max(I_{i,j-1}, I_{i-1,j})$
Otherwise: $\hat{I}_{ij} = I_{i,j-1} + I_{i-1,j} - I_{i-1,j-1}$
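A direct transcription of the MED predictor (function name our own):

```python
def med_predict(a, b, c):
    """Median edge detector: a = left neighbour, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```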
For each context $q$ we keep track of whether the prediction has a systematic error; if that is the case, the prediction is adjusted a little in the correct direction.
Prediction error
MED prediction (left) compared to simple plane prediction (right).
The difference between the real pixel value and the predicted value \( d_{ij} = I_{ij} - \hat{I}_{ij} \) is coded using a Golomb code with parameter \( m = 2^{k_q} \). For each context \( q \) we keep track of the best Golomb code, and each \( k_q \) is constantly adjusted during the coding process.
The coder also detects if we get long runs of the same value on a row. In that case the coder switches to coding run-lengths instead.
If we code Goldhill using JPEG-LS we get an average data rate of 4.71 bits/pixel.
FLAC (Free Lossless Audio Codec)
Lossless coding of audio signals.
The audio signal is split into blocks (typically a couple of thousand samples each).
Code the sum/difference of the two stereo channels if this gives a higher compression.
Linear predictors are optimized for each block. There is also a possibility to use fixed predictors (compare to lossless JPEG).
The prediction error is coded using Rice codes.
The resulting compression ratio is around 2, of course depending a lot on the type of audio coded.
https://xiph.org/flac/
Designing, developing, and implementing software ecosystems
towards a step-wise guide
Manikas, Konstantinos; Hämäläinen, Mervi; Tyrväinen, Pasi
Published in: Proceedings of the 8th Workshop on Software Ecosystems, 2017.
Designing, Developing, and Implementing Software Ecosystems: Towards a Step-wise Guide.
Konstantinos Manikas\textsuperscript{1,3}, Mervi Hämäläinen\textsuperscript{2}, and Pasi Tyrväinen\textsuperscript{2}
\textsuperscript{1} Department of Computer Science
University of Copenhagen, Denmark
k@manikas.dk
\textsuperscript{2} Agora Center
University of Jyväskylä, Finland
[Mervi.A.Hamalainen,Pasi.Tyrvainen]@jyu.fi
\textsuperscript{3} DHI Group
Hørsholm, Denmark
Abstract. The notion of software ecosystems has been popular both in research and industry for more than a decade, but how software ecosystems are created still remains unclear. This becomes more of a challenge if one examines the “creation” of ecosystems that have high probability in surviving in the future, i.e. with respect to ecosystem health.
In this paper, we focus on the creation of software ecosystems and propose a process for designing, developing, and establishing software ecosystems based on three basic steps and a set of activities for each step. We note that software ecosystem research identifies that ecosystems typically emerge from either a company deciding to allow development on their product platform or from a successful open source project. In our study we add to this knowledge by demonstrating, through two case studies, that ecosystems can emerge from more than a technological infrastructure (platform). We identify that ecosystems can emerge out of two more distinct types of environments and thus the design should be based on the characteristics of this categorization.
Moreover, we follow the approach that design, development, and establishment are not three distinct phases but rather aspects of a single re-iterating phase and thus propose the view of design, development, and establishment as a continuous process, running in parallel with and interrelated to the monitoring of the ecosystem evolution.
Key words: software ecosystems; software ecosystem design; software ecosystem health
1 Introduction
The notion of software ecosystems is argued to provide clear advantages compared to traditional software development and distribution as it, among other things, accelerates software development, reduces time to market, and increases user and customer segment reachability. It is not a surprise that within recent years we have experienced an increasing popularity of software ecosystems both as a topic of study and as a means of developing and distributing software (products). Despite this popularity, it is still very challenging to create software ecosystems, especially if one takes into consideration aspects of ecosystem survivability, productivity, or health. Few studies have investigated the conditions for the establishment of a software ecosystem, and even fewer propose ways of designing software ecosystems. Moreover, such studies tend to either be too specific to one type of ecosystem, and thus hard to generalize, or too generic, and thus hard to apply. These remarks are already identified in the most recent and extensive systematic literature review [10], which reviews a total of 231 academic publications studying 129 software ecosystems.
Contemporary public discussion on software ecosystems is much driven by the most visible players in the digital economy, the platforms and app stores of Apple and Google being the usual examples in the discussion. Among practitioners, this has led to a platform-centric view of ecosystem thinking where a platform provider is needed to orchestrate an ecosystem. Further, the terms platform and ecosystem are closely connected, if not treated almost as synonyms. However, the literature has presented a variety of ecosystems and value networks beyond the platform-centric approach, such as ecosystems built around standards, common business, and commonly adopted infrastructure [8].
The limitations of platform-centric ecosystem thinking are, to some extent, visible also in our common thinking on how to build ecosystems. That is, we tend to think that the only way to build an ecosystem is to build a platform and attract participants to it by some means, typically by providing financial benefits to the participants. This underlying assumption may lead to ignoring a wider view on how to build ecosystems, as the viewpoints of the actors in the value network and the value creation in the business domain are overlooked, if not excluded totally from our thinking.
In this paper, we take the wider view to building ecosystems. We start our journey towards a method for building ecosystems from the observation that ecosystems can emerge out of three distinct types of environments, and thus the design should be based on the characteristics of this categorization. We study two cases presenting an actor-rooted and a business-rooted approach to ecosystem building. Adding findings from the two cases to the infrastructure-rooted approach (including the platform-centric approach), we propose a process for designing, developing, and establishing software ecosystems based on three basic steps and a set of activities for each step. Moreover, we follow the approach that design, development, and establishment are not three distinct phases, but rather aspects of a single re-iterating phase, and thus propose the view of design, development, and establishment as a continuous process, running in parallel with and interrelated to the monitoring of the ecosystem evolution.
2 Background and related work
The field of software ecosystems has been active for several years. From the first reference in the book of Messerschmitt and Szyperski [16] and the first publications in 2007 to the present day, there have been several studies that examine software ecosystems as a whole and attempt to analyse, model, classify, or design software ecosystems. In this context, Jansen et al. [7] proposed the analysis of software ecosystems from three perspectives: the software ecosystem level, the software supply network level, and the software vendor level.
Campbell and Ahmed [1] propose the analysis of software ecosystems into three components. Manikas and Hansen [14] analyse the literature on software ecosystems and identify, among other things, a lack of consistency in what a software ecosystem is. They analyse the existing definitions and identify three main components: common software, business, and connecting relationships. Christensen et al. [3] propose the modelling and design of software ecosystems based on the concept of a software ecosystem architecture consisting of three structures: organizational, business, and software. Knodel and Manikas [8] challenge the existing definition of software ecosystems and propose a set of building blocks for software ecosystems. Manikas and Hansen [13] focus on the concept of ecosystem health, where they analyse the literature and propose a framework for defining ecosystem health. Hyrynsalmi et al. [6] expand on this work to include 38 papers on health, while Hansen and Manikas [5], inspired by natural ecosystems, focus on defining the influence of individual actors on the ecosystem.
3 The cases
In this section we discuss and analyze two cases of designing and building a software ecosystem. The first case is the telemedicine ecosystem established around the telemedicine services of the Danish healthcare system, and the second case is the smart city ecosystem established around the smart city and Internet of Things (IoT) infrastructure and services in an area of one of the most populated cities in Finland.
3.1 Telemedical ecosystem
Danish healthcare, following the tendency in many other western countries, is facing a number of challenges due to changes in demographics. The increase in life expectancy and decrease in birth rate, in combination with a rapid increase in lifestyle conditions and the continuously improving healthcare diagnosis and treatment, are putting pressure on the economics of a welfare-based healthcare system\textsuperscript{1} and position the continuous care of the elderly and the chronically ill in even more central focus [9]. Telemedicine comes as a solution to these challenges. Telemedicine is understood as the provision of healthcare at a distance. However, telemedical
1 I.e. funded indirectly by collected tax.
technologies are faced with severe integration and interoperability issues caused by the increasing need to interact with other medical system characterized as "silo" solutions and organizationally complex systems [2]. The establishment of a software ecosystem comes to address these technical challenges and abstract the development of telemedical solutions from the resource-heavy task of integration and distribution.
Thus, the establishment of the telemedical ecosystem deviates from the typical view of ecosystem emergence (i.e. from a successful platform or product). In this ecosystem, the design was motivated by a set of clear incentives. The state and healthcare authorities have been part of shaping and clarifying the incentives; however, these actors have not otherwise been active in the design and establishment of the ecosystem. Therefore, during design, the ecosystem was characterized by a lack of orchestration. The steps taken to establish the ecosystem were:
- Identify and map the existing (and future) actors, (software) systems and their relationships [12].
- Identify the incentives for the different actors and make them explicit [3].
- Build the infrastructure that will support the ecosystem.
3.2 Smart city ecosystem
The second case is the establishment of an ecosystem in the smart city domain. Digital technologies are considered to form a foundation for so-called smart cities. Smart cities are complex systems and consist of multiple domains like transportation, energy, living, and governance. Smart city domains utilize digital technologies by collecting and storing both private and public data. They increasingly release public information and data sets to external parties. The idea behind releasing the public data sets is to provide a possibility for external stakeholders to develop and create smart applications and services for citizens. Naturally, an ecosystem would support and facilitate the interaction between actors and smart city services. An example is the environment for agile software and internet of things product and service development and experimentation with real users (citizens) in real-world settings [4].
In this context, our case, an urban area in one of the ten most populated cities in Finland, is in the process of establishing a smart city ecosystem. The ecosystem establishment process was initiated by a set of actors interested in the smart city domain. These actors created a consortium that aimed at promoting the interaction of digital and software services in collaboration with independent business models, i.e. an ecosystem. The purpose of the smart city ecosystem is to develop new applications and internet of things service solutions in collaboration with construction companies, smart grid providers, nursing houses, city governance, and citizens. The initial actors in the smart city ecosystem included representatives from universities and the city, as well as stakeholders from the private
\(^2\) A more detailed description of this work can be found in [3, 9].
sector, like the network service providers, telecommunications operators, smart locking service providers, and organizations in the privacy and digital identity domains. The citizens have a central role in the smart city district. As an outcome of the smart city ecosystem, new applications and services are created to improve the quality of citizens’ everyday life and enhance the research and value creation of modern digital technology services in the smart city domain. The process of establishing the ecosystem included the following steps:
- Identify and map ecosystem (to-be) actors.
- Define business aspects: actor incentives, value propositions, customer segments, and revenue streams.
- Build technological infrastructure (e.g. platform) to support the ecosystem.
4 Proposed approach
As noted, the two case studies are examples of ecosystems established around something other than a common technological infrastructure (or platform). The telemedicine ecosystem is business-rooted\(^3\), while the smart city ecosystem is actor-rooted\(^4\). These two cases contribute different perspectives on how ecosystems are established. They add more parameters to the up-to-now knowledge of ecosystems being created by a successful or popular technological infrastructure (platform) [10, 11, 14].
To the best of our knowledge of the field, there is no previous work suggesting an applicable and holistic or generic (i.e. applicable to most or all types of ecosystems) way of creating a software ecosystem. This is the gap that we are trying to address with this approach, as we argue that a method for designing ecosystems that is easy to apply and mature enough would support the maturity of the field both theoretically and empirically.
In our approach, we propose the view of ecosystem design, development, and establishment as one continuous and re-iterative phase rather than three distinct phases. In order to initiate this process, the basic information needs to be collected and the first initial designs need to be drawn. Thus, we identify three main steps in our process to conduct the necessary work for the iterative design. Figure 1 shows the proposed steps and the tasks included in each step. Our approach includes three main steps: pre-analysis, design, and evaluate & monitor. In the subsections below we describe these steps. Our approach has a strong focus on ecosystem health; thus, apart from the design, we support the view of continuous monitoring and evolution of the ecosystem, making the separation between design and establishment unclear. This is reflected in step 3.
Furthermore, taking the approach demonstrated by our cases, we identify that ecosystem design can occur based on three different ecosystem types: *infrastructure-rooted*, where the ecosystem is established around a technological infrastructure\(^5\); *actor-rooted*, where the establishment is around a strong actor consortium; and *business-rooted*, where the ecosystem is established around a strong business (or incentives).

---

\(^3\) I.e. initiated by strong actor incentives.

\(^4\) I.e. initiated by a set of actors to drive the ecosystem development.

Fig. 1. Ecosystem design steps (software ecosystem design guide):

Step 1: Pre-analysis
- 1.1 Domain
- 1.2 Scope
- 1.3 General principles
- 1.4 Existing ecosystem aspects: actors, technological infrastructure, business

Step 2: Design
(a) Infrastructure (platform) rooted
- 2.1 Identify extension possibilities
- 2.2 Define business: incentives; value proposition (for the actor, for the ecosystem); customer segment; revenue stream
- 2.3 Map existing & new actors: actors, roles, contributions; interaction (among actors, actor to software)
- 2.4 (Re)define orchestrator strategy
- 2.5 Open/extend infrastructure
- 2.6 Involve actors
(b) Actor rooted
- 2.1 Map actors: actors, roles, contributions; interaction (among actors, actor to software)
- 2.2 Define business: incentives; value proposition (for the actor, for the ecosystem); customer segment; revenue stream
- 2.3 Define/identify orchestrator strategy
- 2.4 Build technological infrastructure
(c) Business rooted
- 2.1 Define business: incentives; value proposition (for the actor, for the ecosystem); customer segment; revenue stream
- 2.2 Map existing & new actors: actors, roles, contributions; interaction (among actors, actor to software)
- 2.3 Define/identify orchestrator strategy
- 2.4 Build technological infrastructure

Step 3: Evaluate and monitor
- 3.1 Desired behavior
- 3.2 Define ecosystem measures
- 3.3 Define iteration/observation intervals
- 3.4 Iterate ecosystem evaluation: measure, evaluate, act
4.1 Step 1: Pre-analysis
The initial step in the design of a software ecosystem is to identify the general information and characteristics of the future ecosystem. This includes identifying the applied *domain* of the ecosystem, i.e. how the domain is defined and what the general characteristics of this domain are. Further, this step includes defining the *scope* of the ecosystem and marking the borders of what is considered part of the ecosystem. Moreover, this step includes identifying the general *principles* of the ecosystem, i.e. core values and characteristics that are essential for the ecosystem [15]. Finally, the pre-analysis step includes identifying what aspects of the future ecosystem already exist and can form the base for the future ecosystem. This step determines whether the ecosystem is actor, infrastructure, or business rooted in step 2.
4.2 Step 2: Design
If we examine how ecosystems are created, the most common way appearing in the literature is from a (software) company opening their platform to external actors or from an open source software (OSS) project that is gaining popularity. Examining the existing ecosystems in the industry (or in the literature, e.g. the list in [10]), we note that this is not the only way that these ecosystems were established. Part of our proposed approach is to tailor the ecosystem design and establishment according to the different aspects that already exist in the domain of the future ecosystem. The above-mentioned examples of OSS projects or companies opening their platform are examples of *infrastructure-rooted* ecosystems-to-be, since they have the base of what could eventually become the common technological infrastructure of the future ecosystem. Another category is the *actor-rooted*, i.e. ecosystems where there is a (strong) set or network of actors that can form the core of the future ecosystem. Finally, there is also the *business-rooted*, where there is an existing business potential and incentives (not necessarily for and from many actors) that can be the main drivers for the establishment of an ecosystem. An example of this can be found in the literature on the evolution of vertical software industries, where ecosystems emerge around new standards and platforms to enable effective collaboration between businesses/enterprises [17, 18]. Clearly, the steps towards designing and establishing an ecosystem differ depending on the already existing aspects. Sub-steps (a), (b), and (c) list the actions for each type.
\textsuperscript{5} Here, following the approach of \cite{8}, we note that an infrastructure can be, apart from a platform, a standard or a protocol.
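To make the tailoring concrete, a minimal sketch of dispatching Step 2 by rooting type is given below. The business-rooted sub-steps follow Fig. 1; the entries for the other two rooting types are deliberately left as placeholders, since their sub-steps depend on which aspects already exist.

```python
# Step names follow Fig. 1; the dispatch itself is our illustration.
DESIGN_SUBSTEPS = {
    "business-rooted": [
        "2.1 define business (incentives, value proposition, customer segment, revenue stream)",
        "2.2 map existing & new actors (roles, contributions, interactions)",
        "2.3 define/identify orchestrator strategy",
        "2.4 build technological infrastructure",
    ],
    # "infrastructure-rooted" and "actor-rooted" reorder and adapt these steps.
}

def design_steps(rooting: str) -> list:
    return DESIGN_SUBSTEPS.get(rooting, ["(sub-steps not sketched here)"])

for step in design_steps("business-rooted"):
    print(step)
```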
\textbf{4.3 Step 3: Evaluate and monitor}
Finally, as already explained, our approach views design and development as a continuous and iterative process, in which software ecosystem design, development, and establishment are not distinct phases but parts of one continuous, re-iterative phase. To achieve this, the ecosystem should be constantly monitored for its evolution and reaction to changes, and potential deterioration should trigger new actions on the ecosystem architecture or orchestration. Thus, this step includes activities that focus on identifying what should be measured in the ecosystem to detect evolution and changes in ecosystem health. After the measures are identified, monitoring and evaluation activities focus on (i) intervening in the operation of the ecosystem with changes and (ii) evaluating the effect of potential changes (as well as of the design as a whole). It is essential to underline that the identification of measures defines the scope of action within the ecosystem. Too narrow a set of measures might result in a lack of overview of the whole ecosystem, while inaccurate or poorly defined measures might lead to wrong conclusions about ecosystem activity and evolution.
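A minimal sketch of one iteration of the measure/evaluate/act cycle follows. The measure function, the floor threshold, and the corrective action are invented for illustration; the method prescribes the loop, not these particulars.

```python
def evaluate_once(ecosystem, measures, floors, act):
    """One iteration of the measure -> evaluate -> act cycle of Step 3."""
    readings = {name: fn(ecosystem) for name, fn in measures.items()}  # measure
    degraded = {n: v for n, v in readings.items() if v < floors[n]}    # evaluate
    if degraded:
        act(ecosystem, degraded)                                       # act
    return readings

# Toy usage with one made-up health measure and one corrective action.
eco = {"active_actors": 12}
readings = evaluate_once(
    eco,
    measures={"active_actors": lambda e: e["active_actors"]},
    floors={"active_actors": 20},
    act=lambda e, d: print("intervene on:", d),
)
print(readings)
```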
5 Discussion
This paper aims at bringing focus to a central issue in the field of software ecosystems by proposing a method for designing ecosystems. Although generic and applicable, our method does not cover all the possible and potentially essential aspects of ecosystem design and evolution. One relevant aspect not adequately discussed is the orchestration of software ecosystems. Orchestration is a central aspect of the health and evolution of an ecosystem, and eventually the design of an ecosystem should include concrete considerations on orchestration, in order to support the different characteristics of the ecosystem, its domain, and its scope. Another relevant aspect is the establishment of the proper interfaces, both technical and organizational. The different interfaces between the software components (e.g. in the common technological infrastructure) and between the different actors should reflect the orchestration strategy of the ecosystem and respect the domain, borders, and roles of the ecosystem and its actors.
Finally, as already discussed, the choice of proper measures for monitoring the ecosystem is central to its evolution in the right direction. The monitored measures should influence the ecosystem orchestration as much as they are influenced by it.
6 Conclusion and future work
In this work, we address the gap in research and industry on how to “create” software ecosystems. Drawing on our knowledge of the software ecosystem literature and industry, and on our experience in designing software ecosystems, we propose a method for designing software ecosystems that is easy to use and apply. Our method consists of three steps and a set of activities for each step.
We are currently empirically validating and improving this method. Further work includes empirical evaluation and improvement with cases of each design type. Moreover, we plan to identify characteristics of the method for specific domains, i.e., how this “generic” method changes when applied to a domain with specific characteristics. It is our hope that this will be a first step towards a better informed and explicit design of software ecosystems and, eventually, further maturity in the field.
Acknowledgments
This work was partially conducted under the 5K\textsuperscript{6} project, co-funded from the TEKES foundation (all authors), and the SCAUT\textsuperscript{7} project, co-funded by Innovation Fund Denmark, grant #72-2014-1 (Manikas).
\textsuperscript{6} https://agoracenter.jyu.fi/projects/5k
\textsuperscript{7} http://www.scaut.dk/
Issues in Knowledge Representation to Support Maintainability: A Case Study in Scientific Data Preparation
Steve Chien, R. Kirk Kandt, Joseph Roden and Scott Burleigh
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109-8099

Todd King and Steve Joy
Institute of Geophysics and Planetary Physics
University of California at Los Angeles
Los Angeles, CA 90024-1406
Abstract
Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extracting data from compressed forms, and construction of derivative or aggregate data (e.g. spectral densities or running averages).
A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance capability by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention upon those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks.
Because the scientific data processing modules (called fittings) evolve to match scientists' needs, issues regarding maintainability are of prime importance in PIPE. This paper describes the PIPE system and describes how issues in maintainability affected the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.
Introduction
Scientific data preparation is defined as the application of multiple transformations to collected data sets in order to produce data in an easily usable form. The questions a scientist asks dictate which data are to be collected as well as which transformations are to be applied. The need for simplified scientific data preparation has increased due to the volume of data now collected and the diverse uses for any specific type of data. Automated scientific data processing systems can be used to simplify this process.
While general scientific data processing systems have existed for some time, the complexity of data types and transformations required in specific domains renders these systems of limited utility. As a result, many scientific teams develop their own software systems to accomplish the data preparation required in their specific domain. These systems suffer because they become too specific, and the effort spent developing them is only of value within the context of a particular domain and task. Because scientists desire to reuse their work, hybrid systems are appearing which provide useful analysis tools and definitions of domain-specific data types and transformations. Plans developed in these systems specify which of the transformations to apply to a collection of data sets. By the nature of the processing steps required in many domains, these plans can become quite complex. We are now at a point where the complexity of these tools requires significant expert knowledge to use them.
Master Plumber [King & Walker 1991] is a software tool developed by the UCLA Institute of Geophysics and Planetary Physics to create programs to prepare scientific data. While its primary area of application has been time-series magnetometer data, the tool is applicable to the general task of scientific data preparation.
Master Plumber is a dataflow system. Thus, in Master Plumber, data elements are represented by columns, which are streams of data being processed as they move through the system. Data processing steps are called fittings, and a
plan to process a particular form of a dataset into another form is called a blueprint.
Thus, as shown in Figure 1, raw data might be read in using an intro flatfile fitting, a running average computed using a runstat fitting, and the results written into an output file.
1. intro flatfile infilename=foo
columns=bx
2. runstat length=1287 shift=1
columns=bx
3. write flatfile outfile=bar
columns=bx, rabx overwrite=YES
Figure 1: A Simple Blueprint
A major difficulty in constructing blueprints is tracking the many fitting and column interactions. While a typical blueprint might use 25 columns and 20 fittings, the more complex blueprints use hundreds of columns and 30 or more fittings. Because of the number of possible interactions, constructing and debugging scientific data preparation blueprints is a time-consuming task requiring expert knowledge.
Because of the complexity of the data preparation task, users sometimes make errors in blueprint construction. One type of construction error occurs when a user forgets to set up the data needed for a particular step. Unfortunately, this type of error can go unnoticed until far into the execution of the blueprint, wasting valuable time.
Another common situation is that the exact method of processing the data is dependent upon the character of the data. In this case the user will use some default methods for processing the data, examine the results, and modify the options. This tuning cycle continues until the data is in a satisfactory form.
The final aspect of blueprint development which complicates the development process is that new fittings are added to a system as new needs and requirements arise. In addition, new fittings also evolve with new options and characteristics being added. Any intelligent tool must be readily changed to remain useful in such a dynamic environment.
Currently there are approximately 65 fittings which are part of the standard Master Plumber system. These fittings perform a variety of transformations on the data flow, such as: introducing and writing data into several formats; displaying data on the screen; and actual numerical transformations. There are support libraries which allow for fittings to be written in either C or FORTRAN. A special fitting called PLISP takes programs written in a C-like language and performs the transformations on the data flow. This allows for new processing steps to be initially tested as PLISP programs and later be integrated as full-fledged fittings into the Master Plumber system.
Some scientists use data preparation systems indirectly with the help of software support personnel who write and debug the actual data preparation plans. The goal of PIPE is to make Master Plumber easy enough to use that this type of support is not necessary. The combination of PIPE and Master Plumber will allow blueprint developers to build blueprints more easily and quickly, allowing them to spend more time on data analysis and less time on data preparation.
Overview
To achieve these goals of assistance in the scientific data preparation process, PIPE [Chien et al. 1992] provides four capabilities:
1. constraint checking to detect invalid blueprints before execution;
2. diagnosis assistance of blueprints through dependency analysis;
3. optimization of blueprints through dependency analysis; and
4. runtime estimation, using models of fitting runtime performance.
The architecture of the PIPE system is shown in Figure 2. PIPE accepts a blueprint file and a set of descriptors for datafiles and uses a fittings knowledge base to construct a dependency graph representing the computations to be performed by each of the fittings in the blueprint. This blueprint parsing phase uses knowledge of fittings and their options to construct a dependency graph, which indicates for each fitting which columns are accessed and used to modify existing columns, create new columns, or remove existing columns. This dependency graph can then be used by the constraint checking module which determines if any of the constraints associated with the fittings have been violated.
In cases where blueprints must be debugged, PIPE can use the dependency graph to support isolation of the fault in the blueprint. Because the dependency graph tracks all of the operations upon the columns, when the user detects an error in one of the output columns, PIPE can present a list of fittings which modified the column in question. The user can then focus his attention upon these fittings, to determine where the error was introduced into the data, sometimes by plotting intermediate data. After isolating the first fitting at which the column is faulty, the user can query PIPE for information on the fitting to determine which columns were used to compute the changed column. This process continues until the fault is isolated to the column, fitting option settings, or fitting code itself.
PIPE also provides an optimization capability. Because PIPE constructs a full computation dependency graph, PIPE can determine the last fitting in which each column of data is used in the blueprint. Thus unneeded data can be removed from the dataflow, decreasing the execution time. Because many fittings operate on data by default, PIPE distinguishes between default processing and explicit
processing. Default computation which does not result in a program output (e.g. plot, output file) can also be removed.
Finally, PIPE provides a runtime estimation capability. Using the dependency graph to determine which columns each fitting processes, and models of runtime for each fitting type, PIPE can provide an estimate of how long the blueprint will take to run to completion for the specified datafiles.
Blueprint Parsing
In order to provide assistance in blueprint development, PIPE constructs a dependency network representation of a blueprint. When a blueprint is read in by PIPE, it is processed from the first step onward. For each fitting, PIPE uses:
- methods stored in the fitting knowledge base,
- default values stored in the fittings knowledge base,
- fitting options,
- a list of existing columns in the flow, and possibly
- an input file
to determine:
- any new columns created by the fitting,
- any existing columns modified by the fitting,
- any existing columns deleted by the fitting.
Additionally, for any new or modified columns, PIPE determines:
- the set of columns accessed in computing the value for the column.
Because columns may be processed by default or explicitly selected, the dependency network also makes note of this distinction. This facet of the processing is important in order to take appropriate action when optimizing the blueprint (see below).
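As an illustrative sketch (not PIPE's actual data model), the per-fitting information described above can be pictured as a small record, including the default-versus-explicit distinction:

```python
from dataclasses import dataclass, field

@dataclass
class FittingEffect:
    step: int            # position of the fitting in the blueprint
    name: str            # fitting type, e.g. "runstat"
    created: dict = field(default_factory=dict)   # new column -> columns accessed
    modified: dict = field(default_factory=dict)  # changed column -> columns accessed
    deleted: set = field(default_factory=set)
    by_default: set = field(default_factory=set)  # columns processed by default

# The runstat step of Figure 1: column bx is read to create column rabx.
effect = FittingEffect(step=2, name="runstat", created={"rabx": {"bx"}})
print(effect)
```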
Constraint Checking
Constraint checking occurs while the blueprint file is being parsed (i.e., prior to execution). A description of the constraint checking algorithm follows.
**During Parsing**

    for each fitting in the blueprint
        for each option specified
            check option type constraints
        check for required options

**After Parsing**

    for each parsed fitting in blueprint
        for each option in fitting
            check option value constraints
            check inter-option constraints
        check dependency constraints
        check inter-fitting constraints
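The following toy version of this pass, with a made-up knowledge base entry for the bin fitting, shows how option type and requirement constraints could be checked; PIPE's real constraints live in its fitting knowledge base files and generated C code.

```python
# Hypothetical knowledge base entry; field names are our illustration.
KB = {
    "bin": {
        "required": {"columns", "delta"},
        "types": {"delta": float, "min_max": str, "columns": str},
    }
}

def check_fitting(name, options):
    errors, spec = [], KB[name]
    for opt, val in options.items():
        expected = spec["types"].get(opt)
        if expected and not isinstance(val, expected):
            errors.append(f"option {opt}: {expected.__name__} value required")
    for opt in spec["required"] - options.keys():
        errors.append(f"required option {opt} not found")
    return errors

# Reproduces the spirit of the paper's example: min_max given without a value.
print(check_fitting("bin", {"columns": "bx", "delta": 60.0, "min_max": None}))
```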
Diagnosis Assistance
PIPE also provides a blueprint diagnosis facility. This capability supports two basic types of queries: column-centered queries and fitting-centered queries. The column-centered queries are of the form
"What fittings affected <column> before <fitting>?
and default to the entire blueprint. This question can be easily answered using information from the dependency network. PIPE steps through the fittings in the blueprint and determines those fittings which create, modify, or delete <column>. This list of fittings is then displayed to the user in graphical form. The fitting centered queries are of the form
"What columns did <fitting> affect?", and
"What columns did <fitting> access in performing its processing to affect these columns?"
These types of queries can be answered by interpreting the dependency graph information on the designated fitting. The first query can be answered by determining the set of columns created, modified or deleted by the fitting. The second query can be answered by accessing dependency network information regarding which columns were accessed by the fitting in performing these operations.
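Both query types can be pictured as simple lookups over an ordered list of per-fitting effect records. The sketch below mirrors the runstat/drano/plisp scenario used later in the paper; the record structure and the accessed-column sets are our illustration, not PIPE's real API.

```python
# Ordered dependency network: one effect record per fitting.
network = [
    {"step": 9,  "name": "runstat", "created": {"raraby": {"raby"}}, "modified": {"raraby": {"raby"}}},
    {"step": 10, "name": "drano",   "created": {"o2": {"raraby"}},   "modified": {}},
    {"step": 12, "name": "plisp",   "created": {},                   "modified": {"o2": {"o2"}}},
]

def fittings_affecting(column, before=None):
    """Column-centered query: which fittings created or modified <column>?"""
    return [f for f in network
            if (before is None or f["step"] < before)
            and (column in f["created"] or column in f["modified"])]

def columns_accessed_by(step, column):
    """Fitting-centered query: which columns were read to produce <column>?"""
    f = next(f for f in network if f["step"] == step)
    return f["created"].get(column) or f["modified"].get(column, set())

print([f["name"] for f in fittings_affecting("o2")])  # ['drano', 'plisp']
print(columns_accessed_by(10, "o2"))                  # {'raraby'}
```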
Blueprint Optimization
PIPE also provides a limited blueprint optimization capability. In this capability, PIPE examines the dependency graph of each column and determines the last fitting at which each column is accessed explicitly (i.e., not by default). PIPE then recommends removing the column immediately after this fitting. If the column is not processed in the remainder of the blueprint, this removal does not significantly alter the runtime of the blueprint. However, many fittings process all of the columns in the flow by default. Thus, when a column that would be processed by default in the remainder of the blueprint is removed from the data flow, a significant speedup can result. While commonly used blueprints are likely to have unused columns optimized by hand, automating this process relieves the user of the burden of determining the point at which a column can be removed. Additionally, by allowing PIPE to automatically determine the correct places to remove columns, PIPE reduces the chance that a user will inadvertently remove a column from the data flow prematurely, which would cause an error.
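The core of the removal rule is a single pass that records, for each column, the last step at which it is accessed explicitly; the data below are illustrative:

```python
explicit_access = [       # (step, columns accessed explicitly at that step)
    (1, {"bx"}),
    (2, {"bx"}),
    (3, {"bx", "rabx"}),
]

def removal_points(accesses):
    last_use = {}
    for step, cols in accesses:
        for col in cols:
            last_use[col] = step   # later steps overwrite earlier ones
    return last_use                # column -> remove after this step

print(removal_points(explicit_access))  # {'bx': 3, 'rabx': 3}
```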
Runtime Estimation
The final capability that PIPE provides is runtime estimation. PIPE estimates the runtime of a blueprint for a specific data set by applying the following algorithm:
    for each fitting in the blueprint
        identify fitting runtime model
        compute runtime given dataset size
        add runtime to total runtime
        compute new size of dataset
Tracking the size of a dataset in Master Plumber can be a difficult task. Original data set sizes are determined from input files. When data of different temporal granularity are introduced into an existing flow, or when decimation operations are performed, data set sizes will need to be recomputed. Sometimes a fitting can affect the size of the dataset in a manner that depends on the exact data processed. In these cases, the exact dataset size cannot be determined, so PIPE estimates the size of the dataset at the output of the fitting. These estimations are sufficient for giving the user reasonably accurate runtime estimates.
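A toy version of the estimation loop might look as follows. The per-record costs and size factors are invented numbers; PIPE's real fitting models are calibrated per fitting type and per machine.

```python
COST_PER_RECORD = {"intro": 0.5e-3, "runstat": 1.2e-3, "write": 0.4e-3}  # seconds
SIZE_FACTOR = {"intro": 1.0, "runstat": 1.0, "write": 1.0}  # decimation would be < 1.0

def estimate_runtime(blueprint, n_records):
    total = 0.0
    for fitting in blueprint:
        total += COST_PER_RECORD[fitting] * n_records      # cost at current size
        n_records = int(n_records * SIZE_FACTOR[fitting])  # recompute size downstream
    return total

print(f"{estimate_runtime(['intro', 'runstat', 'write'], 100_000):.1f} s")
```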
An Example
We now illustrate each of the capabilities of PIPE using example blueprints. For an example of constraint checking, suppose a user has created a blueprint containing the following statement:
4. bin columns=bx delta=60.0 min_max
Because the option min_max requires that a value be specified, PIPE would indicate a constraint error such as:
* Fitting 4. bin option min_max required value not found; string type required.
As another example of the constraint checking, consider the following blueprint statement:
7. crossavg except=time avgname=xavg
Assuming the user removed the column named time earlier in the data flow, PIPE would issue a constraint error indicating:
* Fitting 7. crossavg option except undefined column time; a column with that name was deleted at fitting 4. drano.
An example of the diagnosis capability supported by PIPE is illustrated in the following scenario. Figure 3 shows a Master Plumber blueprint file. Suppose that the user
examines the output of the blueprint and determines that column o2 is producing results that are incorrect. The user tries to determine what may have affected column o2 by querying PIPE:
Q: Which fittings created or modified column o2?
A: Fitting 10. drano created column o2.
Fitting 12. plisp modified column o2.
The user determines that the o2 column was already incorrect before fitting 12. plisp, so the user wants to determine which columns were accessed and used in creating o2.
Q: Which columns were accessed by fitting 10. drano in order to create column o2?
A: Column raraby was accessed by fitting 10. drano in order to create column o2.
The user then continues backtracking through the blueprint to isolate the error:
Q: What fittings before fitting 10. drano modified column raraby?
A: Fitting 9. runstat created and modified column raraby.
By using PIPE in this way, the user can focus his attention directly upon the possibly faulty fittings instead of having to examine every fitting and column.
PIPE also uses the dependency graph to optimize blueprints. Because PIPE can determine which fittings modify which columns in the blueprint, PIPE can determine the last point at which each column is needed in the blueprint. In the example blueprint shown in Figure 3, PIPE makes the following recommendations for removal:
- never introduce column rim
- remove sens_x, senx_y, sens_z and bz after fitting 4
- remove bx, by after fitting 8
- remove rabx, raby after fitting 9
- remove bxc, byc, bzc, and stime after fitting 12
PIPE also provides runtime estimation capabilities. For the optimization example shown above, PIPE estimates that the non-optimized blueprint will take 11:32 +/- 1:04 to run and the optimized blueprint will take 9:58 +/- 0:58 to run.
Issues in Design for Maintainability
The central concern in the PIPE knowledge representation was that the PIPE knowledge base be easy to maintain. While this is a concern in any knowledge-based system, it was particularly important in PIPE because fitting capabilities, options, and defaults evolve with changing scientists' needs. The majority of the knowledge represented in PIPE is used for the pre-runtime constraint checking. Thus, we focused on ensuring that these constraints are in a form that requires minimal change when fittings change.
In order to be easily maintainable, fitting constraints are implemented in three ways. First, basic option requirement constraints and argument requirements are specified in a simple language. This specification is then combined with a translator to generate C code which checks the options and option values against type and option requirement constraints. For example, each option for a fitting may be optional, or required (e.g., all fittings of this type must have this option specified), or be allowed to appear multiple times. Additionally, for each option, arguments have associated constraints (e.g., all occurrences of this option must have an argument specified with the option). This structure affects maintainability as follows. When a change to a fitting is made which affects this information, the specification must be changed in the fitting knowledge base file. A translator is then used to automatically regenerate the associated constraint checking code so that future constraint checking corresponds to the updated fitting.
The second type of constraint covers simple, commonly occurring constraints, such as range constraints and inter-option range constraints (e.g., the value of option 1 must be greater than the value of option 2). These constraints are represented in a simple constraint language and stored in the fitting knowledge base file. When the fitting and option information in the blueprint is extracted, these constraints are checked by a C code module which uses the constraint information in the fitting knowledge base file to check the extracted options and arguments. Thus, when a change to a fitting affects this constraint information, the constraint information in the fitting knowledge base file must be updated. Thereafter, when the fitting is parsed, the updated constraint information will be used.
The third type of constraint information is represented directly as C code. This flexibility is needed because certain forms of constraints among options are not easily represented in general languages, or occur so infrequently that supporting them in the general case is impractical. This type of constraint information is contained in an explicit C function, whose name is specified in the fitting knowledge base file. When changes to the fitting impact this information, the relevant code must be modified, recompiled, and re-linked.
Another type of knowledge encoded in a flexible fashion is the runtime models. This information indicates how much time each processing step will take as a function of parameters including the option settings, the number of data records in the dataflow, and the computer being used. Models covering new fittings can be constructed in two ways. First, existing runtime models can be used as templates; in this case, creating a runtime model for a new fitting corresponds to filling in the appropriate parameters in the model. Second, a new fitting model can be created from scratch (and would serve as a potential template for future fittings).
Discussion
The current prototype version of PIPE was completed in July 1991. It is implemented in Common LISP and LISPView and runs on Sun workstations. It operates as described in this paper, with the exception that it does not distinguish between columns accessed for different computations within a fitting (i.e., it only determines the set of columns used to compute all of the new or modified columns). For instance, suppose the runstat fitting uses column bx to create column rabx and also uses column by to create column raby. The current implementation can only state that the fitting uses columns bx and by to create columns rabx and raby. In contrast, the new implementation will be able to isolate bx as the column used to create rabx, and by as the column used to create raby. Also, the current prototype version operates on actual blueprint files but is not integrated with Master Plumber or MPTool, a menu-driven interface for blueprint construction in Master Plumber.
Work is underway on the deliverable version of PIPE. This version is being implemented in C++, and is expected to be completed in May of 1992. The deliverable version of PIPE will use the more refined dependency representation described in this paper. This version will be integrated with Master Plumber and MPTool, and is intended to be delivered to and used by IGPP personnel at UCLA. This version of PIPE will also incorporate feedback upon the "look and feel" of the interface specified by IGPP personnel.
There are numerous related projects in providing intelligent assistance in scientific computing. The Kineticist's workbench project at MIT [Abelson et al. 1989] targets modelling and analysis of dynamic systems. The SINAPSE system [Kant et al. 1990] assists in construction of numerical models for data interpretation but is specific to seismic models represented as finite difference equations. The Reason system [Atwood et al. 1990] supports analysis of high energy physics data (and is a dataflow system). Finally, the Scientific Modeling Assistant project [Keller 1991] addresses support to facilitate development of scientific models.
Summary
This paper has described a system to assist in the development of scientific data preparation programs and discussed issues in design for maintainability. This issue of maintainability was particularly important because the processing modules (fittings) are constantly evolving due to changing scientists' needs. In order to maximize maintainability of the constraint knowledge base, information for each fitting is encapsulated in a fitting knowledge base file and as much as is practical, constraint information is represented in a general declarative fashion.
Acknowledgements
This work was performed by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
Inconsistency Repair to Improve the Alignment Results of Ontology Matchers
Bahareh Behkamal
Department of Computer Engineering
Ferdowsi University of Mashhad
Mashhad, Iran
b.behkamal@mail.um.ac.ir
Mahmoud Naghibzadeh
Department of Computer Engineering
Ferdowsi University of Mashhad,
Mashhad, Iran
naghibzadeh@um.ac.ir
Received: October 22, 2016; Accepted: March 14, 2017
Abstract— Ontology inconsistency is one of the most important topics in the field of ontology matching. Many matchers have been introduced to date, but most of them suffer from inconsistencies. Many ontology matching tools have severe problems with respect to the quality of their matching results, and therefore the output of the matching process is not adequate. In this paper, we focus on this topic and present a new method to produce better results from the matching process. The major novelty of this paper lies in detecting the inconsistencies in ontologies before starting the matching process. In this phase, many problems caused by ontology diversity are resolved. In addition, some new patterns and inconsistencies in ontologies are detected, and refactoring operations are then applied to them. Finally, one of the well-known matchers in OAEI is selected to evaluate our work. Experimental results show that the transformed ontologies are more effective than the original unrepaired ones with respect to the standard evaluation measures.
Keywords- Ontology matching; Alignment; Inconsistency; Refactoring; Pattern detection
I. INTRODUCTION
The vast growth of data and communication on the web has produced a huge diversity of information, and the problem of managing heterogeneity across information resources is increasing. Many solutions have been proposed to alleviate this problem and, specifically, to automate the integration of distributed data resources. Among them, semantic technologies have attracted particular attention. One of the best semantic technologies in this field is ontology matching. Ontology matching is a technique that takes ontologies as input and extracts alignments as output. An alignment is a set of correspondences between entities of ontologies that are semantically related. These correspondences can be used for various tasks, such as ontology integration [1], ontology evolution [2], data integration [3], and data warehouses [4]. Many different tools for the matching process have been developed, but most of them suffer from problems with respect to the quality of their matching results. Thus, in this paper, we propose an approach to obtain better results from the matching process. The solution is to add a preprocessing phase to matchers. In the preprocessing phase, the input ontologies are analyzed in order to detect inconsistencies and inappropriate patterns modeled by various developers. For detecting these inconsistencies, the ontology preprocessing language (OPPL) is used. Then, refactoring rules are applied to the detected patterns to repair the inconsistencies in the input ontologies. Finally, the assimilated ontologies are delivered to matchers for the matching process. In this paper, one of the best matchers, namely ASMOV from OAEI, is selected for evaluating this work. The Ontology Alignment Evaluation Initiative (OAEI) is a coordinated international initiative which evaluates all matchers every year. As shown in Figure 1, ASMOV ranks well in comparison to other matchers with respect to the standard evaluation measures such as precision, recall, and F-measure [5]. Precision, recall, and F-measure are explained in Section V. The second reason for selecting ASMOV to evaluate our work is that it can produce n:m alignments, in contrast to other matchers such as SAMBO, Falcon, DSSim, RiMOM, Anchor-Flood, and AgreementMaker, which can produce only 1:1 alignments.
The rest of the paper is organized as follows: Section 2 introduces some general terminology relevant to this research, Section 3 gives a theoretical background of related work, and Section 4 elaborates on the methodology. In Section 5, the quality of the matching results is evaluated. Finally, Section 6 provides the conclusion and future work.
II. GENERAL TERMINOLOGIES
In the following, some preliminary definitions and terms that are relevant to this work and used throughout the paper are described.
A. Ontology
Ontology $O$ contains a set of entities connected by a number of relations. The entities of an ontology can be divided into the following components [6]:
- **Classes (C):** Classes define the concepts within the ontology
- **Individuals (I):** Individuals denote the object instances of classes
- **Literals (L):** Literals represent concrete data values
- **Data types (T):** Data types determine the possible types of those values
- **Object properties (OP):** Include the definitions of possible associations between two individuals
- **Data type properties (DP):** Include the definitions of possible associations between one individual and a literal.
There are four specific relations in ontology matching: equivalence, subsumption, disjointness, and membership.
B. Matching process
Matching is the process of finding the relations and correspondences between entities of different ontologies. The matching operation determines the alignment $A$ for a pair of ontologies. Generally, matching algorithms can be classified based on the input of the algorithms, the characteristics of the matching process, and the output of the algorithms. The input dimension focuses on the type of input on which algorithms operate. Algorithms can be classified depending on the data/conceptual models in which ontologies or schemas are described. The matching process can be classified based on its general properties; in particular, this depends on the approximate or exact nature of its computation. The output of a matching algorithm concerns the form of the alignment. For example, the correspondence between ontology entities is either one-to-one or not. Another dimension concerns the kind of relations between entities that a system can provide. Most matching software focuses on the equivalence (=) relation, while a few other systems are able to provide more expressive results (e.g. subsumption and incompatibility) [7, 8].
C. Alignment
The alignment of ontologies $o$ and $o'$ is a set of correspondences between two or more (in the case of multiple matching) ontologies. The alignment is the output of the matching process between the entities of $o$ and $o'$. The alignment can be achieved in various cardinalities: 1:1 (one-to-one), 1:m (one-to-many), n:1 (many-to-one) or n:m (many-to-many).
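As an illustration of these definitions, an alignment can be represented as a set of correspondences, each pairing two entities with a relation and a confidence. The field names and confidence values below are our own choices for the sketch, not a standard from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    entity: str          # entity from ontology o
    entity_prime: str    # entity from ontology o'
    relation: str        # '=', 'subsumption', 'disjointness', 'membership'
    confidence: float    # illustrative value

# An n:m alignment is simply a set of such correspondences; nothing
# restricts an entity to appearing in only one of them.
alignment = {
    Correspondence("Conference#conference-www", "Ekaw#website", "=", 0.8),
    Correspondence("Conference#conference-contribution", "Ekaw#rejected-paper", "=", 0.6),
}
print(len(alignment))
```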
D. Refactoring
Refactoring is recognized as changes that are made to the internal structure of the software in order to make it easier to understand and to modify without changing its observable behavior.
III. THEORETICAL BACKGROUND
To establish suitable semantic correspondences between entities of different ontologies, the integration of the input ontologies is needed. Unfortunately, many ontology matching systems ignore the semantics of the input ontologies in the matching process, and therefore the matching result is not satisfactory. In this paper, we attempt to combine four apparently distant areas to handle this problem: ontology matching, ontology patterns, ontology refactoring, and inconsistency repair. Accordingly, in this section, some research conducted in each of these areas is described.
Research in ontology matching has been burgeoning since the early 2000s. So far, most articles in the ontology matching field have focused on the matching process itself and have introduced matchers with diverse approaches. In this section, some matchers with high ranks in the Ontology Alignment Evaluation Initiative (OAEI) are introduced. ASMOV (Automated Semantic Matching of Ontologies with Verification) [6] applies lexical and structural characteristics of two ontologies to calculate similarity measures; the alignment is then verified to ensure that it does not contain semantic inconsistencies. RiMOM [9] is a dynamic multi-strategy ontology alignment framework that combines multiple strategies to improve matching efficiency. The key intuition in this framework is that similarity characteristics between ontologies may vary widely. This approach considers both the textual and structural characteristics of ontologies. RiMOM is a framework based on risk minimization in Bayesian
decision systems. It employs multiple ontology alignment strategies and sets combination weights. Another system is Falcon-AO [10], a practical ontology matching system with good performance, built on a number of remarkable features. It is an automatic ontology matching system that uses multiple elementary matchers (V-Doc, GMO and PBM), coordination rules, and a similarity combination strategy. The PROMPT [11] algorithm consists of an interactive ontology merging tool and a graph-based mapping called Anchor-PROMPT. Anchor-PROMPT [12] uses linguistic “anchors” as a starting point and analyzes these anchors in terms of the structure of the ontologies. GLUE [13] discovers mappings through multiple learners that analyze the taxonomy and the information within concept instances of ontologies. S-Match [14] is a deductive technique for semantic ontology matching which employs a number of element-level matchers to express ontologies as logical formulas and then uses a propositional satisfiability (SAT) solver to check the validity of these formulas. Generally, all of the above matching algorithms are classified into two categories: element-level and structure-level. Element-level matching techniques compute matching elements by analyzing entities in isolation, ignoring their relations with other entities. Structure-level techniques compute matching elements by analyzing how entities appear together in a structure and considering the relations of concepts in the taxonomy tree [15].
In recent years, some work on ontology patterns has been done [16-19]. Ontology patterns have been used in many fields, but they have rarely been applied in the field of ontology matching. Ontology patterns are mainly inspired by software engineering and knowledge engineering [20]. In the following, some previous work in the field of ontology matching that considers ontology patterns is described. The paper in [21] tests the impact of ontology refactoring on the results of three matchers, namely HMatch, Falcon-AO, and ASMOV. In that paper, some modeling errors were found via name structure analysis and three refactoring operations were applied. Considering semantic structures, the authors in [22] analyzed collections of OWL ontologies in order to determine the number of occurrences of several combined name and graph patterns. These structures ranged from simple subsumption to more complex constructions. The goal of that paper was to facilitate automatic alignment among different models by finding such patterns in the given ontologies. In [23], the authors concentrate on the detection and mutual matching of semantic structures in ontologies. The authors use the equivalence relation, as well as analyzing homogeneous correspondences. The research in [24] presents a simple method of tracking name patterns over OWL ontology taxonomies. This method helps to detect several probable taxonomic errors and modeling inconsistencies with respect to their set-theoretic interpretations. In [25], the authors applied weights to the edges of the WordNet hierarchy to improve semantic word similarity; furthermore, the distance between two words and the depth of words are utilized in the semantic similarity assessment. This approach can be applied in the inconsistency detection phase of the matching process. Ontology refactoring has been employed in many different areas [26-29], but its impact on the ontology matching field is rarely discussed [21, 23, 30]; in this paper, we focus on this matter. In [26], the authors focus on the detection of anomalies as an important criterion for verification. Some approaches for the syntactic verification of ontologies are explained, and the definitions are extended with respect to the existence of rules. Furthermore, novel measures are introduced for detecting the parts of an ontology that may create problems for maintainability. The paper [27] proposes an approach for refactoring multimodal knowledge on the basis of a generic data structure in order to support the representation of multimodal knowledge. Moreover, it explains how this data structure is created from given documents (i.e. the most general mode of knowledge) and how different refactorings can be performed by considering various levels of formality. In [29], the authors present the semantic knowledge wiki KnowWE, used to capture and share ontological knowledge for the effective elicitation of problem solving knowledge; a distributed knowledge acquisition process and a refactoring phase are also shown. In [30], a semi-automatic process for lifting meta-models into ontologies is proposed that enables the semantic integration of modeling languages. In so doing, implicit concepts in the meta-model are turned into explicit concepts in the ontology. The application of refactoring patterns on the resulting ontologies could improve automation support for semantic integration tasks.
The paper [28] presents a method to develop conceptual schemas as refinements of more general ontologies. To obtain the final conceptual schemas, three activities are performed: refinement, pruning, and refactoring. The refinement phase applies a set of additive operations to the ontology to create the necessary elements. Afterwards, in the pruning phase, unnecessary elements are deleted, and a pruned ontology is obtained. Finally, the pruned ontology can be improved by using refactoring operations to obtain the final conceptual schema.
IV. METHODOLOGY
In this section, an approach is proposed to improve the quality of the matching results. The aim of this approach is to improve the alignment results by finding the inconsistencies before the matching process. Our previous works [31], [32] focused only on lexical and structural patterns, but in this work we concentrate on some new inconsistency patterns. To accomplish this aim, a pre-processing phase is added to matchers. In the pre-processing phase, a comprehensive survey to find the inconsistencies in the input ontologies is first performed. Then, the various lexical and structural patterns that have been modeled by different developers are detected. Afterwards, some refactoring operations are applied to these patterns for repairing
the ontologies. Finally, these repaired ontologies are used as inputs to the matching process. This process is evaluated with ASMOV [33]. Experimental results indicate that better outcomes can be achieved by applying the pre-processing phase than with the original ontologies. In the following, the details of this work are elaborated.
A. First step: Inconsistency Detection Phase
In this step, some inconsistencies were detected based on our preliminary analysis of many ontologies. For detecting lexical inconsistencies, the names of entities, especially classes, in OWL ontologies are analyzed. The lexical feature consists of all human-readable information in the ontology. Various ontologies use different methods for defining the names of homogeneous concepts, especially for compound words. In OWL ontologies, different styles in concept naming lead to many obstacles in calculating lexical similarities in matchers.
In ASMOV, three lexical concepts in OWL ontologies are considered: id, label, and comment. ASMOV uses the Lin method [34] for calculating the lexical similarity. As an instance, in two ontologies of the conference track, namely Conference and Ekaw, two different class namings for similar concepts are discovered, <Conference#conference-www> ~ <Ekaw#website> and <Conference#conference-contribution> ~ <Ekaw#rejected-paper>, neither of which could be found by the lexical similarity phase of ASMOV. Therefore, to solve these kinds of problems, some lexical patterns and inconsistencies are detected based on naming ontology design patterns [35] for the purpose of unifying these different naming styles. To accomplish this, we use one refactoring operation called the renaming operation (RN), which is described in the next section. By doing this, the calculation of lexical similarity in matchers, which is done by different methods, can perform better than before. Thus, better results can be obtained from the matching process.
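To illustrate the idea, the following is a minimal sketch of a renaming normalization using the Jena API, which is the implementation platform named in Section IV-D. The normalization rule, namespace, and input file name are illustrative assumptions, not the exact operation of [21].

\begin{verbatim}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.util.ResourceUtils;

public class RenamingOperation {

    // Unify naming styles: split camelCase, "-" and "_",
    // then rejoin the lower-cased tokens with "_".
    static String normalize(String localName) {
        String spaced = localName.replaceAll("([a-z])([A-Z])", "$1 $2");
        return Arrays.stream(spaced.split("[-_\\s]+"))
                     .map(String::toLowerCase)
                     .collect(Collectors.joining("_"));
    }

    // Rename every named class whose local name deviates
    // from the canonical style.
    static void applyRenaming(OntModel model, String ns) {
        List<OntClass> classes = model.listClasses().toList();
        for (OntClass c : classes) {
            if (c.isAnon()) continue;  // skip anonymous class expressions
            String canonical = normalize(c.getLocalName());
            if (!canonical.equals(c.getLocalName())) {
                ResourceUtils.renameResource(c, ns + canonical);
            }
        }
    }

    public static void main(String[] args) {
        OntModel m = ModelFactory.createOntologyModel();
        m.read("conference.owl");  // hypothetical input ontology
        applyRenaming(m, "http://example.org/conference#");
        m.write(System.out, "RDF/XML");
    }
}
\end{verbatim}

With such a rule, "conference-www" and "rejected_paper" are mapped to one canonical form, so a string-based lexical similarity no longer depends on the naming style of each developer.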
Structural patterns are based on the fact that the taxonomic structures of ontologies are often varied and confusing. One reason for this is that different developers have dissimilar viewpoints when developing ontologies. Therefore, they utilize different hierarchies and granularities for defining the entities of ontologies in the same domain. For example, in two ontologies of the conference track, namely Conference and Ekaw, we realized that there are two different granularities in concept naming for the similar concept "author". In Conference, three levels of granularity for "author" are found: contribution_regular-author, contribution_co-author, and Conference_1th-author. However, in Ekaw, there is only one level of granularity for author, namely Paper_author. Furthermore, many problems in calculating the relational similarity by some matchers have been recognized. The relational or hierarchical similarity phase in most matchers is computed by combining the similarities between the parents and children of the entities being compared. Considering the problems mentioned above and the way matchers operate, we realized that different taxonomic structures and different granularities in peer ontologies cause many problems in the matching process. To solve this problem, another refactoring operation, called the restructuring operation (RS), is employed for assimilating the structural features of OWL ontologies. Our results show that, in most ontologies, there is a significant number of occurrences of the aforementioned patterns.
B. Second step: Refactoring phase
In this phase, the patterns and inconsistencies detected in the previous phase are repaired by refactoring operations. All cases of the modeling errors detected via the patterns mentioned earlier can be repaired by two refactoring operations. The detection of these patterns is the starting point for a refactoring. Generally, refactoring is a process of performing changes in the internal structure of software in order to make it easier to understand and modify without changing its discernible behavior. In this work, the refactoring process is applied to the ontology matching field. Thus, some changes are made to the ontologies by a semi-automatic process. By doing this, new and more understandable versions of the ontologies, for both users and matchers, are produced. These versions of the ontologies can be utilized more effectively by different ontology matching tools.
There are three general refactoring operations: the adding operation (ADD), the restructuring operation (RS), and the renaming operation (RN). These operations consist of different steps depending on the detected situation [21]. In this paper, RN and RS are used for lexical patterns and structural patterns, respectively. More desirable results in the lexical similarity of matchers can be obtained by applying the rename operation to the names of the classes. The rename operations are performed by considering the names of the classes in the ontology that have the same taxonomic structures in the peer ontology. Furthermore, by considering the parent-child relations and the various granularities used in peer ontologies, restructuring operations are applied to assimilate the structural features of the OWL ontologies. Experimental results show that better results can be achieved in the structural similarity phase of matchers by transforming a part of one ontology into another. We carry out our experiments on seven pairs of ontologies from the conference track. The reason for choosing these seven pairs among other ontologies is described in the next section.
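As a sketch of one possible RS step, the following Jena-based snippet collapses an intermediate taxonomy level so that both peer ontologies expose the same granularity (e.g., reducing the three-level "author" hierarchy above to one level). The class URIs are hypothetical placeholders, and a real RS operation may involve further steps depending on the detected situation.

\begin{verbatim}
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class RestructuringOperation {

    // Collapse one taxonomy level: re-attach all direct
    // subclasses of `mid` to `top` and detach `mid` itself.
    static void collapseLevel(OntModel model, String topUri, String midUri) {
        OntClass top = model.getOntClass(topUri);
        OntClass mid = model.getOntClass(midUri);
        for (OntClass sub : mid.listSubClasses(true).toList()) {
            sub.addSuperClass(top);     // move one level up
            sub.removeSuperClass(mid);
        }
        mid.removeSuperClass(top);      // drop the intermediate level
    }

    public static void main(String[] args) {
        OntModel m = ModelFactory.createOntologyModel();
        m.read("conference.owl");       // hypothetical input ontology
        collapseLevel(m, "http://example.org/conference#contribution",
                         "http://example.org/conference#author");
        m.write(System.out, "RDF/XML");
    }
}
\end{verbatim}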
The numbers of RN and RS operations applied to these seven pairs of ontologies are as follows. In four pairs of ontologies, <Cmt-ConfOf>, <Cmt-Ekaw>, <Conference-Ekaw>, and <Edas-Ekaw>, RN operations are applied more often than RS operations, because of the many different lexical patterns found in these pairs. In the other ontology pairs, <Cmt-Sigkdd>, <Conference-ConfOf>, and <ConfOf-Sigkdd>, RS operations are utilized more often than RN operations, because these pairs of ontologies have different hierarchical structures and RS operations are used for assimilating the taxonomies.
C. Data set
Some ontologies from the OAEI are selected to evaluate this work. The OAEI offers several tracks and subtracks concentrating on different types of matching problems. Our approach was tested on the Conference track [36]. Its ontologies are described in OWL-DL and published in the RDF/XML format [37]. This data set is well known to the organizers and has been used in many ontology matching evaluations. The Conference data set can be viewed as a much more challenging test case than other OAEI ontologies, such as the Benchmark data set [38], [39]. Our experiment was carried out on six out of the sixteen ontologies of the Conference track: cmt, confOf, ekaw, conference, edas, and sigkdd. The reason for selecting these six ontologies among the others is that a reference mapping (also referred to as the gold standard) is available for all possible combinations of them. To evaluate the accuracy of the matching process, it is necessary to determine both the number of correctly found correspondences and the number of incorrectly found correspondences.
D. Implementation
Our implementation is based on the Java language with the Jena API in the NetBeans IDE. Furthermore, Protégé and the Ontology Pre-Processor Language (OPPL) were used for manipulating ontologies written in OWL. OPPL is a domain-specific language based on the Manchester OWL Syntax. OPPL instructions can add or remove entities and add or remove axioms of entities in an OWL ontology. The OPPL Instruction Manager is a Java library that processes OPPL instructions to make changes in an OWL ontology. This language is also suitable for defining independent modeling macros that can be applied across ontologies [40].
E. Practical Example
This section presents a practical example to clarify the proposed approach by testing the work with the ASMOV matcher. Figure 2 illustrates different styles in class naming and various taxonomic structures for defining the same concepts in a part of two ontologies, namely ConfOf and Sigkdd.
The inconsistencies located in the two peer ontologies are detected with OPPL. After that, the ontologies are manipulated by applying the refactoring rules to each one. Then the assimilated ontologies are delivered to the ASMOV matcher as inputs. The alignment results are shown in Figure 3.
V. EVALUATION OF THE MATCHING RESULTS
For evaluating the matching results, three standard measures are used: precision, recall, and F-measure. Precision is defined as the number of correctly found correspondences divided by the total number of found correspondences. Recall is the number of correctly found correspondences divided by the number of correspondences in the reference alignment. A perfect precision score of 1.0 means that every correspondence computed by the algorithm was correct (correctness), whereas a perfect recall score of 1.0 means that all correct correspondences were found (completeness).
Precision and recall are defined in (1) and (2) [42].
\[
\text{Precision} = \frac{\#\,\text{correctly found matches}}{\#\,\text{all found matches}} \quad (1)
\]
\[
\text{Recall} = \frac{\#\,\text{correctly found matches}}{\#\,\text{matches in the reference alignment}} \quad (2)
\]
F-measure represents a trade-off between precision and recall and it is calculated as (3).
\[
F = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad (3)
\]
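As an illustration, the following minimal sketch (in the Java setting of Section IV-D) computes the three measures for a toy alignment; the correspondence strings are placeholders rather than real conference-track mappings.

\begin{verbatim}
import java.util.HashSet;
import java.util.Set;

public class AlignmentMetrics {

    // Returns {precision, recall, F-measure} for a computed
    // alignment against the reference alignment.
    static double[] evaluate(Set<String> found, Set<String> reference) {
        Set<String> correct = new HashSet<>(found);
        correct.retainAll(reference);   // correctly found matches
        double p = found.isEmpty() ? 0.0
                 : (double) correct.size() / found.size();
        double r = reference.isEmpty() ? 0.0
                 : (double) correct.size() / reference.size();
        double f = (p + r) == 0.0 ? 0.0 : 2 * p * r / (p + r);
        return new double[] { p, r, f };
    }

    public static void main(String[] args) {
        // Toy alignment: 3 found, 2 of them correct, 4 in the reference.
        Set<String> found = Set.of("a=b", "c=d", "e=f");
        Set<String> ref   = Set.of("a=b", "c=d", "g=h", "i=j");
        double[] m = evaluate(found, ref);  // p = 2/3, r = 1/2, F = 4/7
        System.out.printf("P=%.3f R=%.3f F=%.3f%n", m[0], m[1], m[2]);
    }
}
\end{verbatim}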
Experiments were performed on seven pairs of ontologies from the conference track. The alignments were generated automatically by ASMOV for these pairs of ontologies before and after applying the proposed approach. The results are illustrated in Figures 4, 5, and 6. They show that the transformed ontologies improve the matching results with respect to the standard evaluation measures, i.e., precision, recall, and F-measure.
VI. CONCLUSION
In this paper, a pre-processing phase for ontology matchers was proposed in which lexical and structural inconsistency patterns are detected and repaired by renaming and restructuring operations. The experiments with ASMOV on the conference track indicate that the repaired ontologies improve the results of the matching process with respect to standard evaluation measurements, i.e., precision, recall, and F-measure.
For future research, new solutions can be proposed for overcoming uncertainty and other challenges in the field of ontology matching. Furthermore, our approach can be tested on other matching tools, especially those participating in the OAEI contest. Moreover, the set of detectable patterns for discovering errors in ontologies, as well as the refactoring operations for repairing them, can be extended.
ACKNOWLEDGEMENT
This research was partially supported by Ferdowsi University of Mashhad under grant number 2/39961, for which the authors would like to extend their thanks.
REFERENCES
Bahareh Behkamal received her B.S. degree in Computer Engineering from Azad University of Mashhad, Iran in 2005 and her M.S. degree in Computer Engineering from PNU University of Tehran, Iran in 2011. She is currently a Ph.D. candidate in Computer Engineering at Ferdowsi University of Mashhad, Iran. Her past research interests include the Semantic Web and ontology matching. She has published papers in international journals and conferences. She is currently researching in the area of structural bioinformatics.
Mahmoud Naghibzadeh received his M.Sc. and Ph.D. degrees in Computer Science and Computer Engineering, respectively, from the University of Southern California (USC), USA. He is now a full professor at the Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran. He is the director of the Knowledge Engineering Research Group (KERG) laboratory. His research interests include the scheduling aspects of real-time systems, grids, clouds, multiprocessors, multicores, and GPGPUs. He is also interested in bioinformatics algorithms, especially protein tertiary structures and protein-protein interactions.
This is the accepted version of a paper presented at the 2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC 2018).
Citation for the original published paper:
A Probabilistic Enterprise Architecture Model Evolution
In: (pp. 51-57). IEEE
https://doi.org/10.1109/EDOC.2018.00017
N.B. When citing this work, cite the original published paper.
Permanent link to this version:
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255632
A Probabilistic Enterprise Architecture Model Evolution
Simon Hacks and Horst Lichter
Research Group Software Construction
RWTH Aachen University
Aachen, Germany
{hacks, lichter}@swc.rwth-aachen.de
Abstract—Enterprise Architecture (EA) is a widely accepted means to ease the alignment of IS projects with enterprise-wide objectives. One central artifact of EA is the EA model, which provides a holistic view on the organization and supports EA's stakeholders in creating added value. As EA collects its data from different sources, the data can be contradictory. This work contributes to existing research by proposing a novel approach to deal with contradictory data without resolving the conflicts caused thereby.
In order to achieve this objective, we refine the Predictive, Probabilistic Architecture Modeling Framework (P²AMF) introduced by Johnson et al., which already incorporates a way to represent uncertainty regarding the existence of modelled entities. To make our technique usable, we generalize P²AMF from its UML/OCL notation to a graph representation in order to apply it to EA models notated in arbitrary notations like ArchiMate. Furthermore, we add alternative scenarios in different versions along a time series to meet the requirements of a distributed EA evolution. To show the applicability of our approach, we developed a proof-of-concept prototype by implementing the proposed calculations and guidelines on a Neo4j graph database. Last, we argue that our approach meets the stated requirements of a distributed EA evolution.
Index Terms—Enterprise Architecture Management; Probabilistic Modelling; Quality Management
1. Introduction
Enterprise architecture (EA) is a widely accepted discipline to guide local information system (IS) endeavors through a holistic view on the fundamental structures, design, and evolution principles of the overall organization [1]. EA eases the alignment of IS projects with enterprise-wide objectives, which leads to reduced complexities as well as integration efforts in the overall corporate IS landscape [2], [3].
Since its beginnings in the 1980s [4], EA has developed into an established discipline in industry and research [5], [6]. A widely accepted definition of the term architecture [7] is given in ISO 42010:2011 [8], which defines architecture as the "fundamental concepts or properties of a system in its environment embodied in its elements, relationships, and in the principles of its design and evolution".
As this definition implies, the EA model, comprising the elements and relationships of the organization, is one central artifact of EA. It provides a holistic view on the organization and, therefore, supports the EA's stakeholders in creating added value [9]. Consequently, EA has to provide central, important, and up-to-date information to its stakeholders. For this purpose, EA needs to collect data from several sources, which may be contradictory.
There are two different origins of contradictory EA data input. First, there are technical sources like network scanners or databases, e.g., the Configuration Management Database (CMDB). Those sources, mostly contained in the technology layer [10], [11], may deliver data at different levels of detail or even outdated data. Second, data provided by human EA suppliers may cause conflicts, because there are different ways to model the same aspect or the suppliers rely on wrong or outdated data.
To handle such contradictions, so far two different kinds of approaches are proposed: The first approach tries to resolve the contradictions, e.g., by estimating the trustworthiness of the sources [10], [11]. The second approach tries to prevent contradictions before they emerge, for example, by providing a holistic framework to assess the quality of EA models [12].
Both approaches have in common that they try to determine one single truth for the model. However, in some cases it might be useful to keep the contradictory information. For example, there may be two contradictory pieces of information from different sources, and it is not possible to estimate which one is true. Therefore, we want to keep both and do not want to discard either. Another example could be that both pieces of information describe alternative future EA scenarios and we do not know which one will become true.
Consequently, we formulate our research question:
RQ: How can evolutionary EA scenarios provided with uncertainty information be represented in an EA model?
This research question encloses four facets:
1) The EA model needs to provide uncertainty regarding the existence of any of its entities.
2) The EA model needs to reflect different evolutionary scenarios along a certain time-span, because projects may deliver competing scenarios for different points in time.
3) As interaction with the EA model is needed, e.g., to integrate new scenarios, the proposed approach should define process guidelines, by means of a set of rules, to add and remove scenarios to an EA model as well as how to handle different versions along a certain time series.
4) A version of the EA model without uncertainty is needed, e.g., because management requires an EA model with fewer details or the EA tool used cannot represent uncertainties.
In order to develop means to reflect uncertainty in EA model evolution, we opt for Design Science Research (DSR) in accordance with Peffers et al. [13]. Therefore, we generalize P²AMF [14] to a simple graph representation and add capabilities to represent uncertainty regarding future states of the model. To discuss our work, we create a proof-of-concept prototype using a graph database containing a small EA model, transform our procedures to Cypher [15] to calculate different EA model states, and, finally, apply Cypher queries to the EA model.
This article is structured as follows: Next, we introduce the DSR methodology and how we applied its activities. In Section 3, we first present P²AMF and discuss its shortcomings related to our research question. Second, we introduce our solution by describing its formal background and illustrate it with a small example. Third, we demonstrate our solution by implementing the previously introduced example using a graph database. Afterwards, we discuss our work by comparing our research objectives with the achieved results. Before we conclude the presented results and sketch future work, we discuss related work elaborating on uncertainty in models.
2. Methodology
DSR offers a systematic structure for developing artifacts, such as models or methods [16]. As our research question indicates the development of a constructive solution, the application of DSR is appropriate. We stick to the approach of Peffers et al. [13], which consists of six activities (A1..A6) and two possible feedback loops. We applied them as follows.
A1 Problem Identification & Motivation: The problem to be solved emerged in the context of a research project which aims to develop a solution to keep the central EA model and peripheral models created by projects in synchronization. As projects may deliver contradictory information and it is not decided which information is true, the contradictory information needs to be kept.
A2 Objectives of our solution: From the identified problem, we derived the objectives for a solution, manifested in the stated research question and its facets, detailed in the introduction.
A3 Design & Development: To develop the proposed constructive solution, we applied an engineering research approach according to Lázaro and Marcos [17]. Building on and adapting the findings of Johnson et al. [14], [18], we added a time-related facet to their research.
A4 Demonstration: To demonstrate the developed solution, we performed a single case study to gain a first, in-depth reflection on our solution in a real life scenario [19]. Moreover, single case studies are a feasible instrument to show the applicability of software engineering artifacts [20].
A5 Evaluation: We discuss our results in 3.3 by comparing them to the objectives stated in Section 1.
The last activity (A6 Communication) and the feedback loops are not reflected directly in this paper. The feedback loops from Evaluation/Communication to Objectives/Design & Development will influence future research elaborating on this topic.
3. Solution Design
Based on the stated research question, we propose a constructive solution to represent uncertainty in EA models. This solution is embedded in a broader EA process already presented in [21]. The overall aim is to create a roundtrip between the central EA model and peripheral models developed by projects. Hence, the results presented in this paper contribute to a distributed evolution approach for EA models, to be more concrete, to assure the EA model data’s quality.
3.1. An Architecture Modeling Framework for Probabilistic Prediction
In the following, we present concepts to represent a probabilistic EA for solving quality issues. Our solution is based on P²AMF, a framework to model uncertainty in class and object diagrams proposed by Johnson et al. [14]. To this end, they build on OCL and add attributes storing existence probability values for objects and relations. Moreover, object attributes can be stochastic. However, we are only interested in objects and the relations among them.
The following example illustrates P²AMF. Assume there are two classes (Service and Cloud) connected by an association. As a service might be down or we are not absolutely certain about its presence, its existence is modeled with an uncertainty expressed by a Bernoulli distribution with a probability of 0.98. In other words, a service has a probability of 98% to be existent and 2% to be not existent. Each service is provided by a cloud. As we have no secure knowledge regarding the relation between the service and its respective cloud, the association between Service and Cloud is also fraught with uncertainty. This may lead to a situation in which the defined cardinalities get violated. For more details on this issue and how to solve it, we refer to Johnson et al. [14], especially Section 5.
3.2. A Probabilistic Enterprise Architecture
P²AMF provides a framework to represent uncertainty in class and object diagrams. However, if we apply this approach to solve our research problem, two shortcomings occur: First, it is restricted to EAs modelled with UML class and object diagrams. Consequently, dedicated EA modeling languages like ArchiMate [22] are not supported. Therefore, a more general representation is needed. Second, the special needs of decentralized projects contributing to a distributed EA evolution are not sufficiently reflected either. Those needs can be satisfied by applying Dynamic Bayesian Networks [23], similar to the idea of Johnson et al. [18].
To get a general representation of EA models, we describe an EA model as a pair of a set of nodes and a set of relations: \( \text{EA} = (N, R) \). A node, \( n \in N \), represents an architectural element of the EA like a business process, an application, or a server. A relation, \( r \), is a tuple of two architectural elements, which are linked somehow to each other:
\[
r \in R \subseteq \{(u, v) : u, v \in N\}. \tag{1}
\]
To model uncertainty, we introduce a probabilistic existence function
\[
p : N \cup R \rightarrow \{x \in \mathbb{R} | 0 \leq x \leq 1\} \tag{2}
\]
annotated as \( p(e) \) which returns the probability of a node or a relation to be existent. Those probabilities can be determined in different ways. First, the value could be calculated similar to the approach presented in Johnson et al. [18]: The more often a certain element appears within a reported data set, the higher is its probability to be existent. Second, experts could estimate these probabilities.
Unfortunately, calculating the probability that a certain scenario will be realized falls short, because it needs a large amount of data, typically produced in an automated way. This is not the case in the EA domain for the following reasons: First, project architects model a future state which cannot be captured automatically. Second, project architects would model only the changes they plan to carry out; consequently, changes would appear seldom. Third, all projects rely on the same data basis. If a project removes an element, other projects will not, because they do not necessarily know that this element does not exist anymore. Hence, the model would never change. Therefore, calculating the probabilities is not applicable in our case.
Expert estimation seems to be a more promising approach to determine existence probabilities of architecture elements. Obviously, estimation is biased and not perfect, but research has shown that even more formal approaches are not necessarily better [24]. Moreover, the estimation effort is low compared to the suggested calculations. Therefore, we opt for expert estimations.
Project architects or project managers are possible experts to estimate these probabilities. There are two different levels of granularity conceivable to estimate probabilities: First, experts can estimate the probability for each node and relation individually. Second, experts can estimate the probability for a complete scenario consisting of many nodes and relations to become existent.
Apart from the question who should estimate, it is also important to answer the question when the estimations should be conducted. Obviously, there is the initial estimation at the beginning of a project when the experts know about the different possible scenarios. This estimation should be updated every time the probabilities of the scenarios change fundamentally, e.g., because one of the scenarios got discarded.
As estimating each node and relation individually is too fine-grained, we consider estimating the probability of complete scenarios more feasible. Let \( S \) be the set of all scenarios. We annotate a scenario \( s \in S \) as a quadruple \( s = (N^+_s, R^+_s, N^-_s, R^-_s) \), where \( + \) marks nodes and relations that are added and \( - \) marks nodes and relations that are removed, with \( N^-_s \subseteq N \) and \( R^-_s \subseteq R \).
Additionally, we extend the definition of \( p \) so that it also returns the probability value of a scenario to be existent:
\[
p : N \cup R \cup S \rightarrow \{x \in \mathbb{R} | 0 \leq x \leq 1\}. \tag{3}
\]
Furthermore, it might occur that different projects deliver their results for the same point in time. Therefore, it is necessary to know which scenarios are competitive with each other. Thus, we group all competitive scenarios into one set \( S_C \subseteq S \), where the sum of all scenario probabilities has to be 1:
\[
\sum_{s \in S_C} p(s) = 1. \tag{4}
\]
In other words, we know every competitive scenario which delivers its results to a certain point in time.
The following short example clarifies this concept. We assume an EA model simplified to nodes and relations as depicted in Figure 1(a). Additionally, there is a project contributing to EA’s evolution by proposing two competitive scenarios (cf. Figure 1(b) and 1(c)). Either scenario 1 or 2 will be implemented. An expert estimates the realization of scenario 1 with \( p = 0.6 \) and scenario 2 with \( p = 0.4 \). Within both scenarios the node \( M \) is added as well as the relation between \( M \) and \( D \). Furthermore, a relation between \( D \) and \( F \) is added in both scenarios. Both scenarios differ in handling the node \( K \). Scenario 1 replaces this node by node \( L \) and additionally links \( L \) to node \( C \). In contrast, scenario 2 keeps node \( K \) and adds a relation to node \( C \) like in scenario 1.

To merge an origin EA model with all competing scenarios (leading to \( \text{EA}' \)), the existence probability for each node and each relation has to be calculated. For added nodes and relations, we simply sum up the estimated probabilities.
along all scenarios \( s \in S \). If nodes or relations are removed, we have to subtract the estimated probabilities from 1. If a node or relation is unchanged, it keeps its initial probability. This leads to the following equation to calculate the probability for a node \( n \in N' = N \cup \bigcup_{s \in S} N^+_s \) of EA':
\[
p(n) = \begin{cases}
\sum_{s \in S} \text{getP}(n, N^+_s) & n \in \bigcup_{s \in S} N^+_s, \\
1 - \sum_{s \in S} \text{getP}(n, N^-_s) & n \in \bigcup_{s \in S} N^-_s, \\
p(n) & \text{else,}
\end{cases} \tag{5}
\]
with
\[
\text{getP}(n, N_o) = \begin{cases}
p(n) & n \in N_o, \\
0 & \text{else,}
\end{cases} \qquad N_o : \text{a set of nodes.} \tag{6}
\]
The equation for a relation \( r \) of EA' looks similar. To explain and clarify these equations, we merge the origin EA model with scenarios 1 and 2 from Figure 1. The result is depicted in Figure 2(a). The solid lines represent nodes and relations with \( p = 1.0 \), the dashed lines those with \( p = 0.6 \), and the dotted lines those with \( p = 0.4 \). Two observations can be made. First, nodes and relations occurring in both scenarios get a value of 1. Second, node \( K \) and its relation to \( E \) have a probability value of 0.4, because both are removed in scenario 1 and, therefore, the probability value has to be inverted.
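A minimal sketch of this merge step follows, in the Java setting of the later prototype; the record type and node identifiers are illustrative assumptions. As in equation (5), nodes of the origin model that a scenario removes are assumed to start with \( p = 1.0 \).

\begin{verbatim}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ScenarioMerge {

    record Scenario(double p, Set<String> added, Set<String> removed) {}

    // Merges the origin EA model with all competitive scenarios
    // following equation (5): added nodes sum up the probabilities
    // of their adding scenarios, removed nodes end at 1 minus the
    // probabilities of the removing ones, all others stay unchanged.
    static Map<String, Double> merge(Map<String, Double> origin,
                                     List<Scenario> scenarios) {
        Map<String, Double> merged = new HashMap<>(origin);
        for (Scenario s : scenarios) {
            for (String n : s.added())
                merged.merge(n, s.p(), Double::sum);
            for (String n : s.removed())     // subtract from the initial 1.0
                merged.merge(n, -s.p(), Double::sum);
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Double> origin = new HashMap<>(
            Map.of("C", 1.0, "D", 1.0, "E", 1.0, "F", 1.0, "K", 1.0));
        List<Scenario> scenarios = List.of(
            new Scenario(0.6, Set.of("M", "L"), Set.of("K")),  // scenario 1
            new Scenario(0.4, Set.of("M"), Set.of()));         // scenario 2
        // Yields M = 1.0, L = 0.6, K = 0.4; all other nodes unchanged.
        System.out.println(merge(origin, scenarios));
    }
}
\end{verbatim}

Running the sketch on the example of Figure 1 reproduces the values discussed above: \( M \) reaches 1.0 because both scenarios add it, \( L \) gets 0.6, and \( K \) drops to 0.4.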

So far, our solution to represent probabilities does not take the time dimension into account. However, this is necessary, since projects usually do not deliver their results synchronously. Therefore, we introduce points in time, \( t_i \in T \), and link each point with the currently valid EA model.
Figure 3 shows an example. The EA model \( M \) contains only entities with \( p = 1.0 \) and, thus, is considered to have no uncertainty. Starting with \( M \) as a baseline at \( t_0 \), we introduce project A delivering two competitive scenarios \( A_1 \) and \( A_2 \) at \( t_1 \) and project B delivering one non-competitive scenario \( B_1 \) at \( t_2 \). To keep track of the changes of the competitive scenarios, we trace \( A_1 \) and \( A_2 \), until we know which scenario will be realized. At this point in time, we integrate the chosen scenario into the current EA model and discard the others. E.g., at \( t_3 \) we decide to realize scenario \( A_2 \), thus, we incorporate it into \( M \) leading to \( M' \). \( A_1 \) is discarded and \( B_1 \) remains unchanged as it is not competitive to the other scenarios.

If an EA model representation without probabilities is needed, a report can be generated containing the most likely EA model. This report can be based either on elements with a probability value greater than a threshold or on the most likely scenario of each project. Assuming a threshold of \( p \geq 0.6 \), the EA model presented in Figure 2(b) is created, consisting of the origin EA model and the changes introduced by scenario 1. If we choose to create the EA model based on the most likely scenario, the result will be the same.
3.3. Demonstration
To demonstrate the proposed approach, we implemented the aforementioned examples in the graph database Neo4j\(^1\). Additionally, we created the needed reports utilizing Cypher [15] as query language on the database.
Representing the origin EA model depicted in Figure 1(a) in a graph database was straightforward. We simply added all nodes and edges and initialized their existence property with 1.0. To represent points in time, we added two kinds of special nodes. The first node expresses the time value itself (e.g., \( t_0 \) in Figure 3). The second node, linked to the introduced time node via an AT-relation, represents the current EA model (cf. \( M \) in Figure 3). As we also wanted to relate edges to the current EA model, we introduced an additional node for each edge, because it is not possible to create an edge between an edge and a node. Each special "edge-node" inherits the existence property from its respective edge.
Based on this initial EA model representation, we introduced the two competitive scenarios from Figures 1(b) and 1(c), named \( A_1 \) and \( A_2 \) in Figure 3. First, we added a time node representing \( t = 1 \) (cf. \( t_1 \) in Figure 3) and linked it to the already present time node (\( t = 0 \)). Second, we copied the entire EA model related to the current EA node (i.e., \( M \) in Figure 3) and linked it to the time node \( t = 1 \). Third, we added two nodes representing the two scenarios with the existence property equal to 0.6 and 0.4, respectively. This is necessary to allow for a rollback if a scenario gets dismissed, since we store just the computed existence value at each node and not how it was computed.
Last, we added the edges and nodes introduced by each scenario, with their probabilities, to the database and linked them
1. https://neo4j.com
to the related scenario node via the \textit{BELONGS}-relation, recording at the relation that they were added, to ease a possible rollback. If a scenario retires a node or a relation, we remove it from \textit{M} and relate it to the appropriate scenario with a note that the entity should be retired. Furthermore, we reduce the \textit{existence} property by the probability of the scenario to become existent.
Adding further projects and their scenarios to the database works the same way. More interesting is the rollback of a scenario, because the complementary scenario has to be realized. First, we remove all entities related to the scenario which have been added. Second, we move the entities, which should have been retired, back to the current EA model and restore their \textit{existence} property. Third, we do the opposite with the realized scenario: move the added entities to the current EA model, set their \textit{existence} property to 1.0, and remove the retired entities.
So far, we have presented a concept to enhance an EA model with uncertainty and to maintain such a model over time. But how can a report be generated that represents the EA model in a "classical" way, e.g., to import the model into existing EA tools? This can be achieved by evaluating a Cypher query [15], shown in Listing 1, which generates a report incorporating either no uncertainty or only entities above a certain threshold.
Listing 1. Querying for EA entities at \( t = 1 \) with \textit{existence} \( \geq 0.5 \)
\begin{verbatim}
1 MATCH
2 ( :TIME {time : 1})
3 -[:AT]-
4 ()
5 -[:BELONGS]-
6 (n)
7 WHERE n.existence >= 0.5
8 RETURN (n);
\end{verbatim}
To ensure that we only get EA entities to a specified point in time, the query asks for nodes labelled with \textit{TIME} and a property \textit{time} equal to 1 (cf. line 2). From this start point, we follow the relation \textit{AT} (cf. line 3) to all nodes representing the current EA model and all scenarios (cf. line 4). From these nodes, we follow the \textit{BELONGS} relation (cf. line 5) and end up at all entities of the EA model at the chosen point in time and store them into a variable \( n \) (cf. line 6).
Now, we apply a filter to ensure that we select only the entities we are interested in, i.e., entities with an \textit{existence} probability greater than or equal to the defined threshold of 0.5 (cf. line 7). Last, we return the collected and filtered entities (cf. line 8).
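For illustration, the following minimal sketch issues this query from Java through the official Neo4j driver; the connection URI and credentials are placeholders, and the parameterized threshold is a small generalization of Listing 1.

\begin{verbatim}
import java.util.Map;

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;

public class ReportQuery {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                 AuthTokens.basic("neo4j", "password"));  // placeholders
             Session session = driver.session()) {
            Map<String, Object> params = Map.of("threshold", 0.5);
            Result result = session.run(
                "MATCH (:TIME {time: 1})-[:AT]-()-[:BELONGS]-(n) "
                + "WHERE n.existence >= $threshold RETURN n",
                params);
            // Print the properties of every entity above the threshold.
            result.forEachRemaining(rec ->
                System.out.println(rec.get("n").asMap()));
        }
    }
}
\end{verbatim}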
Another, more sophisticated procedure to create such a report is to rely not on the \textit{existence} probability of each entity but on the most probable scenario (see Listing 2). The query is comprised of three parts. The first part (cf. lines 1 to 6) retrieves the current EA model at time point 1. Line 7 passes the results of the first part to the second part (cf. lines 8 to 11), which collects all scenarios at time point 1. Next, the maximum of all \textit{existence} values along all scenarios is calculated. This value, together with the current EA model, is handed over to the last part in line 12. Last, all entities related to the most probable scenario (cf. lines 13 to 18) are collected and returned together with the current EA model (cf. line 19).
Listing 2. Querying for EA entities at \( t = 1 \) with most probable scenario
\begin{verbatim}
1 MATCH
2 ( :TIME {time : 1})
3 -[:AT]-
4 ( :CURRENT)
5 -[:BELONGS]-
6 (c)
7 WITH c
8 MATCH
9 ( :TIME {time : 1})
10 -[:AT]-
11 (s : SCENARIO)
12 WITH c, max(s.existence) as max
13 MATCH
14 ( :TIME {time : 1})
15 -[:AT]-
16 ( :SCENARIO {existence: max})
17 -[:BELONGS]-
18 (e)
19 RETURN (c), (e);
\end{verbatim}
4. Discussion
According to Shaw [20], examples are a proper technique to discuss artifacts produced in software engineering. Consequently, we discuss to what extent our proposed approach answers the stated research question, represented by its four facets, compared to \( \text{P}^2 \text{AMF} \).
The first facet of our research question covers the need to represent uncertainty regarding the entities within the EA model itself. In \( \text{P}^2 \text{AMF} \) this is considered by adding an existence property to the model elements and assigning it a probability. However, an instantiation of a \( \text{P}^2 \text{AMF} \) model no longer contains uncertainty regarding existence. Similarly, we have added an existence property to edges and nodes, but we still have uncertainty in our EA model instances.
The second facet expresses the need for a representation of the evolution of the EA model over time. This is not present in \( \text{P}^2 \text{AMF} \), since one model represents only one state; however, the probability may be used to express the behavior of a certain entity along a time series. In contrast, our approach links different evolution alternatives of an EA model along the time line to each other. Therefore, we create an actual representation of the EA model at each point in time where it is altered.
The third facet requires the ability to manipulate the model. It covers, on the one hand, adding and removing scenarios to and from an EA model and, on the other hand, the handling of different versions along a certain time series. This requirement is not explicitly covered by \( \text{P}^2 \text{AMF} \), since it contains neither competing scenarios nor different versions. However, there is no need to cover this requirement in P²AMF, because there are no dependencies among the included entities which would raise the need for such a rule set. In contrast, we described above how to fulfill this requirement both in theory and in practice.
The fourth facet incorporates the necessity to create a report without uncertainty. As the instantiation of P²AMF does not contain any uncertainty regarding the existence of its entities, this instantiation can be utilized as the required report. In contrast, we suggest two different ways to extract a representation of the EA model at an arbitrary point in time. This is more sophisticated, but also more flexible with regard to the stakeholders' needs.
5. Related Work
There is ongoing research elaborating on uncertainty in models. A first approach is to deny uncertainty and to merge different variants of one model. This problem has been researched for some years in the context of model-driven software development. For instance, Välja et al. [10], [11] try to estimate the trustworthiness of the sources by employing techniques from the human-computer interaction [28] and data fusion [29] domains. They build their means upon a model of information processing automation, comprised of the phases of data acquisition, data analysis, decision and action selection, and implementation.
Another concept to handle contradictions is to prevent them before they emerge. In the case of a human source, one can try to identify possible weaknesses in the model and inform the modeler about them. The authors of [12] develop a holistic framework to assess the quality of EA models. Their methodology can guide enterprise architects in their work and, thus, avert contradictions.
In contrast, Johnson et al. [18] do not neglect uncertainty but represent it in models. They interpret the modelling of EAs as a probabilistic state estimation problem [30]. To solve this problem, they propose to use Dynamic Bayesian Networks [23]: they observe a computer network and use a Dynamic Bayesian Network to predict the likeliest representation of the EA's technology layer.
As the approaches discussed above concentrate on representing the as-is state of a model, Johnson et al. [14] elaborate on uncertainty in a predictive manner by introducing the Predictive, Probabilistic Architecture Modeling Framework (P²AMF). P²AMF is based on the Object Constraint Language (OCL) [31] and introduces uncertainty into class and object diagrams in two ways. First, attributes may be stochastic. Second, objects and the relations among them may be uncertain. To realize the second aspect, a stochastic existence attribute for classes and relations becomes mandatory.
6. Conclusion
EA models are a central artifact of EA, providing a holistic view on the organization and easing the value creation for EA's stakeholders. As EA collects its data from different sources, the gathered data can be contradictory. This work contributes to existing research by proposing a novel approach which does not try to resolve the conflicts but keeps the contradictory data.
In order to achieve this objective, we have refined P²AMF [14], which already incorporates a way to represent uncertainty regarding the existence of modelled entities. To ease the use of our technique, we generalized P²AMF from its UML/OCL notation to a graph representation; therefore, it can also be applied to EA models notated in arbitrary formats like ArchiMate [22]. Furthermore, we added competing scenarios and different versions along a time series to meet the requirements of a distributed EA evolution. To show the applicability of our approach, we applied the described calculations and guidelines on a Neo4j graph database. Finally, we argued that our realization meets the stated requirements of a distributed EA evolution.
So far, we have shown the applicability of our approach. Next, the approach should be integrated into existing EA tools. If a tool allows defining attributes on model elements and their relations, the existence probability can be depicted easily. The same holds for modelling the competing scenarios and the different versions along a time series, provided the tool allows altering its meta-model freely, i.e., adding new model element types and relationship types. To create the needed reports, the tool needs to support freely formulated queries. Especially the last aspect is not easy to accomplish with the tools we know, since often some kind of scripting would be necessary.
Another way to enrich existing tools with the advantages of our approach is to put a graph database in front of the EA tool. The graph database would handle the uncertainty as presented in Section 3.3, and an export would be generated which can be handled by the EA tool. In this case, the database becomes the data master for the EA model. Therefore, all sources for the EA model have to be linked to the database, which processes the data and delivers the results to the EA tool.
Besides the lack of integration into existing EA tools presented above, our research still has some limitations: We implicitly assumed a continuous evolution of the EA model. In other words, we expect that a new state added to our time series belongs to a point in time later than all states persisted so far. Consequently, there are no mechanisms yet which could handle a change of an included state or the addition of a state between two existing states. A possible option could be not to copy the current EA each time but to use symbolic links, or to keep only the changes to the model rather than the whole model.
References
Managing Web Resource Compositions
Mahdi Bennara, Youssef Amghar, Michael Mrissa
To cite this version:
Mahdi Bennara, Youssef Amghar, Michael Mrissa. Managing Web Resource Compositions. International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, Jun 2015, Larnaca, Cyprus. 10.1109/WETICE.2015.46. hal-01212728
HAL Id: hal-01212728
https://hal.archives-ouvertes.fr/hal-01212728
Submitted on 7 Oct 2015
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Managing Web Resource Compositions
Mahdi Bennara, Youssef Amghar
Université de Lyon, CNRS
INSA-Lyon, LIRIS UMR5205
F-69621, France
{mahdi.bennara, youssef.amghar}@liris.cnrs.fr
Michael Mrissa
Université de Lyon, CNRS
Université Lyon 1, LIRIS UMR5205
F-69622, France
michael.mrissa@liris.cnrs.fr
Abstract—Nowadays, the use of RESTful Web services promotes stateless service interaction and decentralized hypermedia-driven discovery and composition. However, there is a need for models and tools to drive user interaction as well as description, discovery and composition of RESTful services. In this paper, we provide a solution to help users manage, share and discover workflows of RESTful Web services. We annotate RESTful Web services with semantic information, and introduce the notion of composition directory as a Web resource that assists a user in sharing, managing and discovering workflows. Users' composition directories form a decentralized repository of service workflows connected by hypermedia links. We illustrate the benefits of our approach with a typical scenario and show through a set of experiments that the breadth-first search algorithm combined with the exploitation of semantic annotations efficiently answers users’ goals by crawling through composition directories.
Keywords-RESTful Web services, linked services, semantic Web, composition
I. INTRODUCTION
The Web has moved from a Web of documents to a distributed application platform where applications are exposed as Web resources, as witnessed by the growing number of available APIs1. Leading research topics are related to the discovery, composition and invocation of Web resources via their APIs. In addition, the emergence of semantic Web technologies gives the opportunity to improve the use of APIs with semantic annotation of Web resources. Semantic annotation helps to drive the interaction with APIs by providing an explicit description of domain-specific information about resources.
Another key concept that drives today's Web is distributed affordance. Affordance is the ability for a user to use a Web resource. The idea is to dynamically create affordances based on the information already present in a resource representation, with knowledge from distributed sources [12]. Distributed affordance combines the information on resources with the knowledge on service providers, as well as user profiles, in order to generate possibilities for manipulating Web resources. It should allow client-side software to dynamically drive the interaction with Web resources, so that service providers do not have to anticipate user interaction
and can avoid deploying static business processes that constrain users.
In order to enable distributed affordance, Web resources must be semantically described, and user agents need to be able to exploit such descriptions. In this paper, we build on previous work on semantically annotating Web resources [1] to facilitate resource discovery and browsing. We introduce the concept of a composition directory to help users manage and share compositions, and show that the breadth-first search algorithm can be used in this context to crawl and discover resources according to a composition workflow the user provides.
Our paper is organized as follows: Section II introduces the challenges we want to overcome while trying to solve the composition problem. Section III presents related work and highlights the advantages our solution offers. Section IV details the main aspects of our contribution and shows its innovation. Section V shows a scenario example and details how our prototype operates in the context of the scenario to demonstrate the applicability of our solution. Section VI discusses our results and gives some guidelines for future work.
II. CONTEXT AND CHALLENGES
In the context of our work, Web services are seen as Web resources that comply with the REST architectural style. The REST architectural style is based on the notion of a resource as a conceptual entity that represents abstract or concrete things such as books, orders, payments, etc. Resources are identified by URIs, and their state is passed to the client through representations using the adequate media type according to the principle of content negotiation. In this paper, we consider a RESTful Web service as a set of resources that provide coherent access to the state and functionality of the software it represents [9].
Another principle that drives the REST architectural style is the HATEOAS2 principle. Using HATEOAS requires hyperlinks to be established between Web resources, which form an open and very large graph. HATEOAS means that the discovery process is realized progressively; user agents
1http://www.programmableweb.com
2Hypermedia As The Engine Of Application State
should be able to discover other Web resources accessible from any given resource in the graph.
In this context, we identified several challenges to address, which can be summarized as follows:
- Web resource description and interlinking: resources need to be appropriately described with semantic annotations and also linked to each other with hyperlinks to enable client-side discovery (how to interact with the resource) and crawling (how to discover other resources from a given one according to the HATEOAS principle).
- Web resource discovery: as a follow-up to the first challenge, user agents should be able to run an efficient algorithm to crawl between resources and exploit their annotations to realize users' objectives.
In order to answer these challenges, we build on previous work to annotate resources. We introduce the notion of composition directory to manage and share composition workflows, and we show that the breadth-first search algorithm can be used to crawl through and efficiently discover Web resources.
III. RELATED WORK
In this section, we overview existing work on Web resource discovery, resource composition and graph discovery algorithms.
A. Resource Discovery
RESTdesc [13] is based on the Notation3 RDF syntax. It inherits the full operational semantics of Notation3, which allows for versatile discovery methods: all the advances in the reasoning domain around Notation3 can be leveraged to determine whether a resource satisfies a set of conditions defined for the discovery. More advanced reasoning is performed in order to achieve service matching. The authors consider this an important prerequisite for services to contribute to the future Web of clients, because new functionality can only be obtained by on-demand compositions tailored to a specific problem [13].
RESTdoc [3] is a format that combines multiple microformats in order to semantically describe RESTful resources. RESTdoc offers a discovery mechanism that distinguishes two aspects of the REST service discovery problem: (1) discovery as a client, or discovery as you browse, which concerns client-side browsers and uses the HTML Link element of a Web site to point to other related resources, and (2) discovery as a service, also called automated discovery, which is the ability for a service to access and link to other related resources in the same application domain. The solution provided by RESTdoc is a fully peer-to-peer discovery mechanism. The main idea is to construct a graph by following links and identifying resources; this graph can later be extended to explore new related resources.
LinkedWS [6] is a Web service discovery model based on social networks. The idea behind LinkedWS is to construct a social network for every service on the Web in order to allow a social-based discovery process. The social network is built the first time the service participates in a composition. Generally speaking, the social network of a Web service consists of nodes and edges: the nodes represent objects or entities (book, person, organization, etc.) and the edges represent relationships between nodes (distance between two cities, relationship between two persons, etc.). Each edge has its own weight, which is used by the search or ranking algorithms that navigate through the network in order to find Web services that suit specific purposes. Every Web service is the entry point of its own social network. LinkedWS allows the discovery of additional Web services for a specific composition; each discovery triggers the re-evaluation of the weight of the edge that led to it.
Our proposition aims to enhance the discovery process by embedding the related resources directly in the resource description. It also allows the discovery of resources using existing composition workflows.
B. Resource Composition
Many researchers have addressed the problem of describing the semantics of the sequencing of execution flows, and several solutions have been proposed.
One of the most important works in this domain is the BPMN3 [14] specification. BPMN specifies a set of flow-control sequences that describe the progress of a process. Its main interest for us is that it can be used to construct and store dynamic service composition processes, which can be reused afterwards by another user who wants to perform a similar service composition. BPMN relies on a process-oriented rather than a resource-oriented approach. This may conflict with the principles of the REST architectural style; nevertheless, some of its concepts carry over naturally to resource-oriented architectures.
Linked USDL4 [10] is another work in this direction. Unlike BPMN, the Linked USDL vocabulary has been designed specifically for the service-oriented domain, making it easier to adapt to our solution. Important concepts introduced by this vocabulary include Service, ServiceOffering and InteractionPoint, as well as service roles such as Producer, Provider and Intermediary, which constitute the main semantic concepts of workflow control in service-oriented architectures. Linked USDL is used in many projects, and it has proved its efficiency for the service community.
BPEL for REST [9] proposes to reuse the BPEL language principles and apply them to the REST architectural style. BPEL for REST either uses the WSDL 2.0 HTTP binding to wrap RESTful services, or extends BPEL with dedicated activities that invoke resources directly through their uniform HTTP interface.
3Business Process Modelling Notation
4Unified Service Description Language
C. Graph discovery algorithms
Exploring very large graphs such as the Web requires efficient algorithms in order to obtain acceptable response times. As we are discovering resources on the Web, the efficiency of the exploration algorithm is one of the most important elements of our research work. In practice, the classic algorithms are not used as such, because the size of the Web would lead to prohibitive response times. Instead, variants of these algorithms with specific parameters (often limiters) are used in order to yield reasonable response times and acceptable results. The best-known examples are the Breadth-First Search and Depth-First Search algorithms. Other algorithms include variants of these two with limiting parameters, for example limiting the depth of the search (the number of consecutive edges counted from the root), also known as depth-limited search, or limiting the total number of nodes accessed during the whole process [11] [2].
According to [8], the Breadth-First Search graph traversal algorithm yields high-quality pages early on in a crawl; in other words, the pages/resources most relevant to the search are discovered early in the process. In our work, this means Breadth-First Search finds the most relevant resources to answer a user's request by finding the (one or more) resources that can perform the tasks needed to answer the request. In addition, Breadth-First Search is a very natural search strategy in the context of the Web. Compared to other efficient search algorithms, it also has a relatively low computational cost for a large-scale graph such as the Web.
IV. CONTRIBUTION
Based on the related work presented above, we have built a solution that promotes the concept of composition directory and uses the breadth-first search algorithm to manage, share and crawl Web resources. Our solution must satisfy the following requirements in order to facilitate resource discovery and composition:
- Scalability: the increasing number of today’s Web APIs makes the scalability of solutions important.
- Responsiveness: the increasing number of users generates a heavy load of requests on servers. We want server responses to be as fast as possible in order to handle all the requests in a reasonable time.
- Diversity: we want our resource descriptions to propose rich and diverse links to other resources, so that every resource gets a chance of being used. In other words, a user who makes a request should receive varied propositions rather than only the popular services in a given field, thus giving less popular services the chance to emerge if users are interested in what they offer.
- Dynamism: the results of the resource discovery process should not be static; they may differ from one request to another. On the one hand, the availability of the resources involved and the context of the request may have changed in the meantime; on the other hand, the user context may also have changed, so users might not get the same results once they have browsed new resources that impact the response.
- Serendipity: the serendipity concept allows APIs to be used in non-predefined processes. In other words, we do not want clients to use APIs in a deterministic way where every next API to use is already known in advance.
Today, the main advances in Web resource composition are centered around the description of resources. These advances focus on how to describe a resource so as to give as much information as possible about its nature, its activity and the type of data it exchanges. Too few efforts focus on how a resource links to other resources and how to follow these links, or on how to manage and share composition workflows. These latter aspects are addressed in the following, in order to enable value-added resource discovery and composition.
A. Describing and Discovering Resources
In order to semantically describe resources, we rely on the notion of resource descriptor discussed in [1]. The resource descriptor has been slightly modified, and the data model we use to implement the descriptor concept is the Hydra core vocabulary [4]. The main reason for this choice is that Hydra defines explicit semantics for its Operation and Link elements, the two major elements present in the descriptor.
The resource representation contains the business-level information about the given resource, while the descriptor contains semantically annotated information on how to use this resource (the HTTP operations available on the current resource, not on other resources) plus information about the related resources (links to other related resources). Hence, resource descriptors separate resource representations from their descriptions, to promote separation of concerns between resource interaction and management (discovery and composition). Every resource must have a descriptor to enable machine-to-machine interaction. Since descriptors are themselves resources, descriptors must in turn have their own descriptors. To close this regression, we define the Universal Descriptor5, which describes all resources including itself.
With the help of resource descriptors, a generic client is able to interact with resources and to crawl from one to another in order to compose resources. From a resource URI, a client can get its descriptor by executing a GET or HEAD operation on the given URI and checking the Link header element in the HTTP response. Another GET operation on the retrieved link returns an HTTP response with the descriptor containing all the necessary metadata, which describes not only the interaction model of the current resource, but also annotated links to internal or external resources that can be composed together with the given resource. In order to document the semantics of the sets of links and the operations present in a given resource descriptor, we rely on ontology concepts. On the operation side, we want to know the exact concept that is accomplished by the operation being described; this concept is part of a larger global composition process that involves several other operations on different resources. Describing operations therefore requires ontologies of operational semantics that express how to realize a complex task as a combination of simpler, specific tasks.
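As a minimal sketch of this retrieval step (our own illustrative code, not the prototype's: the class name, the describedby relation shown in the comment and the naive Link-header parsing are assumptions):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DescriptorFetcher {
    /** Fetches the raw descriptor document of the resource at resourceUri,
     *  following the Link header advertised by the resource. */
    public static String fetchDescriptor(String resourceUri) throws Exception {
        // Step 1: HEAD request on the resource to read its Link header (saves bandwidth)
        HttpURLConnection head = (HttpURLConnection) new URL(resourceUri).openConnection();
        head.setRequestMethod("HEAD");
        String link = head.getHeaderField("Link"); // e.g. <http://example.org/desc>; rel="describedby"
        head.disconnect();
        if (link == null) throw new IllegalStateException("resource exposes no descriptor link");
        // Naive extraction of the first <uri> in the header; a real client would
        // parse RFC 5988 link values properly
        String descriptorUri = link.substring(link.indexOf('<') + 1, link.indexOf('>'));
        // Step 2: GET the descriptor itself
        HttpURLConnection get = (HttpURLConnection) new URL(descriptorUri).openConnection();
        try (InputStream in = get.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```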
B. Discovery Process
The discovery process we introduce in this work relies on the semantic annotations of operations in the descriptors. When a user enters a request, it is processed by a reasoner in order to determine which actions should be realized to prepare a response, as presented in previous work [7]. These actions are represented by an ordered list of ontology concepts. The discovery process takes as input this list of concepts as well as an entry point (the URL of a Web resource). Starting from this entry point, the process tries to find resources on the Web that provide the operations required to respond to the request, on the basis of the concept list taken as input.
We consider the Web as a big directed cyclic graph, where nodes are resources and edges are Web hyperlinks. We adopt the Breadth-First Search (BFS) algorithm in order to traverse the Web in search of resources that provide the operations corresponding to the concepts of the request.
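The sketch below illustrates this traversal under simplifying assumptions: descriptors are abstracted as already-parsed objects exposing their annotated operation concepts and outgoing links, and the DescriptorSource stands for the HEAD/GET retrieval shown earlier. All names are ours, not part of the Hydra vocabulary, and the node budget plays the role of the limiters mentioned in Section III-C:

```java
import java.util.*;

/** Minimal abstraction of a parsed descriptor (an assumption, not Hydra syntax). */
record Descriptor(Set<String> operationConcepts, List<String> links) {}

interface DescriptorSource { Descriptor fetch(String uri); }

public class BfsDiscovery {
    /** Crawls breadth-first from entryPoint and maps each requested concept to a
     *  resource URI providing it, visiting at most maxNodes nodes. */
    public static Map<String, String> discover(DescriptorSource source, String entryPoint,
                                               Set<String> concepts, int maxNodes) {
        Map<String, String> found = new HashMap<>();
        Set<String> remaining = new HashSet<>(concepts);
        Set<String> visited = new HashSet<>();
        Deque<String> frontier = new ArrayDeque<>(List.of(entryPoint));
        while (!frontier.isEmpty() && !remaining.isEmpty() && visited.size() < maxNodes) {
            String uri = frontier.poll();              // take the oldest URI: BFS order
            if (!visited.add(uri)) continue;           // the Web graph is cyclic
            Descriptor d = source.fetch(uri);          // HEAD then GET, as sketched above
            for (String concept : d.operationConcepts())
                if (remaining.remove(concept)) found.put(concept, uri);
            frontier.addAll(d.links());                // enqueue the annotated links
        }
        return found;
    }
}
```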
C. Managing and Sharing Compositions
In order to enable users to record, reuse, manage and share their composition workflows, we propose a specific resource called composition directory. The Composition Directory resource contains information about its owner, and a sub-resource called repository that stores, as sub-resources, the composition workflows the user creates. The Composition Directory of a user links to the Composition Directories of other connected users. Note that this is completely compatible with the descriptor concept, because the links to other Composition Directories, the Repository and the created composition scenarios constitute the external part of our descriptor.
We define the following API to specify the possible interactions with Composition Directories:
1) GET on the base URI of a Composition Directory should send back the information about this Composition Directory and its owner.
2) GET on the Repository of the Composition Directory should send back the set of links to every composition on the Repository.
3) GET on a specific composition URI should send back the representation of the composition. This may require an authentication and may send a 401 code (Unauthorized) in case the authentication fails.
4) POST from the user on the Repository of his Composition Directory should create a new composition. The composition attributes and its accessibility should be indicated by the user beforehand in the representation.
5) POST from the user on his own Composition Directory should add the Composition Directory URI of another user that exposes compositions of interest to him.
This design offers many advantages. First, it is a scalable and decentralized solution, as every user stores a part of the compositions on the Web. It also respects the serendipity concept, as a given client may find part of the solution to the user's problem in another user's composition set. Our model includes access-control features relying on HTTP authentication: we define public compositions that everyone on the Web can access from their URI, and private compositions that are not disclosed and require authentication. A sketch of this API is given below.
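A minimal Jersey-style sketch of the directory API above (paths, class names and the in-memory store are illustrative assumptions, not the prototype's actual code; authentication and operation (5) are omitted):

```java
import javax.ws.rs.*;
import javax.ws.rs.core.*;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

@Path("/directory")
public class CompositionDirectory {
    private static final Map<String, String> repository = new ConcurrentHashMap<>();

    @GET @Produces(MediaType.APPLICATION_JSON)          // (1) directory and owner info
    public String owner() {
        return "{\"owner\": \"alice\"}";
    }

    @GET @Path("/repository") @Produces(MediaType.APPLICATION_JSON)
    public Set<String> compositions() {                 // (2) links to every composition
        return repository.keySet();
    }

    @GET @Path("/repository/{id}") @Produces(MediaType.APPLICATION_JSON)
    public String composition(@PathParam("id") String id) { // (3) one composition; a real
        String c = repository.get(id);                   // service would answer 401 for
        if (c == null) throw new NotFoundException();    // private compositions
        return c;
    }

    @POST @Path("/repository") @Consumes(MediaType.APPLICATION_JSON)
    public Response create(String workflow) {            // (4) record a new composition
        String id = UUID.randomUUID().toString();
        repository.put(id, workflow);
        return Response.created(UriBuilder.fromPath("/directory/repository/" + id).build()).build();
    }
}
```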
V. TESTS AND EVALUATION
In this section, we illustrate our contribution with different scenarios, detail our implementation setup, and discuss the results obtained.
A. Illustrative Scenarios and Experiment Setup
In order to illustrate our approach, we consider three scenarios. The first scenario involves three different Web resources and illustrates how resources and descriptors are organized, and how resources link to each other using descriptors. It includes:
- A book selling service: users can select books, read abstracts and place an order.
- A shipping service: its task is to deliver goods that users buy online.
- An online payment service: the task of this service is to debit money from users’ bank accounts to the benefit of online stores for the goods they buy.
The second scenario involves two users, to illustrate the fact that users do not use Web resources in the same way. Each user orchestrates services in a different manner and shares the orchestration with the other user, or with everyone on the Web, using the Composition Directory mechanisms. The Web resources involved are similar to the first scenario, with an additional computer accessories store where users can select accessories, read descriptions and place an order. The first user's objective is to buy a book online, pay for it online and receive it by mail. The second user wants to buy some computer accessories, pay for them online and receive them by mail. The third scenario involves 25 resources with their descriptors. These resources can perform multiple actions, which are annotated in the descriptors. The objective of this scenario is to illustrate the discovery process with the descriptor mechanism. The prototype illustrating our work is available online6.
5http://soc.univ-lyon1.fr/universal.md
In our work, we rely on the Java™ language and the Jersey framework7 to implement our services, and on JavaScript as a client-side scripting language in our Web pages. We use the Google Gson module8 to manipulate JSON objects in Java, and Apache Tomcat9 as an application server to host our different Web resources. Our descriptors rely on the Hydra Core vocabulary [5] to describe resources. The machine used for the experiments has an Intel® Core i5-3340M CPU and 8 GB of RAM. The Web browser used is Mozilla™ Firefox. The tests were performed on the local university network.
B. Evaluation and Discussion
In this section, we discuss our implementation choices and their impact on the challenges and the properties we want to achieve. As we use the REST architectural style to build our solution, compliance with the Web's constraints is ensured by design.
1) Resource Description: We evaluate and discuss our contribution to the resource description challenge on the basis of the first scenario. The use of descriptors allows us to benefit from the separation of concerns between business-level information and description-level information (the metadata contained in the descriptor). The descriptor also gives access to the relations with other Web resources, which enables interactions according to HATEOAS. The first scenario illustrates this separation well.
In our example, the book selling resource only contains business-layer information such as the name of the service and a brief description. Its descriptor, on the other hand, contains links to internal and external related resources (the shipping and payment services, for instance) as well as the functioning of the resource itself, expressed with semantic annotations. The order resource in the book selling service gives another example of resource description: a GET operation returns the representation of the repository with all the placed orders and their statuses (with per-user authentication), a POST operation takes an order as request body and adds it to the repository as an unpaid order, a PUT operation changes the order state, and a DELETE operation (only available when the order has been fully completed) deletes an order from the repository.
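A minimal Jersey-style sketch of such an order resource (illustrative names and status values, not the prototype's actual code; authentication is omitted):

```java
import javax.ws.rs.*;
import javax.ws.rs.core.*;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

@Path("/orders")
public class OrderResource {
    private static final Map<String, String> orders = new ConcurrentHashMap<>(); // id -> status

    @GET @Produces(MediaType.APPLICATION_JSON)
    public Map<String, String> list() {            // all placed orders and their statuses
        return orders;
    }

    @POST @Consumes(MediaType.APPLICATION_JSON)
    public Response place(String order) {          // adds the order as unpaid
        String id = UUID.randomUUID().toString();
        orders.put(id, "unpaid");
        return Response.created(UriBuilder.fromPath("/orders/" + id).build()).build();
    }

    @PUT @Path("/{id}")
    public void changeState(@PathParam("id") String id, String state) { // e.g. unpaid -> paid
        orders.replace(id, state);
    }

    @DELETE @Path("/{id}")
    public Response remove(@PathParam("id") String id) { // only once fully completed
        if (!"completed".equals(orders.get(id)))
            return Response.status(Response.Status.CONFLICT).build();
        orders.remove(id);
        return Response.noContent().build();
    }
}
```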
2) Resource Composition: We evaluate and discuss our contribution to the composition challenge with the second scenario. Composition directories allow users to create, store and share compositions of Web resources. Compositions can thereafter be entirely or partially reused by the user himself or by other authorized users. This solution allows the dynamic creation of new compositions rather than following inflexible server-side compositions. It also allows large-scale sharing of popular compositions that users find useful, and offers flexible ways to reuse and adapt compositions to users' needs. We do not rely on a central repository to store the compositions.
The second scenario illustrates these statements: the first user can create a composition for the process of buying a book, involving the shipping and online payment services. He can share this composition with the second user as a generic way of buying things online with delivery and online payment. The second user can reuse the same composition if he wants to buy a book, or only a part of it if he wants to buy computer accessories. The creation of a new composition, as well as its reuse, depends on the discovery process discussed below.
3) Resource Discovery: We evaluate and discuss the application of the Breadth-First Search algorithm for discovery of resources in our work.
One important thing to note when applying the algorithm is that, in the context of descriptors, nodes are not simple but double: a node of our graph is a resource together with its descriptor, while the edges are simply the links in the descriptor of a given resource. In order to avoid unnecessary use of bandwidth, we use HEAD requests on resources instead of GET to retrieve the descriptor link, and then a GET request to retrieve the descriptor contents (as sketched in Section IV).
We illustrate the application of the algorithm on our resources with the third scenario. The algorithm takes a set of concepts and an entry point as input data; it crawls through the graph formed by resources and their descriptors in order to return the results. The timings of our experiment are shown in Table 1 and discussed below.
The number of nodes indicates the total number of nodes in the graph that were traversed in order to give a full response to the request. The response time indicates the time in milliseconds taken to respond to the request. The response time per node is simply the response time divided by the number of nodes. The response time per node is under 20 milliseconds when the search involves a small number of nodes; as the number of nodes grows, it becomes lower, due to the caching mechanism of the Web browser and the factoring of descriptors for resources that share the same descriptor contents. In other words, additional GET requests on a descriptor shared by multiple resources in the graph are processed faster, as the results are stored in the local cache.
7https://jersey.java.net/
8https://code.google.com/p/google-gson/
9http://tomcat.apache.org/
This experiment demonstrates that response times grow sub-linearly with the number of nodes, as the global response times indicate. Hence, our approach scales quite well: the composition directories as well as the descriptions are decentralized, and each resource stores a part of the global graph. Our approach also ensures dynamic results, as the content of descriptors, especially their links part, may change over time. Diversity is ensured as well, since the algorithm may find multiple resources that implement a specific operation needed in the process; this number can be limited before the algorithm starts and is given as input together with the concepts. Serendipity also has its share in our approach: the order in which the links of a descriptor are crawled may influence the final result, giving more or less popular resources a chance to be used within the composition process.
VI. CONCLUSION
In this paper, we have introduced the notion of composition directory as a REST resource that allows users to record, manage and share composition workflows. We have proposed a solution to link users' composition directories to each other over the Web, forming a distributed directory of composition workflows that can be crawled with graph traversal algorithms. We demonstrated the adequacy of our solution with a set of experiments that rely on the well-known breadth-first search algorithm to discover Web resources according to a set of concepts that represent a user's goal. The obtained results show the scalability of our proposal. As future work, we envision extending our solution with quality models combined with user-side reasoning to enhance the efficiency of the discovery algorithm. We also aim to enable the automatic reuse of compositions by reasoning about their semantic annotations in order to respond to a user's request.
REFERENCES
POSIX threads parallelization for example of Particle-In-Cell density calculations in plasma computer simulations
Anna Sasak*, Marcin Brzuszek
Institute of Computer Science, Maria Curie Sklodowska University, pl. M. Curie-Sklodowskiej 1, 20-031 Lublin, Poland.
Abstract – The TRQR program [1–4] simulates trajectories of charged particles (electrons or ions) in the electromagnetic field. TRQR is based on the Particle-In-Cell method, whose basic principle is the use of computational particles (called macro particles) that represent a large number of real particles of the same kind moving in the same direction. The program calculates the particle charge density distribution and potential distribution for chosen ion sources, analyses particle behaviour in the electromagnetic field, and describes the process of beam extraction from the source. A number of factors influence the simulation results. In order to improve efficiency, the program has been parallelized. This paper presents the process of converting chosen parts of the TRQR program into a multi-thread version. In the first step the program was moved from Fortran 77 to C++. Then it was parallelized using the Pthread library with the standard API for C++ contained in the POSIX IEEE 1003.1c standard. Each thread has its own stack, set of registers, program counter, individual data, local variables and state information. All threads of a particular process share one address space, general signal operations, virtual memory, data, input and output. The mutex functions were used as a synchronization mechanism. This paper presents the analysis of a particular piece of the main program that implements the computation of the particle density distribution. The paper presents execution time dependencies for different simulation parameters such as the number of macro particles, the size of the simulation mesh and the number of threads used.
1 Introduction
Due to the complexity of the physical processes involved, computer simulations of plasma behaviour in ion sources are still a great challenge for programmers. One of the methods of computing the trajectories of charged particles in the electromagnetic field is the Particle-In-Cell method. In the PiC method, a large number of particles such as ions or electrons in a plasma or beam is represented by a smaller, numerically tractable number of so-called 'macro-particles'. Each macro-particle behaves like a single particle of a certain kind, but carries a charge large enough to represent all the real particles.
This paper presents the results of migrating one part of the TRQR program to parallel mode. First, the program was moved from Fortran 77 to C++ and then parallelized using the Pthread library. The paper presents the results of simulations for different parameters such as the number of threads used, the number of macro particles and the mesh size.
2 TRQR - principle of operation
The TRQR program was developed in order to study plasma behaviour as well as the process of extraction and formation of the ion beams emitted from the plasma ion sources. The method implemented for computer simulation consists of the following steps:
1. Setting the system's geometry and parameters, such as the number of particles, and generating the initial distribution for all kinds of particles.
2. Calculations of particles density distributions for chosen ion sources using the PiC method.
3. Solving the Poisson equation for the charge density obtained in the previous step and the boundary conditions imposed by electrodes.
4. Calculation of electrical field in the grid points.
5. Solving the Lorentz equations of motion for each particle.
6. Generating new particles if it is needed due to hits on electrodes and plasma chamber walls.
This procedure (steps 2 to 6) is repeated until a final state is achieved [3].
The special subject of interest for this paper is the particle-in-cell (PiC) method, on which the second step of the simulation is based. The simulation space is divided into small regions creating a spatial mesh. The method weights particles to grid points using a particle shape factor to obtain the charge on the grid. This distribution process is carried out with one of two possible schemes. The first method, called nearest grid point (NGP), assigns the macro-particle charge to the grid point nearest to the particle's position. In the second one, called cloud-in-cell (CiC), fractions of the macro-particle charge are assigned to the 8 (in the case of 3D calculations) nearest grid points. An even better charge distribution is obtained if, in the CiC method, the macro-particle charge is distributed among the 27 nearest grid points [4].
3 POSIX threads
In architectures with shared memory, threads can be used to implement parallelism. For Unix systems, a standardized C language threads programming interface has been specified by the IEEE POSIX 1003.1c standard. This POSIX standard, published in 1995, is included in Unix system distributions.
Technically, a thread is defined as an independent stream of instructions that can be scheduled to run as such by the operating system. The comparison between threads and processes is presented in Table 1.
What needs to be emphasized is that in the case of threads, reading and writing the same memory locations is possible, and therefore requires explicit synchronization by the programmer.
The subroutines which comprise the Pthreads API can be informally grouped into three major classes (all included in the Pthreads library):
1. Thread management – the group of functions that work directly on threads - creating, detaching, joining, etc. Here are also included the functions that set thread attributes.
2. Mutexes (abbreviation for 'mutual exclusion') – the functions that deal with synchronization. The mutex functions provide for creating, destroying, locking and unlocking mutexes, as well as setting or modifying mutex attributes.
3. Condition variables – the functions that address communication between threads that share a mutex. They are based upon programmer-specified conditions. This class includes the functions to create, destroy, wait and signal based upon specified variable values. In this paper, condition variables are only mentioned without further analysis, as they were not implemented in the pthread parallelization presented here.
Table 1. Process and thread features comparison.
<table>
<thead>
<tr>
<th>PROCESS</th>
<th>THREAD</th>
</tr>
</thead>
<tbody>
<tr>
<td>
• Created by the operating system<br>
• Requires a fair amount of overhead<br>
• Contains information about program resources and program execution state, including:<br>
– process, process group, user and group IDs,<br>
– environment,<br>
– working directory,<br>
– program instructions,<br>
– registers,<br>
– stack,<br>
– heap,<br>
– file descriptors,<br>
– signal actions,<br>
– shared libraries,<br>
– inter-process communication tools.
</td>
<td>
• Uses and exists within the resources of the creating process<br>
• Duplicates only the bare essential resources that enable it to exist as executable code<br>
• Shares with other threads in the same process:<br>
– global and static variables,<br>
– heap and dynamic variables (two pointers having the same value point to the same data),<br>
– operating system resources (files),<br>
– process instructions.<br>
• Each thread has a unique:<br>
– set of registers and stack pointer,<br>
– stack for local (automatic) variables,<br>
– priority,<br>
– thread ID.
</td>
</tr>
</tbody>
</table>
4 Thread creation
Initially, the main() program comprises a single thread. All other threads must be created explicitly by the programmer. Once created, threads are peers and may create other threads; there is no implied hierarchy or dependency between them. A new thread is created by calling the int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg) subroutine. The arguments of this function, in order of appearance, stand for: the unique identifier for the new thread, returned by the subroutine; an attribute object that may be used to set thread attributes; the C routine that the thread will execute once it is created; and a single argument that may be passed to start_routine. An attribute parameter set to NULL means that default attributes are used; otherwise it points to a pthread_attr_t structure whose members include the detached state, scheduling policy, stack address and size, etc. As mentioned before, the pthread_create() routine permits the programmer to pass only one argument to the thread start routine. To overcome this limitation, a structure containing all of the arguments to be passed should be created, and a pointer to that structure passed to pthread_create().
The code fragment below creates NTH threads with a default set of attributes; each thread executes the routine `thread_func_dens` with the parameters from the corresponding cell of the array `tab_th_data`.
```c
#include <pthread.h>

#define NTH 8 /* number of worker threads (example value) */

struct th_data {
    long idoms; // starting cell of the global density matrix
    long idome; // ending cell of the global density matrix
    long NNion; // number of ions per thread
};

pthread_t th_ids[NTH]; // array that contains the thread ids
th_data tab_th_data[NTH]; // per-thread data, passed as a structure pointer to the executed routine

void *thread_func_dens(void *ptr) {
    ...
    pthread_exit(NULL);
}

void main(...) {
    ...
    for (int w = 0; w < NTH; w++)
        pthread_create(&th_ids[w], NULL, thread_func_dens, (void *)&tab_th_data[w]);
    ...
}
```
5 Threads synchronization and termination
There are several ways in which a thread may be terminated. The most common is either when the thread returns from its starting routine or when the thread makes call to the `pthread_exit()` subroutine. Typically, the `pthread_exit()` routine is called after a thread has completed its work and is no longer required to exist. If main() finishes before the threads it has created, and exits with `pthread_exit()`, the other threads will continue to execute. Otherwise, they will be automatically terminated when main() finishes. The programmer may optionally specify a termination status, which is stored as a void pointer for any thread that may join the calling thread.
One way to accomplish synchronization between threads is so-called 'joining'. The int pthread_join(pthread_t th, void **thread_return) subroutine blocks the calling thread until the thread specified by the th argument terminates. The programmer is able to obtain, via the second argument, the target thread's termination status, though only if it was explicitly specified in the target thread's call to the pthread_exit routine. A given thread can be the target of only one pthread_join() call; it is a logical error to attempt multiple joins on the same thread. The following figure presents the scheme of a program that, after creating two worker threads, waits for them to exit and then resumes its execution.

**Fig. 2.** Threads synchronization.
The fragment of main function that stops program execution until all created threads exit would have the following form:
```c
void main (...) {
    ...
    for (int ii = 0; ii < NTH; ii++)
        pthread_join(th_ids[ii], NULL); // execute as many pthread_join calls
                                        // as pthread_create calls were
                                        // executed before
    ...
}
```
6 Mutual exclusion
Mutex variables are one of the primary means of implementing thread synchronization and of protecting shared data when multiple writes occur. A mutex variable acts as a 'lock', or a semaphore, protecting access to a shared data resource: the critical section. With the basic mutex concept, only one thread can own (that is, lock) a mutex variable at any given time. Thus, even if several threads try to lock a certain mutex, only one of them will succeed, reserving access to the protected resource for itself. The shared data resource becomes available again only when the owner unlocks the mutex. This scheme is a safe way to ensure that when several threads update the same variable, the final value is the same as it would be if only one thread performed the update.
The typical sequence of steps in the use of a mutex is as follows:
1. a mutex variable is created and initialized,
2. several threads attempt to lock the mutex,
3. only one of them succeeds and that thread owns the mutex,
4. the owner thread performs a set of actions,
5. the owner unlocks the mutex,
6. another thread acquires the mutex and repeats the process,
7. finally the mutex is destroyed.
The mutex variable must be declared with the type `pthread_mutex_t` and initialized before it can be used. Initialization can take two forms:
(1) static with the instruction
```c
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
```
(2) dynamic with `int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr)` routine.
Initially the mutex is unlocked. To set properties different from the default (specified as NULL), the second argument of the pthread_mutex_init routine should be used. A mutex that is no longer needed should be released with the pthread_mutex_destroy(pthread_mutex_t *mutex) routine.
Three standard routines are used to manage mutex access. The pthread_mutex_lock(pthread_mutex_t *mutex) routine is used to acquire a lock on the specified mutex variable; if the mutex is already locked by another thread, this call blocks the calling thread until the mutex is unlocked. The pthread_mutex_trylock(pthread_mutex_t *mutex) routine attempts to lock a mutex, but if the mutex is already locked it returns a 'busy' error code instead of blocking. The pthread_mutex_unlock(pthread_mutex_t *mutex) routine unlocks a mutex when called by the owning thread; an error is returned if the mutex was already unlocked or is owned by another thread [5].
The following example presents the way mutexes were used in our simulation.
```c
pthread_mutex_t ***tab_mutex; // 3D array of mutexes, one per mesh cell
...
/* initialize one mutex per grid cell */
for (int x = 1; x <= Nxx; x++)
    for (int y = 1; y <= Nyy; y++)
        for (int z = 1; z <= Nzz; z++) {
            int res = pthread_mutex_init(&tab_mutex[x][y][z], NULL);
        }
...
// creating threads with the pthread_create routine
...
/* a piece of code somewhere in the thread start_routine:
   lock the cell, update the shared density array, unlock the cell */
int err = pthread_mutex_lock(&tab_mutex[Nx][Ny][Nz]);
density_q[Nx][Ny][Nz][kj] += is;
int err2 = pthread_mutex_unlock(&tab_mutex[Nx][Ny][Nz]);
...
/* destroy the mutexes when the calculations are finished */
for (int x = 1; x <= Nxx; x++)
    for (int y = 1; y <= Nyy; y++)
        for (int z = 1; z <= Nzz; z++) {
            int res = pthread_mutex_destroy(&tab_mutex[x][y][z]);
        }
```
7 Parallel mode calculations
The simulation environment was a machine with two quad-core Intel Xeon processors (8 cores in total) and 16 GB of RAM, running the Mandriva operating system with the gcc 4.1.2 compiler. In the first step the program was moved from Fortran 77 to C++. Then it was parallelized using the Pthread library with the standard API for C++ contained in the POSIX IEEE 1003.1c standard.
During the simulation process, the measure that was analysed was the simulation time. It is a simple but relative measure, as the effort of creating a parallel version may not always be cost-effective compared to the reduction in simulation time it yields. The second performance criterion adopted for the plasma density thread parallelization is speedup, described by the formula \( S(p) = \frac{T(1)}{T(p)} \), where \( p \) stands for the number of threads and \( T(1) \) and \( T(p) \) for the simulation times with one and \( p \) threads, respectively [6].
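For instance (hypothetical numbers, for illustration only), a sequential run of \( T(1) = 100\,\mathrm{s} \) and an eight-thread run of \( T(8) = 25\,\mathrm{s} \) give a speedup of \( S(8) = 100/25 = 4 \); the ideal (linear) speedup would be \( S(8) = 8 \).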
8 Results of simulations
As was presented in paper [7], using the simplest charge density distribution technique with a large number of macro particles is the best solution as far as charge density calculations are concerned. For example, using NGP and 100 mill macro particles gives better results (i.e. more homogeneous distributions) in less time than using the CiC method and 20 mill macro particles. That is why all results presented in this paper are calculated for the NGP method, with different numbers of macro particles, different spatial mesh sizes and different numbers of threads used in the parallelization process.
Fig. 3 presents the simulation time for the NGP method with different numbers of macro particles and a mesh of size 100x100x100. The red line in each picture stands for the execution time of the sequential version of the algorithm.
Analyzing the above graphs, one can conclude that using only two threads gives an execution time close to the sequential version, and that using eight threads, which equals the number of available processors, gives the best reduction of execution time. Increasing the number of threads further, to nine and above, does not give any further reduction.
As the graphs obtained for simulations with different numbers of macro particles show similar results, Fig. 4 presents the speedup calculated for only one of them, the run with 200 mill macro particles. It confirms that a speedup close to 1 (which means close to the sequential execution time) is obtained for 2 threads, and that the highest speedup is gained for 8 threads.
In the next step the size of the mesh was changed to 50x50x50. Two simulations were done: the first for 200 mill macro particles (Fig. 5(a)); in the second one (Fig. 5(b)) the number of particles was changed proportionally to the change in mesh size, which gave approximately 25 mill macro particles. For both simulations speedup factors were calculated and presented in Fig. 6(a) and 6(b) respectively.
Fig. 3. Time of charge density calculations as a function of the number of threads used for the parallel run, using the NGP method, a mesh of size 100x100x100 and different numbers of macro particles: a) 2 mill, b) 20 mill, c) 100 mill, d) 200 mill.
Analyzing Fig. 5 and 6, it can be noticed that the maximum speedup gained with the parallelization dropped by about 40% compared to the previous simulation. Also, the number of threads required to reach an execution time close to the sequential one changed from 2 to 4.
Further tests were carried out for different mesh sizes, from 200x200x200 down to 15x15x15. For each of them the parallel version was run with 200 mill macro particles and 8 threads executing the calculations. The red line stands for the execution time of the sequential version of the algorithm.
Fig. 4. Speedup for the NGP parallel run, for 200 mill macro particles and a mesh of size 100x100x100.
Fig. 5. Time of charge density calculations versus the number of threads used for the parallel run, using the NGP method, a mesh of size 50x50x50 and different numbers of macro particles: a) 200 mill, b) 25 mill.
Fig. 7 shows that meshes of size 80x80x80 and bigger give quite a good execution time reduction when parallelized. For meshes of size 40x40x40 and smaller, running the parallel version of the algorithm brings no reduction of execution time.
Final tests were carried out for an asymmetrical mesh of dimensions 128x64x128 and 100 mill macro particles. The aim of this test was to examine whether the geometry of the mesh has any influence on the algorithm's performance. Fig. 8 presents the results of that simulation, both the simulation time and the speedup. The setup of this simulation is similar to the one presented in Fig. 3(c), and the results of both simulations are very close, which leads to the conclusion that only the number of cells influences the simulation time, whereas the mesh geometry has no influence on the POSIX thread parallelization performance.
Fig. 6. Speedup for the NGP parallel run, for a) 200 mill, b) 25 mill macro particles, mesh size 50x50x50.
Fig. 7. Time of charge density calculations versus the mesh size, with 8 threads used for the parallel run, using the NGP method and 200 mill macro particles.
Fig. 8. Time of charge density calculations and speedup for the parallel run, using the NGP method, the mesh of size 128x64x128 and 100 mill. of macro particles.
9 Conclusion
A direct advantage of program parallelization is a more effective use of the time assigned to the simulation process. This paper presented the POSIX Pthread library as one of the available methods of parallelization. So far, Pthread parallelization is implemented only for one part of the TRQR program, the charge density calculations, but it gives quite acceptable results, encouraging further research.
References
Lecture 2: A Relational Logic
and analysis with Alloy
declarative modelling of software
The Atlantic Divide
American school of formal methods (Pittsburgh, home of SMV)
- emphasis on verification algorithms (like SMV)
European school (Oxford, home of Z)
- emphasis on modelling (like Z, VDM, B, Alloy)
Alloy brings together
- automatic analysis (like SMV)
- logical notation (like Z)
Alloy is first order
First order effects
- finite interpretation
- generalized relational join operator
- no scalars or sets; all expressions are relation-valued
Novel features
- no need to distinguish scalars from singleton sets
- no constructors; compositions by projection
Design implications
- to allow exhaustive search
Atoms & relations
structures are built from atoms; atoms are
- indivisible: can't be broken into smaller parts
- immutable: don't change over time
- uninterpreted
What's atomic in the real world? Very little: a modelling abstraction.
[Figure: example relations over atoms, e.g. tomorrow linking Date0 to Date1, next linking State0 to State1, and a contains relation]
Types
The universe is a finite (but perhaps big) set that contains all atoms. It is partitioned into basic types, each a set:
FILESYSTEM = {FILESYSTEM0, FILESYSTEM2}
STATE = {STATE0, STATE1, STATE2}
PERSON = {ALICE, BOB, CAROL}
DATE = {JAN1, JAN2, ..., DEC31}
Atoms that share properties share a type. There is no subtyping, so
Employer, Employee in PERSON
Employer = {ALICE}
Employee = {BOB, CAROL}
Relations
definition: a relation is a set of tuples
typing: a relation type is a non-empty list of basic types
relations
- birthday: (PERSON, DATE)
- Alice: May 1
- Bob: January 4
- Carol: December 9
- likes: (PERSON, PERSON)
- Alice likes Bob
- Bob likes Carol
- Carol likes Bob
Relations as tables
- can view relation as table
- atoms as entries, tuples as rows
- order of columns matters, but not order of rows
- can have zero rows, but not zero columns
- no blank entries
Example
\[
\text{birthday} = \{(\text{ALICE, MAY1}), (\text{BOB, JAN4}), (\text{CAROL, DEC9})\}
\]
dimensions
size
- #p is an integer expression giving the size of p: the number of rows (finite, >= 0)
arity
- the number of columns (finite, > 0)
- unary, binary, ternary for arity k = 1, 2, 3
- a relation of arity k is a k-relation
homogeneity
- a relation of type (T, ..., T) is homogeneous, else heterogeneous
Relations as graphs: a 2-relation can be viewed as a graph, with atoms as nodes and tuples as arcs.
Example: likes = {(ALICE, BOB), (BOB, CAROL), (CAROL, BOB)}
Sets and scalars
sets and scalars are represented as relations
- set: a unary relation
- scalar: a unary, singleton relation
unlike standard set theory, there is no distinction between a, {a}, {{a}}, ...
examples (note the tuples):
Alice = {(ALICE)}
Employer = {(ALICE)}
Employee = {(BOB), (CAROL)}
PERSON = {(ALICE), (BOB), (CAROL)}
Ternary relations
for relationships involving 3 atoms, e.g. of type (PERSON, COMPANY, SALARY):
{(ALICE, APPLE, $60K), (BOB, BIOGEN, $70K)}
for associating binary relations with atoms, e.g. birthdayRecords:
{(BB0, ALICE, MAY1), (BB0, BOB, JAN4), (BB1, CAROL, DEC9)}
Left and right types
- the left (right) type of p is the first (last) basic type of p
Left and right sets
- the left (right) set of p is the set of atoms in the left-(right-)most column
examples, for likes = {(ALICE, BOB), (BOB, CAROL), (CAROL, BOB)}:
left-type(likes) = right-type(likes) = PERSON
left-set(likes) = {(ALICE), (BOB), (CAROL)}
right-set(likes) = {(BOB), (CAROL)}
Set operators
standard set operators:
- union p + q: contains the tuples of p and the tuples of q
- intersection p & q: contains all tuples in both p and q
- difference p - q: contains the tuples in p but not in q
- subset p in q
- equality p = q: p and q contain the same set of tuples
interpretation of +:
- for scalars, makes a set: Alice + Bob
- for sets, makes a new set: Employer + Employee
- for relations, combines maps: likes + Alice->Bob
Product
for scalars a and b, a->b is a tuple
for sets s and t, s->t is the Cartesian product
in general, if p contains (p1, ..., pn) and q contains (q1, ..., qm), then p->q contains (p1, ..., pn, q1, ..., qm)
examples:
Birthday = Alice->May1 + Bob->Jan4 + Carol->Dec9
Employee->Employee in likes
join definition
if \( p \) contains \((p_1, \ldots, p_{n-1}, p_n)\) and \( q \) contains \((q_1, \ldots, q_{m})\) and \( p_n = q_1 \) then
\( p \cdot q \) contains \((p_1, \ldots, p_{n-1}, q_2, \ldots, q_m)\)
constraints
\[
\text{arity}(p) + \text{arity}(q) > 2 \\
\text{right-type}(p) = \text{left-type}(q)
\]
definition
\( b \cdot d \) then \( b \cdot d \) contains \((p_1, \ldots, p\_n-1, q_2, \ldots, q\_m)\)
and \( p_n = q_1 \)
and \( q \) contains \((q_1, \ldots, q\_m)\)
If \( p \) contains \((p_1, \ldots, p\_n)\)
join
join examples
\[
\text{Alice} \cdot \text{likes} = \{(\text{BOB})\}; \quad \text{likes} \cdot \text{Alice} = \{\}
\]
\[
\text{likes} \cdot \text{birthday} = \{(\text{ALICE, JAN4}), (\text{BOB, DEC9}), (\text{CAROL, JAN4})\}
\]
\[
\text{bb0} \cdot \text{birthdayRecords} = \{(\text{ALICE, MAY1}), (\text{BOB, JAN4})\}
\]
\[
\text{Alice} \cdot (\text{bb0} \cdot \text{birthdayRecords}) = \{(\text{MAY1})\}
\]
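To see how the second example is computed (a step-by-step check of ours, not part of the original slides): each tuple of likes whose last atom matches the first atom of a birthday tuple contributes one joined tuple:
\[
\begin{aligned}
(\text{alice, bob}) \cdot (\text{BOB, JAN4}) &\rightsquigarrow (\text{ALICE, JAN4}) \\
(\text{bob, carol}) \cdot (\text{CAROL, DEC9}) &\rightsquigarrow (\text{BOB, DEC9}) \\
(\text{carol, bob}) \cdot (\text{BOB, JAN4}) &\rightsquigarrow (\text{CAROL, JAN4})
\end{aligned}
\]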
special cases
- for binary relations \( p \) and \( q \), \( p \cdot q \) is the standard composition of \( p \) and \( q \)
- for a set \( s \) and binary relation \( r \), \( s \cdot r \) is the image of \( s \) under \( r \)
- for a binary relation \( r \) of type \( (S, T) \): \( r \cdot T \) is the left-set of \( r \), and \( S \cdot r \) is the right-set of \( r \)
Join variants
- for non-binary relations, join is not associative
Syntactic variants of join
- equivalent expressions: \( [d]b = b :: d = b \cdot d \)
- binding power: :: binds most tightly, then ., then []
- hence \( (r \cdot b) \cdot d = r :: b.d \) and \( (b \cdot d) \cdot r = [r]b.d \)
- examples: Alice.bb0.birthdayRecords, Alice.bb0::birthdayRecords, Alice.bb0.[birthdayRecords]
transpose
for relation \( r: (S,T) \)
\[ \sim r \text{ contains } (b,a) \text{ whenever } r \text{ contains } (a,b) \]
\[ \sim r \text{ has type } (T,S) \]
a theorem
for set \( s \) and binary relation \( r \),
\[ r.s = s.\sim r \]
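A quick check of the theorem with the running example (our own verification, not from the slides): take \( r = \text{likes} \) and \( s = \text{Bob} \). Since exactly alice and carol like bob,
\[
\text{likes} \cdot \text{Bob} = \{(\text{ALICE}), (\text{CAROL})\} = \text{Bob} \cdot \sim\text{likes}
\]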
override
for relations \( p, q: (S, T) \), the override \( p \mathbin{+\!+} q \) contains \( (a, b) \) when
- \( q \) contains \( (a, b) \), or
- \( p \) contains \( (a, b) \) and \( q \) does not map \( a \)
Closure
for relation $r$: $(T,T)$
- $^r = r + r.r + r.r.r + r.r.r.r + \ldots$ is the smallest transitive relation containing $r$
- $*r = \text{iden}[T] + r + r.r + r.r.r + \ldots$ is the smallest reflexive & transitive relation containing $r$
Examples
- precedes = ~^next
- reaches = *connects
- ancestor = ^parent
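As a concrete illustration (our own example, not from the slides), for next = {(a, b), (b, c)}:
\[
\begin{aligned}
{}^\wedge\text{next} &= \{(a,b), (b,c), (a,c)\} \\
{*}\text{next} &= \{(a,a), (b,b), (c,c), (a,b), (b,c), (a,c)\}
\end{aligned}
\]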
<table>
<thead>
<tr>
<th><strong>Operator</strong></th>
<th><strong>Function</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><code>not F</code></td>
<td>Logical NOT</td>
</tr>
<tr>
<td><code>F \land G</code></td>
<td>Logical AND</td>
</tr>
<tr>
<td><code>F \lor G</code></td>
<td>Logical OR</td>
</tr>
<tr>
<td><code>F \rightarrow G</code></td>
<td>Implication</td>
</tr>
<tr>
<td><code>F \equiv G</code></td>
<td>Bi-implication</td>
</tr>
<tr>
<td><code>\forall p</code></td>
<td>Universal quantification</td>
</tr>
<tr>
<td><code>\exists p</code></td>
<td>Existential quantification</td>
</tr>
</tbody>
</table>
**Notes:**
- `\forall p`: For all `p`
- `\exists p`: There exists `p`
- `F \rightarrow G`: If `F` then `G`
- `F \equiv G`: `F` if and only if `G`
navigation expressions
from 2-relations and the operators + ^ * ~ . , interpret as path-sets
- follow p then q: p.q
- follow p or q: p + q
- follow p once or more: ^p
- follow p zero or more times: *p
- follow p backwards: ~p
example
- cousin = parent.sibling.~parent
- daniel.sibling.sibling
[Figure: family tree over tim, claudia, emily and daniel, with sibling and spouse arcs, illustrating the evaluation of daniel.sibling.sibling]
a navigation example
to say: all messages queued on links emanating from a node have a from field of that node
we can write
\[ \text{all } n: \text{Node} \mid n.\sim\text{source}.\text{queue}.\text{elt}.\text{from} = n \]
or equivalently
\[ \sim\text{source}.\text{queue}.\text{elt}.\text{from} \ \text{in} \ \text{iden}[\text{Node}] \]
logical operators
standard connectives
\[ \text{not } F \]
\[ F \land G \quad \text{(also written } \{ F; G \} \text{)} \]
\[ F \lor G \]
\[ F \implies G \]
\[ F \iff G \]
if-then-else expressions
\[ F \implies G \text{ else } H, \quad \text{also written } F \implies G, H \]
negated operators
\[ e \neq e', \quad e \notin e' \]
set declarations
form
var : [set | option] setexpr
examples
- Employee : set Person — Employee is a subset of Person
- p : Person — p is a scalar in Person
- bb : Person -> Date — not unary, so no scalar constraint
meaning
- v : e — same meaning as v in e and #v = 1 (for unary e)
- v : set e — same meaning as v in e
- v : option e — same meaning as v in e and #v <= 1
Relation declarations
form
var : expr [mult] -> [mult] expr
multiplicity symbols
- ? zero or one
- ! exactly one
- + one or more
Examples
- r : A ->? B — r is a partial function
- r : A ->! B — r is a total function
- r : A !->! B — r is a bijection
meaning
- r : e0 m -> n e1 means: for each e0, n e1's; and for each e1, m e0's
Object models
What is an object model?
- A set of declarations drawn as a graph.
- Boxes denote sets, arcs represent relations.
- Parentless boxes have an implicit type.
Example:
- **Person**: set PERSON
- **Company**: set COMPANY
- **Employee**: set Person
- **worksFor**: Employee →! Company
[Diagram: Employee drawn as a subset box of Person, with a worksFor arc from Employee to Company]
comprehensions
General form
\[ \{ v_0 : e_0, v_1 : e_1, \ldots \mid E \} \]
meaning
the relation containing the tuples \( (a_0, a_1, \ldots) \) with \( (a_0) \) in \( e_0 \), \( (a_1) \) in \( e_1 \), etc., such that \( E \) holds when \( v_0 = \{(a_0)\}, v_1 = \{(a_1)\}, \ldots \)
example
\[ \text{sibling} = \{ a, b : \text{Person} \mid a.\text{parents} = b.\text{parents} \ \& \ a \neq b \} \]
quantifiers
universal quantification
General form
\[ \text{all } v_0: e_0, v_1: e_1, \ldots \mid F \]
Example
\[ \text{all } a: \text{Person} \mid a \notin a.\text{parents} \]
other quantifiers
for a quantified formula over $x: e$:
\begin{align*}
\text{all } x: e \mid F & \quad F \text{ holds for all } x \in e \\
\text{some } x: e \mid F & \quad F \text{ holds for some } x \in e \\
\text{no } x: e \mid F & \quad F \text{ holds for no } x \in e \\
\text{sole } x: e \mid F & \quad F \text{ holds for at most one } x \in e \\
\text{one } x: e \mid F & \quad F \text{ holds for exactly one } x \in e
\end{align*}
note: quantifiers may also range over several declarations, as in $\text{one } v_0: e_0, v_1: e_1, \ldots \mid F$
quantified expressions
for quantifier Q and expression e, make formula
\[ Q \ e \]
meaning
- **some** \( e \) e is non-empty \( \#e > 0 \)
- **no** \( e \) e is empty \( \#e = 0 \)
- **sole** \( e \) e has at most one tuple \( \#e \leq 1 \)
- **one** \( e \) e has one tuple \( \#e = 1 \)
example
**no** Man & Woman — no person is both a man and a woman
sample quantifications
biological constraints
- no p: Person | p in p.parents
- all p: Person | one p.mother
cultural constraints
- all p: Person | sole p.spouse
- no p: Person | some p.spouse & p.siblings
biblical constraints
- one eve: Person | Person in eve.*~mother
summary: doing more with less
- everything's a relation: sets, scalars and tuples are all relations, so \( r \) alone does the work of set-theoretic notations such as \( (a, b) \in r \)
- first-order: \( r : A \rightarrow B \) replaces \( r \in \mathcal{P}(A \times B) \)
- dot operator plays many roles
[Figure: spectrum from inexpressive but tractable logics to expressive but intractable ones]
exercise
write a constraint on an undirected graph that says it is acyclic.
[Figure: a sample graph and a solution]
higher-order quantifiers
examples
\begin{align*}
\text{all } r: S \rightarrow T \mid F & \quad F \text{ holds for all relations } r \text{ of type } (S, T) \\
\text{all } s: \text{set } S \mid F & \quad F \text{ holds for all subsets } s \text{ of } S \\
\text{all } s: \text{option } S \mid F \\
\text{all } s: S \mid F
\end{align*}
model checking
- only low-level datatypes: must encode in records, arrays; no transitive closure, etc.
- built-in communications: fixed topology of processes; not suited for abstract schemes
- modularity missing at operation level
- culture of model checking: emphasis is finding showstopper flaws; but in software, essence is incremental modelling
- keep counters, discard model, or vice versa?
The GCC Quad-Precision Math Library
Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301, USA
Copyright © 2010-2021 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with the Front-Cover Texts being “A GNU Manual,” and with the Back-Cover Texts as in (a) below. A copy of the license is included in the section entitled “GNU Free Documentation License.”
(a) The FSF’s Back-Cover Text is: “You have the freedom to copy and modify this GNU manual.”
Short Contents
Introduction ............................................................... 1
1 Typedef and constants ............................................ 3
2 Math Library Routines ............................................. 5
3 I/O Library Routines ............................................... 9
GNU Free Documentation License ............................... 11
4 Reporting Bugs ...................................................... 19
Table of Contents
Introduction .................................................... 1
1 Typedef and constants ................................. 3
2 Math Library Routines ................................. 5
3 I/O Library Routines ................................. 9
3.1 `strtoflt128` — Convert from string ................. 9
3.2 `quadmath_snprintf` — Convert to string .......... 9
GNU Free Documentation License .................. 11
ADDENDUM: How to use this License for your documents ... 18
4 Reporting Bugs ........................................... 19
Introduction
This manual documents the usage of libquadmath, the GCC Quad-Precision Math Library Application Programming Interface (API).
1 Typedef and constants
The following data type has been defined via `typedef`.
`__complex128`: __float128-based complex number
The following macros are defined, which give the numeric limits of the `__float128` data type.
- `FLT128_MAX`: largest finite number
- `FLT128_MIN`: smallest positive number with full precision
- `FLT128_EPSILON`: difference between 1 and the next larger representable number
- `FLT128_DENORM_MIN`: smallest positive denormalized number
- `FLT128_MANT_DIG`: number of digits in the mantissa (bit precision)
- `FLT128_MIN_EXP`: maximal negative exponent
- `FLT128_MAX_EXP`: maximal positive exponent
- `FLT128_DIG`: number of decimal digits in the mantissa
- `FLT128_MIN_10_EXP`: maximal negative decimal exponent
- `FLT128_MAX_10_EXP`: maximal positive decimal exponent
The following mathematical constants of type `__float128` are defined.
- `M_Eq`: the constant e (Euler’s number)
- `M_LOG2Eq`: binary logarithm of e
- `M_LOG10Eq`: common, decimal logarithm of e
- `M_LN2q`: natural logarithm of 2
- `M_LN10q`: natural logarithm of 10
- `M_PIq`: pi
- `M_PI_2q`: pi divided by two
- `M_PI_4q`: pi divided by four
- `M_1_PIq`: one over pi
- `M_2_PIq`: two over pi
- `M_2_SQRTPIq`: two over square root of pi
- `M_SQRT2q`: square root of 2
- `M_SQRT1_2q`: one over square root of 2
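As a quick sanity check of these constants and limits, consider the following snippet (an illustrative example of ours, not from the manual; it uses `quadmath_snprintf`, described in section 3.2, and is typically linked with `-lquadmath`):
```c
#include <quadmath.h>
#include <stdio.h>

int main (void)
{
  char buf[128];

  /* Print pi with full quad precision.  */
  quadmath_snprintf (buf, sizeof buf, "%.33Qe", M_PIq);
  printf ("pi = %s\n", buf);

  /* FLT128_EPSILON is the gap between 1 and the next representable
     number, so 1 + epsilon must compare greater than 1.  */
  printf ("1 + eps > 1: %d\n", (__float128) 1 + FLT128_EPSILON > 1);
  return 0;
}
```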
2 Math Library Routines
The following mathematical functions are available:
acosq: arc cosine function
acoshq: inverse hyperbolic cosine function
asinq: arc sine function
asinhq: inverse hyperbolic sine function
atanq: arc tangent function
atanhq: inverse hyperbolic tangent function
atan2q: arc tangent function
cbrtq: cube root function
ceilq: ceiling value function
copysignq: copy sign of a number
coshq: hyperbolic cosine function
cosq: cosine function
erfq: error function
erfcq: complementary error function
exp2q: base 2 exponential function
expq: exponential function
expm1q: exponential minus 1 function
fabsq: absolute value function
fdimq: positive difference function
finiteq: check finiteness of value
floorq: floor value function
fmaq: fused multiply and add
fmaxq: determine maximum of two values
fminq: determine minimum of two values
fmodq: remainder value function
frexpq: extract mantissa and exponent
hypotq: Euclidean distance function
ilogbq: get exponent of the value
isnanq: check for not a number
issignalingq: check for signaling not a number
j0q: Bessel function of the first kind, first order
j1q: Bessel function of the first kind, second order
jnq: Bessel function of the first kind, n-th order
ldexpq: load exponent of the value
lgammaq: logarithmic gamma function
llrintq: round to nearest integer value
llroundq: round to nearest integer value away from zero
logbq: get exponent of the value
logq: natural logarithm function
log10q: base 10 logarithm function
log1pq: compute natural logarithm of the value plus one
log2q: base 2 logarithm function
lrintq: round to nearest integer value
lroundq: round to nearest integer value away from zero
modfq: decompose the floating-point number
nanq: return quiet NaN
nearbyintq: round to nearest integer
nextafterq: next representable floating-point number
powq: power function
remainderq: remainder function
remquoq: remainder and part of quotient
rintq: round-to-nearest integral value
roundq: round-to-nearest integral value, return __float128
scalblnq: compute exponent using FLT_RADIX
scalbnq: compute exponent using FLT_RADIX
signbitq: return sign bit
sincosq: calculate sine and cosine simultaneously
sinhq: hyperbolic sine function
sinq: sine function
sqrtq: square root function
tanq: tangent function
tanhq: hyperbolic tangent function
tgammaq: true gamma function
truncq: round to integer, towards zero
y0q: Bessel function of the second kind, first order
y1q: Bessel function of the second kind, second order
ynq: Bessel function of the second kind, n-th order
cabsq: complex absolute value function
cargq: calculate the argument
cimagq: imaginary part of complex number
crealq: real part of complex number
cacoshq: complex arc hyperbolic cosine function
cacosq: complex arc cosine function
casinhq: complex arc hyperbolic sine function
casinq: complex arc sine function
catanhq: complex arc hyperbolic tangent function
catanq: complex arc tangent function
ccosq: complex cosine function
ccoshq: complex hyperbolic cosine function
cexpq: complex exponential function
cexpiq: computes the exponential function of “i” times a real value
clogq: complex natural logarithm
clog10q: complex base 10 logarithm
conjq: complex conjugate function
cpowq: complex power function
cprojq: project into Riemann Sphere
csinq: complex sine function
csinhq: complex hyperbolic sine function
csqrtq: complex square root
ctanq: complex tangent function
ctanhq: complex hyperbolic tangent function
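For example, a few of the real routines above can be combined as follows (our own illustrative snippet, not from the manual; link with `-lquadmath`):
```c
#include <quadmath.h>
#include <stdio.h>

int main (void)
{
  __float128 s, c;
  char buf[128];

  /* sincosq computes sine and cosine in a single call.  */
  sincosq (M_PI_4q, &s, &c);

  /* hypotq (s, c) should equal 1 for any angle.  */
  quadmath_snprintf (buf, sizeof buf, "%.33Qf", hypotq (s, c));
  printf ("hypot(sin, cos) = %s\n", buf);
  return 0;
}
```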
3 I/O Library Routines
3.1 strtoflt128 — Convert from string
The function `strtoflt128` converts a string into a `__float128` number.
Syntax
```c
__float128 strtoflt128 (const char *s, char **sp)
```
Arguments:
- `s` input string
- `sp` the address of the next character in the string
The argument `sp` contains, if not NULL, the address of the next character following the parts of the string, which have been read.
Example
```c
#include <quadmath.h>
int main ()
{
__float128 r;
r = strtoflt128 ("1.2345678", NULL);
return 0;
}
```
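A variant showing the `sp` argument in action (our own illustrative sketch, not from the manual):
```c
#include <quadmath.h>
#include <stdio.h>

int main (void)
{
  const char *input = "1.2345678 rest";
  char *end;
  __float128 r = strtoflt128 (input, &end);

  /* end now points at the first character not consumed: " rest".  */
  printf ("parsed up to: \"%s\"\n", end);
  return 0;
}
```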
3.2 quadmath_snprintf — Convert to string
The function `quadmath_snprintf` converts a `__float128` floating-point number into a string. It is a specialized alternative to `snprintf`, where the format string is restricted to a single conversion specifier with `Q` modifier and conversion specifier `e`, `E`, `f`, `F`, `g`, `G`, `a` or `A`, with no extra characters before or after the conversion specifier. The `%m$` or `*m$` style must not be used in the format.
Syntax
```c
int quadmath_snprintf (char *s, size_t size, const char *format, ...)
```
Arguments:
- `s` output string
- `size` byte size of the string, including trailing NUL
- `format` conversion specifier string
Note
On some targets when supported by the C library hooks are installed for `printf` family of functions, so that `printf ("%Qe", 1.2Q);` etc. works too.
Example
```c
#include <quadmath.h>
#include <stdlib.h>
#include <stdio.h>
int main ()
{
__float128 r;
int prec = 20;
  int width = 46;
  char buf[128];
  r = 2.0q;
  r = sqrtq (r);
  int n = quadmath_snprintf (buf, sizeof buf, "%+-#*.20Qe", width, r);
  if ((size_t) n < sizeof buf)
    printf ("%s\n", buf);
  /* Prints: +1.41421356237309504880e+00 */
  n = quadmath_snprintf (buf, sizeof buf, "%Qa", r);
  if ((size_t) n < sizeof buf)
    printf ("%s\n", buf);
  /* Prints: 0x1.6a09e667f3bcc908b2fb1366ea96p+0 */
  n = quadmath_snprintf (NULL, 0, "%+-#46.*Qe", prec, r);
  if (n > -1)
    {
      char *str = malloc (n + 1);
      if (str)
        {
          quadmath_snprintf (str, n + 1, "%+-#46.*Qe", prec, r);
          printf ("%s\n", str);
          /* Prints: +1.41421356237309504880e+00 */
        }
      free (str);
    }
  return 0;
}
```
GNU Free Documentation License
Version 1.3, 3 November 2008
http://fsf.org/
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.
The “publisher” means any person or entity that distributes copies of the Document to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled “History” in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the “History” section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled “Endorsements”. Such a section may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.”
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
11. RELICENSING
“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A “Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of copyrightable works thus published on the MMC site.
“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.
“Incorporate” means to publish or republish a Document, in whole or in part, as part of another Document.
An MMC is “eligible for relicensing” if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
```
Copyright (C) year your name.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. A copy of the license is included in the section entitled ‘‘GNU
Free Documentation License’’.
```
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
```
with the Invariant Sections being list their titles, with
the Front-Cover Texts being list, and with the Back-Cover Texts
being list.
```
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
4 Reporting Bugs
Bugs in the GCC Quad-Precision Math Library implementation should be reported via http://gcc.gnu.org/bugs/.
Automatic Design Validation Framework for HDL Descriptions via RTL ATPG *
Liang Zhang and Michael Hsiao
Department of ECE, Virginia Tech
Blacksburg, VA, 24061, USA
{liang,hsiao}@vt.edu
Indradeep Ghosh
Fujitsu Laboratories of America Inc.
Sunnyvale, CA, 94086, USA
ighosh@fla.fujitsu.com
Abstract
We present a framework for high-level design validation using an efficient register-transfer level (RTL) automatic test pattern generator (ATPG). The RTL ATPG generates the test environments for validation targets, which include variable assignments, conditional statements, and arithmetic expressions in the HDL description. A test environment is a set of conditions that allow for full controllability and observability of the validation target. Each test environment is then translated to validation vectors by filling in the unspecified values in the environment. Since the observability of the error effect is naturally handled by our ATPG, our approach is superior to methods that only focus on the excitation of HDL descriptions. The experimental results on ITC99 benchmark circuits and an industrial circuit demonstrate that very high design error coverage can be obtained in small CPU time.
1. Introduction
Despite the advances of formal verification methods (such as equivalence checking, model checking, theorem proving, etc.) in recent years, simulation remains the primary approach for design validation, especially at the high abstraction levels of the circuit. The keys to a simulation-based validation approach are coverage metrics and vector generation algorithms. The most common coverage metrics are statement, branch, and condition coverage, adopted from software testing [1]. In [4], the authors proposed a simulation-based ATPG approach for design validation, in which the input VHDL code is first instrumented so that the execution trace can be captured; the instrumented VHDL is then simulated using a commercial simulator. The underlying procedure interacts with the simulator via the trace files and tries to generate vectors that maximize statement coverage. Another algorithm [9] performs mutant analysis and tries to generate vectors that capture every injected mutant. The authors in [10] propose the analysis of paths in the input HDL code. Each path starts at a variable definition and ends at a usage of that variable. Constraints are formed from the path and provided to a constraint solver to generate each validation test. The authors in [3] propose a genetic algorithm (GA) based approach to automatically generate test programs for microprocessor cores. A gate-level fault simulator is used to evaluate the fitness of generated test programs and to guide the GA process; the gate-level implementation must be available for this approach. In [6], the authors use integer linear programming and Boolean satisfiability methods to generate vectors that achieve high tag (an observability-enhanced statement) coverage [7]. Tag coverage is a better coverage metric than traditional software coverages in that the propagation of error effects to an observation point is considered. In [8], the authors proposed a deterministic RTL-ATPG algorithm, which is able to efficiently generate logic-level stuck-at tests from an RTL HDL description. The algorithm utilizes a 9-valued algebra to perform symbolic justification and propagation to derive the test environment. Once a test environment is obtained, precomputed test vectors can be plugged into it to derive the complete set of test vectors. In order to handle control-intensive circuits more effectively, the authors in [11] extended the 9-valued algebra to a 10-valued algebra and introduced several high-level heuristics that efficiently use finite state machine (FSM) information to guide the ATPG search process.
While the RTL ATPG algorithm was originally targeted for manufacturing tests, we have discovered that with some modifications, the ATPG algorithm can be extended to generate validation vectors efficiently. The original ATPG targets every construct that is synthesized to the structural RTL netlist. However, for design validation purposes, we only need to target constructs that directly map to variable assignment, arithmetic expressions, and conditional constructs at the behavioral level. The RTL algebra has been augmented to handle embedded counters more efficiently. In addition, in our work, test environment relaxation is used to help validation of additional portions of the design that may be hard to test. As a result, our approach is superior to the test generation algorithm in [8] and is able to generate more compact validation vectors. Our approach is very different from the previous approaches. Most of the previous techniques do not consider
---
*This research was supported in part by NSF grants CCR-0196470, CCR-0305881, and a grant from Fujitsu Labs of America
the observability issues; instead, they focus on maximizing the excitation of the potential error sites. Compared to the approach described in [6], which targets tag coverage, our test generation method improves upon the tag simulation process and is able to handle much larger designs. Lastly, the back-end test environment translator has been enhanced to maximize design error detection. As a result, the generated vectors are more compact and more powerful for design validation.
Note that when deriving validation vectors automatically at the RTL, we are working from the implementation, which may itself be buggy. Nevertheless, the derived vectors are useful in the following scenarios:

(i) If a golden RTL model is present, test vectors may be derived from it and used to check the output responses of an implementation.

(ii) Test vectors obtained from an implementation can be applied to its executable behavioral specification (if available) and will produce differing outputs whenever a bug is excited and propagated in the implementation. (This is the scenario assumed in our framework.)

(iii) The vectors may be used to validate the logic-level circuit derived from an RTL description if formal equivalence checking fails (i.e., the RTL description and the gate-level implementation are structurally different). Thus, automatically generated validation test benches at the RTL can aid the verification process to a large extent.
The rest of the paper is organized as follows: Section 2 presents an overview of our design validation framework, Section 3 describes our validation vector generation algorithm, Section 4 reports the experimental results, and Section 5 concludes the paper.
2. Design Validation Framework
Figure 1 shows an overview of our design validation framework. First, the input HDL description (either VHDL or Verilog) is compiled into an internal structural RTL representation based on Assignment Decision Diagrams (ADD) [2]. Then, the ATPG procedure generates the validation tests and reports the achieved tag coverage. An HDL test bench wrapper is generated to facilitate simulation. Next, a commercial HDL simulator is invoked to simulate the validation vectors. Finally, the responses are captured and compared against the responses from the specification by the checker program.
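As a rough illustration of how these stages chain together, the sketch below drives the pipeline end to end. All function names and file names here are hypothetical stand-ins for the framework's components, not its actual interfaces:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stubs standing in for the components of Figure 1; a real
   system would invoke the ADD compiler, the ATPG engine, a commercial
   HDL simulator, and the executable specification here. */
static const char *compile_to_add(const char *hdl)  { (void)hdl; return "add-model"; }
static const char *generate_vectors(const char *m)  { (void)m;   return "vectors";   }
static const char *wrap_testbench(const char *v)    { (void)v;   return "tb.vhd";    }
static const char *simulate(const char *tb)         { (void)tb;  return "responses"; }
static const char *run_spec(const char *v)          { (void)v;   return "responses"; }

int main(void)
{
    /* Pipeline: compile -> ATPG -> test bench wrapper -> simulate -> check. */
    const char *model = compile_to_add("design.vhd");
    const char *vecs  = generate_vectors(model);   /* also reports tag coverage */
    const char *tb    = wrap_testbench(vecs);
    const char *impl  = simulate(tb);              /* implementation responses  */
    const char *gold  = run_spec(vecs);            /* specification responses   */
    printf("validation %s\n", strcmp(impl, gold) == 0 ? "passed" : "failed");
    return 0;
}
```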
2.1. Validation Metrics
In order to measure the thoroughness of the validation, we adopted Observability-Based Code Coverage (OBCC, or tag coverage) [7] into our framework. OBCC is superior to classical software testing coverage metrics (such as statement and branch coverage) in that it incorporates observability as well as controllability information into the simulation. The basic strategy of OBCC is the efficient computation of tag coverage. The tag is introduced in OBCC as a mechanism to extend standard coverage metrics so that observability can be computed: a tag is a symbol placed at a given location to represent the presence of an incorrect value. First, tags are injected at variable assignment statements and branch conditions; then the tags are propagated based on a set of calculus rules for the supported primitives. We augmented the original tag calculus with an unsigned (U) tag. Table 1 shows the augmented tag calculus for an adder, where +Δ represents a positive tag, -Δ a negative one, and ? means that the tag may be killed as a result of the operation during execution. A tag is declared observed only when +Δ, -Δ, or U has been successfully propagated to at least one PO.
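To make the calculus concrete, the following sketch encodes the adder rules of Table 1 (shown below) over a five-valued tag type: equal-sign tags reinforce, opposite signs may cancel, and an unsigned tag meeting any other tag becomes uncertain. The enum and function names are our own illustration, not the paper's implementation:

```c
#include <stdio.h>

/* Five tag states used in the OBCC calculus (a sketch; the actual tool
   tracks tags symbolically rather than with a C enum). */
typedef enum { TAG_NONE, TAG_POS, TAG_NEG, TAG_U, TAG_MAYBE } Tag;

/* Tag propagation through an adder, following Table 1 below. */
Tag adder_tag(Tag a, Tag b)
{
    if (a == TAG_NONE) return b;                      /* untagged operand   */
    if (b == TAG_NONE) return a;
    if (a == b && (a == TAG_POS || a == TAG_NEG))
        return a;                                     /* same sign: keeps it */
    return TAG_MAYBE;   /* +/- cancellation, U meeting a tag, or already ?  */
}

int main(void)
{
    printf("%d\n", adder_tag(TAG_POS, TAG_NEG));  /* prints 4 (TAG_MAYBE) */
    return 0;
}
```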
Table 1. Augmented Tag Calculus for an Adder

<table>
<thead>
<tr>
<th>Adder</th>
<th>b</th>
<th>b-Δ</th>
<th>b+Δ</th>
<th>b+U</th>
<th>b+?</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>a+b</td>
<td>a+b-Δ</td>
<td>a+b+Δ</td>
<td>a+b+U</td>
<td>a+b+?</td>
</tr>
<tr>
<td>a-Δ</td>
<td>a+b-Δ</td>
<td>a+b-Δ</td>
<td>a+b+?</td>
<td>a+b+?</td>
<td>a+b+?</td>
</tr>
<tr>
<td>a+Δ</td>
<td>a+b+Δ</td>
<td>a+b+?</td>
<td>a+b+Δ</td>
<td>a+b+?</td>
<td>a+b+?</td>
</tr>
<tr>
<td>a+U</td>
<td>a+b+U</td>
<td>a+b+?</td>
<td>a+b+?</td>
<td>a+b+?</td>
<td>a+b+?</td>
</tr>
<tr>
<td>a+?</td>
<td>a+b+?</td>
<td>a+b+?</td>
<td>a+b+?</td>
<td>a+b+?</td>
<td>a+b+?</td>
</tr>
</tbody>
</table>

3. Validation Vector Generation Algorithm

The core of our validation framework is an efficient RTL ATPG tool, which generates the validation vectors for a given HDL description. The justification and propagation of controllability and observability objectives are carried out symbolically using a 10-valued RTL algebra. The algebra, first proposed in [8] and then extended in [11], includes the following symbols:
Cg (general controllability) is the ability to control a variable to an arbitrary value.
C0 (controllability to zero) is the ability to control a variable to the value 0.
C1 (controllability to one) is the ability to control a variable to the value 1; i.e., "000...01".
Ca1 (controllability to all ones) is the ability to control the variable to all ones; i.e., "111...11".
Cq (controllability to a constant) is the ability to control the variable to any fixed constant.
Cz (controllability to the Z value) is the ability to control the variable to high-impedance Z.
Cs (controllability to a state) is the ability to control the state variable to a particular state.
Cp[a,b] (controllability to a particular range) is the ability to control the variable within the range of [a,b].
O (observability) is the ability to observe a fault at a variable.
O’ (complement observability) is defined for single-bit variables only; it signifies that the fault effect is observed in complemented (zero/one) form.
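The objectives manipulated by the ATPG are triples of signal, time frame, and algebra symbol, such as (RST, -2, C1) in the example of Section 3.1. A plausible C encoding, with illustrative names only (the paper does not show the tool's real data structures), might look like this:

```c
#include <stdio.h>

/* The 10-valued RTL algebra of Section 3. */
typedef enum {
    ALG_Cg, ALG_C0, ALG_C1, ALG_Ca1, ALG_Cq,
    ALG_Cz, ALG_Cs, ALG_Cp, ALG_O, ALG_OBAR
} Algebra;

/* An ATPG objective: (signal, time frame, algebra symbol). */
typedef struct {
    const char *signal;  /* e.g. "RST" or "ina"                  */
    int         frame;   /* relative time frame, may be negative */
    Algebra     alg;     /* required algebra symbol              */
    int         lo, hi;  /* range bounds, used only for ALG_Cp   */
} Objective;

int main(void)
{
    /* The objective (RST, -2, C1) from the example in Section 3.1. */
    Objective reset = { "RST", -2, ALG_C1, 0, 0 };
    printf("(%s, %d, algebra %d)\n", reset.signal, reset.frame, reset.alg);
    return 0;
}
```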
Figure 2 shows the test generation flow. First, a preprocessor builds a validation target list for the circuit, which includes all conditional, arithmetic, and assignment constructs. Next, the ATPG iterates through the list and generates a test environment for each target. If a test environment cannot be obtained for a given validation target, the ATPG tries to generate a relaxed test environment for it. After test environment generation stops, the back-end translator is invoked to generate the validation vectors from the test environments.
3.1. Test Environment Generation
The test environment is a set of conditions that allows controllability and observability of the validation target. Each test environment can be viewed as a symbolic path which starts from the PIs, traverses the target site, and reaches one or more POs or observable variables. The test environment generation process, shown in Figure 3, essentially searches for a feasible symbolic path through which the excitation objectives can be delivered to the target site and the error effect can be propagated to a PO.
```c
/* Figure 3: test environment generation (pseudocode).
   Candidate symbolic paths are tried one by one; the first path whose
   propagation and excitation objectives can all be justified yields
   the test environment. */
void generate_test_environment()
{
    while (select_symbolic_path() == TRUE) {
        inject_symbolic_path_objectives();        /* propagation objectives   */
        inject_symbolic_excitation_objectives();  /* controllability goals    */
        if (justify_all_objectives() == TRUE) {
            save_test_environment();              /* record justified algebra */
            return;
        }
    }
    /* no path could be justified: no complete test environment exists */
}
```
Consider the VHDL description and its structural RTL in Figure 4 as an example to illustrate the above algorithm. Assume that RST, ina and inb are PIs, and out is the only PO for the circuit. Suppose that the multiplier M3 is our current validation target. Our algorithm executes as follows:
1. Find the shortest propagation path M3 → M4 → C → M5 → OUT.
2. Propagation constraints (CTL1,1,C1) and (CTL2,2,C1) are injected. The first objective means that the C1 algebra is needed on signal CTL1 at time frame 1.
```
if RST = 1 then
  a := 1;
  b := 1;
  c := 1;
  i := 1;
  out := 1;
elsif clk'event and clk = 1 then
  i := i + 1;
  if (i = 2) then
    a := ina - 7;
    b := inb;
  elsif (i = 3) then
    c := a * b;
  elsif (i = 4) then
    out := c;
  end if;
end if;
```
Figure 4. Sample VHDL Code and Structural RTL
3. Excitation objectives (a,0,Cg) and (b,0,Cg) are injected at the inputs of the target M3. These objectives mean that the Cg algebra is needed on both signals a and b at time frame 0.
4. All objectives are justified. The test environment is generated as shown below.

\[
(\text{RST},-2,C1);\;(\text{RST},-1,C0);\;(\text{ina},-1,Cg);\;(\text{M0},-1,Cg);\;(\text{M1},0,Cg);\;(\text{a},0,Cg);
\]
\[
(\text{inb},-1,Cg);\;(\text{M2},0,Cg);\;(\text{b},0,Cg);\;(\text{RST},0,C0);\;(\text{RST},1,C0);\;(\text{RST},2,C0)
\]
Note that the test environment contains all of the justified algebra on the PIs, together with the Cg symbols on internal nodes. The fan-in nodes of those internal nodes are also included in the test environment.
### 3.2. Test Environment Translation
A generated test environment must be translated into validation vector(s) to be applicable to the design. Table 2 highlights partial results as the procedure proceeds. Suppose we need to apply 11 and 4 at the two inputs of the multiplier. First, all algebra symbols except the Cg symbols at the PIs are translated; for the above test environment, only the value of RST is determined at this step, as shown in the columns under the heading "STEP I". Second, the value 11 is plugged into (a,0,Cg); following the trace \((a,0,Cg) \rightarrow (M1,0,Cg) \rightarrow (M0,-1,Cg) \rightarrow (\text{ina},-1,Cg)\), the value is propagated backward to the PI. The value may need to be adjusted when propagated through certain types of RTL constructs. For example, the value 11 passes from (a,0) to (M0,-1) without any adjustment; however, the 11 on (M0,-1) implies 18 at (ina,-1), since M0 is a subtracter and the other operand is the constant 7. Similarly, the 4 is plugged into (b,0,Cg) and propagated to (inb,-1). The translation results are recorded in the columns under "STEP II". Finally, all unspecified PIs are filled with random numbers to form the fully specified test vectors; the last three columns show the final test vectors.
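The backward adjustment step can be illustrated in isolation. The sketch below (hypothetical names; the real translator walks the ADD-based netlist) justifies a required output value through a subtracter with a constant operand, reproducing the 11-implies-18 adjustment described above:

```c
#include <stdio.h>

/* Backward value justification through a two-input RTL module with one
   constant operand, as in STEP II of the translation (a := ina - 7).
   To justify value v at the output of "out = in - k", the input side
   must carry v + k. A sketch of the rule only. */
typedef enum { OP_SUB_CONST, OP_PASS } OpKind;

long justify_backward(OpKind op, long required, long k)
{
    switch (op) {
    case OP_SUB_CONST: return required + k;  /* out = in - k  =>  in = out + k */
    case OP_PASS:      return required;      /* registers/muxes pass unchanged */
    }
    return required;
}

int main(void)
{
    /* Justify 11 at node M0 (a := ina - 7): the PI "ina" must be 18. */
    long ina = justify_backward(OP_SUB_CONST, 11, 7);
    printf("ina = %ld\n", ina);   /* prints 18 */
    return 0;
}
```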
### Table 2. Test Environment Translation

<table>
<thead>
<tr>
<th>Time Frame</th>
<th>RST</th>
<th>ina</th>
<th>inb</th>
</tr>
</thead>
<tbody>
<tr>
<td>-2</td>
<td>1</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>-1</td>
<td>0</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>18</td>
<td>4</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>x</td>
<td>0</td>
</tr>
</tbody>
</table>

(RST is determined in STEP I; the values 18 and 4 are plugged in during STEP II; in STEP III every remaining x is replaced by a random value.)

### 3.3. Test Environment Relaxation

If the complete test environment cannot be derived using the above algorithm, the ATPG relaxes the controllability conditions and repeats the algorithm to generate a relaxed test environment. The benefit of test environment relaxation can be illustrated by the following example.
1. \( c := a + b; \)
2. \( c := a + d; \)
3. \( c := a - b; \)
Suppose statement 1 is the correct implementation, while statements 2 and 3 are erroneous versions. With a complete test environment for statement 1, we can fully control the values of \(a\) and \(b\). In other words, by enforcing the values of \(a\) and \(b\) to be different from those of any other signals, the erroneous values computed for \(c\) in statements 2 and 3 are guaranteed to differ from the correct one. The errors will be propagated to a PO by the test environment, and the design error will be captured.
Now suppose that the ATPG cannot find a complete test environment for statement 1 and produces a relaxed version instead. Although the relaxed test environment does not guarantee full controllability over the operands, the detection of the error in statement 3 can still be assured as long as the value of \(b\) is not zero. However, a relaxed test environment should not always replace the complete one, since it can only conditionally detect the error in statement 2: detection is contingent upon the values of \(b\) and \(d\) being different.
### 3.4. Techniques to Maximize the Error Detection
In the previous example, if the back-end translator accidentally sets the value of \(b\) equal to the value of \(d\), the generated validation vectors will not detect the erroneous implementation of statement 2. To remedy this, while translating the test environment, the ATPG keeps a list of the values that have been assigned to signals and enforces exclusiveness of signal values as much as possible, so that signal substitution errors will be detected.
For relational operations, we maximize error detection by plugging three properly selected values into the test environment. The signal values are determined as follows:
1. If the value of one input signal is fixed to \(k\), then apply 3 values of \(k + 1\), \(k\), and \(k - 1\) to the other input signal.
2. If no input is fixed, first select a unique value to one input, then follow step 1 for the other input.
For example, in validating the condition \((a > 3)\), we need to check (1) the correct use of the "greater-than" operator, and (2) that the value 3 is the correct boundary of the condition. In other words, we need to differentiate it from the following implementations, where \( k \) stands for any value and \( n \) stands for any value but 3:
\[
(a \geq k), (a \neq k), (a < k), (a \leq k), \text{ and } (a > n)
\]
Following the above rules, we know that 4, 3, and 2 must be plugged into the generated test environment for signal \( a \), yielding 3 different validation sequences. Although none of the 3 sequences can individually detect all of the bugs, collectively they capture all of them. Note that, from the tag coverage [6] point of view, requiring all three validation sequences for \((a > 3)\) is overkill, because for each condition (or statement) two sequences are sufficient to capture both the positive and the negative tag on it. As a result, the approach in [6] cannot guarantee the detection of errors associated with relational operators.
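The probe-selection rule is simple enough to state as code. The following sketch (illustrative only) produces the three values prescribed by the rules above; for the condition (a > 3) it yields exactly 4, 3, and 2:

```c
#include <stdio.h>

/* Probe-value selection for validating a relational condition (a > k),
   per the rules in Section 3.4: apply k+1, k, and k-1 so that, taken
   together, the three sequences distinguish >, >=, <, <=, != and a
   wrong boundary constant. */
void relational_probes(long k, long probes[3])
{
    probes[0] = k + 1;   /* true for >, separates it from < and <=        */
    probes[1] = k;       /* boundary: separates > from >= and from !=     */
    probes[2] = k - 1;   /* false for >, separates from a wrong boundary  */
}

int main(void)
{
    long p[3];
    relational_probes(3, p);                    /* the (a > 3) example */
    printf("%ld %ld %ld\n", p[0], p[1], p[2]);  /* prints: 4 3 2       */
    return 0;
}
```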
4. Experiments
We applied our framework to 10 ITC99 [5] benchmark circuits, as well as to GPIO, an industrial general-purpose input/output bus controller, on a 2.0 GHz Pentium 4 with 512 MB RAM running the Linux operating system. For each VHDL description, we manually injected 15 to 30 bugs covering the most typical design errors, such as missing case statements, missing signals, wrong signals, wrong variable values, wrong ordering of nested if statements, and wrong operation types.
Table 3 shows the circuit characteristics and our RTL ATPG results. Note that our framework works on a given HDL description alone; no gate-level implementation is needed. However, we include the gate-level characteristics in the table to show the complexity of each design. For each circuit, the total number of VHDL lines is reported first, followed by the number of logic gates corresponding to the VHDL and the number of flip-flops. Then, the number of validation vectors generated by our RTL ATPG and the execution time (in seconds) are reported. For example, circuit B11 has 118 lines of VHDL code and can be synthesized to 397 gates at the gate level, containing 30 FFs. Using the VHDL alone, our RTL ATPG generated 193 validation vectors in only 1.46 seconds.
Note that the test generation times of our method are orders of magnitude smaller than those reported in [6] for similarly sized circuits. A direct comparison is not possible, as the circuits used in [6] are not publicly available.
Figures 5 and 6 show the tag coverage and bug coverage, respectively, for each circuit. Both random validation and our approach are reported, and the bug coverage is defined as the ratio of the number of detected bugs to the total number of injected bugs. For the random approach, 5000 random vectors were applied to each circuit.
For all circuits, our RTL ATPG outperformed random generation of validation vectors. In fact, for the five largest circuits, our method achieved orders-of-magnitude better results in both tag and bug coverage. Note that since the number of validation vectors we generated was significantly smaller than 5000, the validation time is reduced by the same ratio. For example, in circuit B08, with only 100 vectors, we were able to achieve 93.3% tag coverage in under 1 second of computation, while 5000 random vectors achieved only 2.2% tag coverage; in terms of simulation time for validation, a 50-fold improvement was obtained. Increasing the number of random vectors beyond 5000 still would not help, so validation times several orders of magnitude lower can be achieved with our method.


Figure 7 shows the detailed validation results for the industrial circuit GPIO. The X-axis is the number of validation vectors applied, while the Y-axis is the coverage under the respective metrics. 30 design errors were injected in this circuit. Note that GPIO is not randomly validatable, as shown by the curves for random vectors. Our approach was able to obtain both high tag coverage and high bug coverage. We can also clearly observe the close correlation between the achieved tag coverage and bug coverage, both for random vectors and for vectors generated by our approach. The tag coverage is slightly more pessimistic than the bug coverage, in that the reported coverage is lower; in other words, the actual bug coverage is generally higher than the tag coverage. Nevertheless, the two coverage metrics track each other very well.
Figure 7. Coverages on GPIO
We also ran the vectors generated by our approach through the TransEDA [12] coverage analysis tool to obtain the statement, branch, condition, and tag coverages. The results are reported in Table 4. We can see that for most circuits the three traditional coverage metrics (statement, branch, condition) are overly optimistic measures, and that tag coverage is superior in that it better reflects the bug coverage. In B05, due to large redundancies in both the VHDL and the gate-level implementation, the coverages are low. In most other circuits, such as GPIO, the 100% measures would provide little confidence in the effectiveness of the validation vectors. Note that these traditional coverage metrics are currently used in industry to measure the level of design validation, and a number of commercial tools exist for this purpose.
Table 4. Coverages of Our Approach
<table>
<thead>
<tr>
<th>Circuit</th>
<th>State (%)</th>
<th>Branch (%)</th>
<th>Cond (%)</th>
<th>Tag (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>B01</td>
<td>100</td>
<td>100</td>
<td>100</td>
<td>98.4</td>
</tr>
<tr>
<td>B02</td>
<td>100</td>
<td>100</td>
<td>100</td>
<td>98.2</td>
</tr>
<tr>
<td>B03</td>
<td>100</td>
<td>100</td>
<td>100</td>
<td>98.6</td>
</tr>
<tr>
<td>B04</td>
<td>100</td>
<td>100</td>
<td>100</td>
<td>98.4</td>
</tr>
<tr>
<td>B05</td>
<td>65.4</td>
<td>61</td>
<td>n/a</td>
<td>41.1</td>
</tr>
<tr>
<td>B06</td>
<td>100</td>
<td>100</td>
<td>100</td>
<td>72.1</td>
</tr>
<tr>
<td>B07</td>
<td>97.9</td>
<td>97.2</td>
<td>n/a</td>
<td>91.2</td>
</tr>
<tr>
<td>B08</td>
<td>100</td>
<td>100</td>
<td>n/a</td>
<td>93.3</td>
</tr>
<tr>
<td>B10</td>
<td>94.7</td>
<td>95.3</td>
<td>90</td>
<td>100</td>
</tr>
<tr>
<td>B11</td>
<td>85.7</td>
<td>87.9</td>
<td>100</td>
<td>66.7</td>
</tr>
<tr>
<td>GPIO</td>
<td>100</td>
<td>100</td>
<td>100</td>
<td>76.6</td>
</tr>
</tbody>
</table>
5. Conclusion
We have presented an automatic design validation framework for HDL descriptions. The core of our framework is a modified RTL ATPG algorithm, which efficiently generates the validation vectors. Our approach is superior to existing approaches that target only the excitation of errors while ignoring their propagation. We also allow for relaxation of test environments, so that additional hard errors may be detected. Experiments show that our approach is able to generate high-quality validation vectors, which achieve both high tag coverage and high bug coverage at very low computational cost. Orders-of-magnitude improvements in coverage over random patterns were achieved.
References
Model-driven Structural Design of Software-intensive Systems Using SysML Blocks and UML Classes
Marcel da Silva Melo¹ and Michel S. Soares²
1Faculty of Computing, Federal University of Uberlândia, Uberlândia, Brazil
2Computing Department, Federal University of Sergipe, São Cristóvão, Sergipe, Brazil
Keywords: SysML Block diagram, UML Class diagram, Software Design, Model-Driven Software Engineering, ATL Transformation Language.
Abstract: One particular characteristic of software-intensive systems is that software is a fundamental component that operates together with other components. For the software design counterpart, in both structural and dynamic views, UML is one of the most widely used modeling languages. However, UML is weak in modeling the elements of a software-intensive system that are not software. This is the main reason why the Systems Modeling Language (SysML), a UML profile, was introduced by the OMG. One objective of this article is to combine the SysML Block diagram and the UML Class diagram to design the structural view of a software-intensive system architecture. A meta-model describing the relationship between the two diagrams and an automatic model-driven transformation using the ATL language are proposed. The evaluation was performed by applying the meta-model in practice to develop software-intensive systems in the field of road traffic management, as shown in the case study.
1 INTRODUCTION
Software-intensive systems (Tiako, 2008) (Hinchey et al., 2008) are large, complex systems in which software is an essential component, interacting with other elements such as other software, systems, devices, actuators, sensors and with people. As software is an essential part of these systems, it influences the design, deployment, and evolution of the system as a whole (ISO-IEC, 2007). Examples of software-intensive systems can be found in many sectors, such as manufacturing plants, transportation, telecommunication and health care.
Designing software-intensive systems is a challenging activity for many reasons. The very environment in which software-intensive systems act poses great challenges. Software-intensive systems are frequently used to control critical infrastructures in which any error, non-conformance, or even response delay may cause enormous financial damage or even jeopardize human life. Designing and creating models are important activities to improve communication between teams and to significantly diminish natural-language ambiguities (Ludewig, 2003). Typically, in Systems and Software Engineering, an artifact is considered to be a model if it has a graphical, formal, or mathematical representation (Bézivin, 2006).
Currently, there are a variety of modeling languages, methods, and techniques applied to all phases of software systems development. An extensive list of techniques for software design activities is presented in (Jiang et al., 2008). For instance, the structural elements of software have long been modeled using the Entity-Relationship model or simple block diagrams with unclear semantics (Edwards and Lee, 2003). There is no doubt that UML (OMG-UML, 2010) has been widely applied to the development of software in industry. Despite its relative success, the language has been heavily criticized (Bell, 2004) (France et al., 2006) (Andrè et al., 2007) (Soares and Vrancken, 2009). The most relevant criticism of UML regarding this article is that UML is weak in modeling elements of a software-intensive system that are not software. This is the main reason why SysML (OMG-SysML, 2010) was proposed. SysML is a systems modeling language that supports the specification, analysis, design, verification and validation of complex systems. The language is derived from UML, taking into account systems aspects such as hardware, information, processes and personnel.
The focus of this article is on the design phase of software-intensive systems, more specifically the design of the structural view of a software-intensive
system architecture. Therefore, it is of utmost importance that not only the software part of these systems is modeled, but also the other system elements. This article has two main objectives. The first one is to describe the introduction of SysML as a modeling language in the development process of distributed real-time software-intensive systems. More specifically, the SysML Block diagram is applied in practice to describe the structural architecture in the field of road-traffic management. SysML has been applied to a number of projects (Viehl et al., 2006) (Laleau et al., 2010) in various fields, such as large telescopes (Karban et al., 2008), car manufacturing (Balmelli et al., 2006), industrial process control applications (Hastbacka et al., 2011), and road traffic management systems (Soares et al., 2011).
The second objective is to propose a metamodel to describe the relationship between the SysML Block and the UML Class diagrams, which was not well-described in the SysML specification (OMG-SysML, 2010). This relationship is then implemented using a model-driven approach. An automatic transformation using the ATL language (Jouault and Kurtev, 2005) is performed based on the described metamodel.
Other model-driven approaches combining SysML and ATL are not frequent in the literature because of the novelty of these languages. One example can be found in (Colombo et al., 2012), in which an ATL-based transformation is performed considering only the SysML metamodel, i.e., the authors propose a transformation from analysis models to design models by refining the SysML diagrams. ATL was chosen in this research because it has been successfully applied to transformations in real applications, as described in the literature (Jouault et al., 2008) (Kim et al., 2012) (Goknil et al., 2014). In addition, ATL provides adequate tool support, as the language is part of the Eclipse project. The approach is applied to a practical application in the field of road-traffic management, in which important software-intensive systems are implemented in order to support modern life.
2 BASICS ON SysML BLOCK DIAGRAMS
SysML is considered both a subset and an extension of UML. As a subset, UML diagrams considered too specific to software (Object and Deployment diagrams) or redundant with other diagrams (Communication and Timing diagrams) were not included in SysML. Some diagrams are derived from UML without significant changes (State-Machine, Use Case, Sequence, and Package diagrams), other diagrams are derived with changes (Block, Activity, and Internal Block diagrams), and there are two new diagrams (Requirements and Parametric diagrams). As a matter of fact, SysML is compatible with UML, which can facilitate the integration of the disciplines of Software and Systems Engineering. Nevertheless, there is still a lack of research on using both languages together, and the boundaries and relationships between them are not yet clear.

The SysML Block (Figure 1) extends the UML Class by including additional elements and constraints. A SysML Block can be divided into named compartments, which can be defined specifically for each type of system. For instance, a compartment can represent properties, operations, or parts. A property can represent a role in the context of its enclosing block. A part belonging to a block defines a local usage of its defining block within the specific context to which the part belongs. Operations describe the behavior of a system.
SysML Blocks provide a general-purpose capability to describe the architecture of a system (OMG-SysML, 2010). The SysML Block diagram provides the ability to represent a system hierarchy. It can also represent parts of software-intensive systems at many levels of abstraction. Elements of a software-intensive system such as hardware, procedures, data, and persons are modeled with the SysML Block diagram.
The design of the system architecture is described by means of blocks, with a focus not only on the software structure of each system element, but also on the general structure, including the parts of each block and the constraints and properties not necessarily related to software. With the SysML Block diagram, a system’s elements are identified, together with their relationships, properties, and operations.
3 RELATING SysML BLOCKS WITH UML CLASS DIAGRAM
Once the structural view of the system has been defined from the systems engineering point of view, then software engineers have to map the system elements modeled as SysML Blocks to software classes and objects. The choice here is to use the UML Class diagram to represent the model of software classes. This choice is natural given that SysML and UML are modeling languages with roots on the same metamodel, the MOF (OMG, 2006). Therefore, physical elements of a system, modeled as SysML Blocks, are later implemented in a software system as a corresponding software object, once the physical element is included into the software architecture.
SysML Blocks are candidates to be refined into one or more UML Classes during the software design and implementation phases. However, the refinement is not automatic. This is a modeling activity that, by its own nature, has no strict rules. Some useful guidelines and a metamodel (Fig. 2) for the relationship are proposed in this section.
Properties, operations, and constraints are compartments proposed in the SysML specification. Properties are mapped to attributes of a class if they are related to states, or to methods if they are related to operations. Within SysML, all properties are public. For the mapping, good design practices of the object-oriented paradigm shall be applied (e.g., information hiding). Therefore, properties are most often mapped to private or protected attributes of a class.
Operations are mapped into at least one software method, normally as a public element of the class interface. The reason is that the system operation can actually be implemented using a group of software methods instead of only one. Systems and software engineers have to work together in order to decide the best solution for this mapping.
SysML Blocks are related to each other through associations. These can be normal associations, meaning that there is a relationship between the associated blocks, or stronger relationships, namely composition or aggregation. The semantic choice between the latter two types of association depends on strong ownership and on the coincident lifetime of the part and the whole.
Within the SysML Block diagram, properties, operations, receptions, parts, references and values are compartments of a block. In the proposed transformation, properties and values are mapped as attributes of a UML class. Operations and receptions are mapped as at least one method of a class, almost always as a public member, but may be mapped as a group of methods. Systems engineers and software engineers must work together with the purpose of deciding the solution to this mapping.
Parts and references are represented in the model as associations between blocks. In the SysML Block diagram, parts represent that the block to be implemented is composed of other blocks. This characteristic is represented in the UML Class diagram using the composition kind of association. References indicate that the block to be mapped uses other blocks, but is not composed of other blocks. This characteristic is mapped in the UML Class diagram using the aggregation kind of association. For all mapping, the associations and their cardinalities are kept unchanged.
All properties are public in SysML, but this is not advisable in UML. According to the concept of information hiding in the object-oriented paradigm, a differentiation between private and public elements is essential.
4 RULES SPECIFIED WITH ATL
ATL (Atlas Transformation Language) (Jouault and Kurtev, 2005) was chosen as the language for implementing the transformation from SysML Block diagrams to UML Class diagrams. ATL is a model-to-model transformation language. The choice was taken considering many aspects. ATL is part of the M2M Eclipse project with an active discussion group and many examples and case studies applied even in industry (Selim et al., 2012). As an Eclipse project, ATL proposes to the community a complete Eclipse IDE (Integrated Development Environment) coming along with the ATL language and core components. The language is one of the most mature technologies in the model-driven field of research (Brunelière et al., 2010). The Eclipse editor provides standard resources for the implementation, including syntax highlighting and debugger.
In order to create and execute the ATL transformation, two metamodels were used: the SysML Block diagram and UML Class diagram metamodels, exactly as proposed in the OMG specifications. Using these metamodels, ATL rules were defined with the purpose of transforming blocks into classes. Overall, ten ATL rules were implemented to create the transformation (see Table 1). Due to lack of space, only four of these rules are presented here, as follows.


The purpose of the Block2Class rule (Fig. 3) is to transform each SysML Block into a corresponding UML Class. The elements of a Block are transformed as well. Properties and values in a Block are transformed into class attributes, and operations and receptions are transformed into class methods. Specific rules for transforming operations and receptions are presented in Fig. 4.
Rules Operation2Method and Reception2Method (Fig. 4) have the objective of transforming operations and receptions into class methods. Operations and receptions are simple elements of a SysML Block, containing in their specifications only a name, a visibility, and parameters. Therefore, the transformations were quite simple, with only an additional rule, named ParameterAttribute2Parameter, to transform the parameters of operations into parameters of the corresponding methods.
Table 1: Rules in ATL.
<table>
<thead>
<tr>
<th>Rules</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model2Model</td>
<td>Transforms a SysML Block diagram into a UML Class diagram</td>
</tr>
<tr>
<td>Block2Class</td>
<td>Transforms each SysML Block into a UML Class</td>
</tr>
<tr>
<td>Property2Attribute</td>
<td>Transforms Block properties into Class attributes</td>
</tr>
<tr>
<td>Operation2Method</td>
<td>Transforms Block operations into Class methods</td>
</tr>
<tr>
<td>Reception2Method</td>
<td>Transforms Block receptions into Class methods</td>
</tr>
<tr>
<td>ParameterAttribute2Parameter</td>
<td>Transforms parameters of operations and attributes of receptions into parameters of methods of a Class</td>
</tr>
<tr>
<td>AssociationPartReference2Association</td>
<td>Transforms parts, references and associations into Class associations</td>
</tr>
<tr>
<td>DataType2DataType</td>
<td>Transforms a DataType of a Block into a DataType of a Class</td>
</tr>
<tr>
<td>Enumeration2Enumeration</td>
<td>Transforms an Enumeration of a Block into an Enumeration of a Class</td>
</tr>
<tr>
<td>EnumerationLiteral2EnumerationLiteral</td>
<td>Transforms an EnumerationLiteral of a Block into an EnumerationLiteral of a Class</td>
</tr>
</tbody>
</table>
Table 2: Elements of the Software Architecture.
<table>
<thead>
<tr>
<th>Layer</th>
<th>Geographic element</th>
<th>Monitoring</th>
<th>Control</th>
</tr>
</thead>
<tbody>
<tr>
<td>Network</td>
<td>Network Route</td>
<td>OD-matrix</td>
<td>ODMGR.</td>
</tr>
<tr>
<td>Link</td>
<td>Link merge/choice point</td>
<td>Route travel time</td>
<td>Route Mgr.</td>
</tr>
<tr>
<td>Point</td>
<td>sensor/actuator position</td>
<td>Avg. speed, turn fractions</td>
<td>Junction Mgr.</td>
</tr>
</tbody>
</table>
Figure 5: ATL Rule Association2Association.
5 STRUCTURAL ARCHITECTURE OF THE CASE STUDY
The SysML Block diagram is applied in this article to represent the structural-view architecture of systems. The case study shown in this section is an architecture for a road traffic management system (RTMS) (Almejalli et al., 2008) (Almejalli et al., 2009). RTMSs are software-intensive systems used in activities such as controlling, predicting, visualizing, and monitoring road traffic. The structural view describes which elements (see Table 2) cooperate with each other at a high level, without concern for how this interaction is done. These network elements are:
- **Origin-destination Managers (ODMGR)** represent the relation between an origin and a destination and comprise one or more routes.
- **Route Managers** control the set of routes from one origin to one destination.
- **Links** come in two types, Main links and Accessor links. The Main link is the link from the merge point to the choice point, and the Accessor link is the link from the choice point to the merge point.
- **Junctions** comprise the outgoing Main link and the incoming Accessor links of a crossing or motorway junction. A junction is a location where traffic can change its route, direction, and sometimes even mode of travel.
- **Control Schemes** are coherent sets of measures triggered by recurring patterns in the traffic state, such as the morning rush hours or the weekend exodus.
Figure 6: Logical view using SysML Block diagram.
Figure 7: Logical view using UML Class diagram.
The structural architecture view is represented in Fig. 6 using a SysML Block diagram. The distributed components have to communicate with each other, as they work in cooperation: they continuously measure the traffic state and communicate about it to other links in real time. For instance, links have to communicate with other links in order to establish a traffic state. Routes participate in at least one link, but they can participate in more.
The initial SysML Block model was designed using the Papyrus tool, which is integrated into the TopCased tool. Papyrus offers full support for creating SysML models, including the automatic generation of an XMI file, which is the entry model for the ATL transformation. After execution of the transformation, an XMI file containing the UML Class model is generated. This file is then ready to be presented graphically; the final result is shown in Fig. 7. With this UML Class diagram, a model-to-code transformation can be performed. In this research, the TopCased tool allowed the automatic generation of Java source code.
In the system example presented in this section, sensor and actuator objects exist only because they belong to a junction object; the same holds for sensor objects related to link objects. As a result, they are all represented with the composition relationship. Other compartments of a SysML Block do not have a straightforward mapping, in particular the ones specific to the domain; thus, each case is considered carefully. For instance, the Control Scheme is refined into a control class responsible for implementing the proposed scenarios. Each scenario presented in the “scenario compartment” is designed as a Use Case and implemented in software through the UML Classes.
The proposed meta-model which is the basis for the mapping and further implementation using ATL is depicted in Fig. 2, and the final result of the mapping from SysML Blocks to UML Classes using the proposed meta-model is presented in Fig. 7.
6 CONCLUSIONS
After more than a decade of use in a variety of domains, both in academia and in industry, the number of legacy systems modeled using UML is considerable. Therefore, even with its well-known problems, the language continues to be applied to new projects. As a matter of fact, introducing a completely different language would be a challenge for many reasons: new modeling tools would have to be integrated into the development methodology, and the learning process of the development team, including training, has to be taken into account. For this reason, the added value of a new modeling language must be clear.
It is difficult to find a single modeling language capable of modeling both software and system elements. The objective of this article is to describe research, and its practical application, in which the SysML Block diagram is introduced to create models of software-intensive systems in combination with the UML Class diagram. As the SysML Block diagram is useful for modeling components of a system, such as hardware and its parts, it can be applied to model elements other than software. A meta-model describing the relationship between SysML Blocks and UML Classes is presented. A model-driven approach using the ATL language is used to implement the meta-model in order to transform a SysML Block diagram into a UML Class diagram. This approach brings improved separation of concerns during system design and provides a straightforward way to trace system elements to software elements. In addition, knowing the mapping from system elements to software elements may bring together the work of systems and software engineers. The evaluation was based on the practical application of the approach to develop software-intensive systems in the field of road traffic management.
Future research will focus on evaluating the dynamic behavior of software-intensive systems modeled using the SysML Activity diagram, which extends the UML Activity diagram and offers additional modeling possibilities compared with the UML version, such as support for modeling continuous as well as discrete flows. In addition, other transformation languages are being applied to this same case study with the purpose of comparing different approaches to creating the transformation.
ACKNOWLEDGEMENTS
The authors would like to thank CAPES (www.capes.gov.br), FAPEMIG (www.fapemig.br - FAPEMIG 01/2011, Grant APQ-01589-11), and Federal University of Sergipe for the financial support.
REFERENCES
Almejalli, K., Dahal, K., and Hossain, A. (2009). An Intelligent Multi-Agent approach for Road Traffic Manage-
Fast Approximate String Matching in a Dictionary
Ricardo Baeza-Yates Gonzalo Navarro
Dept. of Computer Science, University of Chile
Blanco Encalada 2120 - Santiago - Chile
gnavarro@dcc.uchile.cl
Abstract
A successful technique for approximate searching on large indexed textual databases relies on an on-line search in the vocabulary of the text. This works well because the vocabulary is relatively small (i.e., a few megabytes for gigabytes of text), and therefore the search takes a few seconds at most. While those times are appropriate for single-user environments, they are inappropriate for multi-user setups such as a text database server for the Web. We present a speed-up technique for on-line searching in the vocabulary which needs only a 10% space overhead. We also propose to exploit the fact that the problem involves a definition of similarity among words which respects the triangular inequality, structuring the vocabulary in such a way that it is not necessary to traverse it completely. We show that the improvement in time is very significant and pays for the extra space needed.
1 Introduction
Approximate string matching is a recurrent problem in many branches of computer science, with applications to text searching, computational biology, pattern recognition, signal processing, etc.
The problem can be stated as follows: given a long text of length \( n \), and a (comparatively short) pattern of length \( m \), retrieve all the segments (or “occurrences”) of the text whose edit distance to the pattern is at most \( k \). The edit distance \( ed() \) between two strings is defined as the minimum number of character insertions, deletions and replacements needed to make them equal.
In the on-line version of the problem, the pattern can be preprocessed but the text cannot. The classical solution uses dynamic programming and takes \( O(mn) \) time [27, 23]. Later, a number of algorithms improved this to \( O(kn) \) time in the worst case, or even less on average, by cleverly using the properties of the dynamic programming matrix (e.g. [13, 17, 32, 11, 36]) or by using an automaton in deterministic or nondeterministic form [35, 4, 21]. Another trend is that of “filtration” algorithms: a fast filter runs over the text, quickly discarding uninteresting parts, and the interesting parts are later verified with a more expensive algorithm. Examples of filtration approaches are [29, 6]. Some are “sublinear” in the sense that they do not inspect all the text characters, although the on-line problem is \( \Omega(n) \) if \( m \) is taken as constant.
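For reference, the classical dynamic-programming computation of ed() can be written in a few lines; the sketch below uses a single rolling row, which suffices for the short strings found in a vocabulary:

```c
#include <stdio.h>
#include <string.h>

/* Classical O(mn) dynamic-programming edit distance ed(): the minimum
   number of character insertions, deletions, and replacements turning
   s into t.  Keeps one rolling row of the DP matrix. */
int ed(const char *s, const char *t)
{
    int m = (int)strlen(s), n = (int)strlen(t);
    int row[n + 1];                        /* C99 VLA; fine for short words */
    for (int j = 0; j <= n; j++) row[j] = j;
    for (int i = 1; i <= m; i++) {
        int diag = row[0];                 /* value of matrix[i-1][j-1] */
        row[0] = i;
        for (int j = 1; j <= n; j++) {
            int cost = (s[i-1] == t[j-1]) ? 0 : 1;
            int best = diag + cost;                       /* replace/match */
            if (row[j] + 1 < best)   best = row[j] + 1;   /* delete        */
            if (row[j-1] + 1 < best) best = row[j-1] + 1; /* insert        */
            diag = row[j];
            row[j] = best;
        }
    }
    return row[n];
}

int main(void)
{
    printf("%d\n", ed("survey", "surgery"));   /* prints 2 */
    return 0;
}
```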
If the text is large and has to be searched frequently, even the fastest on-line algorithms are not practical, and preprocessing the text becomes necessary. This is especially true for very large text databases, which take gigabytes, while the fastest on-line search algorithms can process only a few megabytes per second. We are interested in large text databases in this work, where the main motivations for approximate string matching come from the low quality of the text (e.g., because of optical character recognition (OCR) or typing errors), the heterogeneity of the databases (different languages which the users may not spell correctly), spelling errors in the pattern or the text, searching for foreign names, and searching with uncertainty.
Although many indexing methods for exact string matching have been available for a long time [34], indexing text for approximate string matching was, until a few years ago, considered one of the main open problems in this area [35, 2]. The practical indices in use today rely on an on-line search in the vocabulary of the text, which is quite small compared to the text itself.
The fastest on-line approximate search algorithms run at 1-4 megabytes per second, and therefore they find the answer in the vocabulary in a few seconds. While this is acceptable for single-user environments, the search time may be excessive in a multi-user environment. For instance, a Web search engine which receives many requests per second cannot spend four seconds to traverse the vocabulary.
We present two proposals in this paper. The first is a speed-up of the normal on-line traversal which exploits the fact that consecutive strings in a sorted dictionary tend to share a prefix; this speed-up costs only 10% extra space.
Our second proposal needs more extra space: organizing the vocabulary so as to avoid a complete on-line traversal. This organization is based on the fact that we want, from a set of words, those which are at edit distance at most $k$ from a given query. The edit distance \(ed()\) respects the axioms that make it a metric, and therefore we may apply any data structure for searching in metric spaces. This normally imposes a space overhead over the vocabulary, but the reward is an important improvement in search times.
We experimentally compare all the different structures for metric spaces, accounting for both search time and space overhead, and we also compare the different on-line variations.
This paper is organized as follows. In Section 2 we explain how the current indices for approximate string matching work. In Section 3 we survey the main techniques to search in metric spaces. In Section 4 we explain our setup to speed up the on-line search in the vocabulary, and in Section 5 we explain our method to avoid the on-line traversal. In Section 6 we show experimental results. In Section 7 we give our conclusions and future work directions.
## 2 Indices for Approximate String Matching
The first indices for approximate string matching appeared in 1992, in two different flavors: *word-oriented* and *sequence-oriented* indices. In the first type, more oriented to natural language text and information retrieval, the index can retrieve every *word* whose edit distance to the pattern is at most $k$. In the second one, useful also when the text is not natural language, the index will retrieve every *sequence*, without any notion of word separation.
We focus on word-oriented indices in this work, because the problem is simpler and hence has been solved quite well. Sequence-retrieving indices are still too immature to be useful for huge text databases (i.e., the indices are very large, are not well-behaved on disk, are very costly to build and update, etc.). It must be clear, however, that word-oriented indices are only capable of retrieving an occurrence that is a sequence of whole words. For instance, they cannot retrieve "flower" with one error from "flo wer", or "many flowers" from "manyflowers". In many cases the restriction is acceptable, however.
Current word-oriented indices are basically inverted indices: they store the vocabulary of the text (i.e. the set of all distinct words in the text) and a list of occurrences for each word (i.e. the set of positions where the word appears in the text). Approximate string matching is solved by first running a classical on-line algorithm on the vocabulary (as if it was a text), thus obtaining the set of words to retrieve. The rest depends on the particular index. Full inverted indices such as Igrep [1] simply make the union of the lists of occurrences of all matching words to obtain the final answer. Block-oriented indices such as Glimpse and variations on it [19, 5] (which reduce space requirements by making the occurrences point to blocks of text instead of exact positions) must traverse the candidate text blocks to find the actual answers. In some cases the blocks need not be traversed (e.g. if each block is a Web page and we do not need to mark the occurrences inside the page) and therefore the main cost corresponds to the search in the vocabulary. See Figure 1.
This scheme works well because the vocabulary is very small compared to the text. For instance, in the 2 Gb TREC collection [14] the vocabulary takes no more than 2 Mb. An empirical law known as Heaps' Law [15] states that the vocabulary of a text of $n$ words grows as $O(n^\beta)$, where $0 < \beta < 1$; in practice, $\beta$ is between 0.4 and 0.6 [1]. The fastest on-line approximate search algorithms run at 1-4 megabytes per second (depending on some parameters of the problem), and therefore they find the answer in the vocabulary in a few seconds. While this is acceptable for single-user environments, the search time may be excessive in a multi-user environment. For instance, a Web search engine which receives many requests per second cannot spend four seconds per query.
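To get a feel for how slowly the vocabulary grows, the following snippet evaluates Heaps' Law for a few text sizes; the constants K and beta are illustrative values we chose, not measurements from the paper:

```c
#include <math.h>
#include <stdio.h>

/* Heaps' Law: vocabulary size V(n) = K * n^beta for a text of n words.
   K = 40 and beta = 0.5 are assumed values (beta is reported to lie
   between 0.4 and 0.6 in practice). */
int main(void)
{
    double K = 40.0, beta = 0.5;
    for (double n = 1e6; n <= 1e9; n *= 1000) {
        printf("n = %.0e words -> vocabulary ~ %.0f words\n",
               n, K * pow(n, beta));
    }
    return 0;
}
```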
3 Searching in General Metric Spaces
The concept of “approximate” searching has applications in a vast number of fields. Some examples are images, fingerprints or audio databases; machine learning; image quantization and compression; text retrieval (for approximate string matching or for document similarity); genetic databases; etc.
All those applications share some common characteristics. There is a universe \( U \) of objects and a nonnegative distance function \( d: U \times U \rightarrow R^+ \) defined among them. This distance honors the three axioms that make the set a metric space:
\[
\begin{align*}
d(x, y) &= 0 \quad \iff \quad x = y \\
d(x, y) &= d(y, x) \\
d(x, z) &\leq d(x, y) + d(y, z)
\end{align*}
\]
where the last one is called the “triangular inequality” and is valid for many reasonable distance functions. The smaller the distance between two objects, the more “similar” they are. This distance is considered expensive to compute (e.g., comparing two fingerprints). We have a finite database \( S \subseteq U \), which is a subset of the universe of objects and can be preprocessed (to build an index, for instance). Later, given a new object from the universe (a query \( q \)), we must retrieve all similar elements found in the database. There are three typical queries of this kind:
(a) Retrieve all elements which are within distance \( k \) to \( q \). This is, \( \{ x \in S / d(x, q) \leq k \} \).
(b) Retrieve the closest elements to \( q \) in \( S \). This is, \( \{ x \in S / \forall y \in S, \ d(x, q) \leq d(y, q) \} \). In some cases we are satisfied with one such element. We can also give a maximum distance \( r \) such that if the closest element is at distance more than \( r \) we do not want any one reported.
(c) Retrieve the \( i \) closest elements to \( q \) in \( S \). This is, retrieve a set \( A \subseteq S \) such that \( |A| = i \) and \( \forall x \in A \), \( y \in S - A, d(x, q) \leq d(y, q) \).
Given a database of \( n \) objects, all those queries can be trivially answered by performing \( n \) distance evaluations. The goal is to structure the database so that we perform fewer distance evaluations.
This is applicable to our problem because \( ed() \) indeed satisfies the axioms and therefore the search in the vocabulary is an instance of this problem. In our case the database is the set of all the different words of the text, and we are interested in queries of type (a). Moreover, our distance is discrete (i.e. gives integer answers), which is of importance for the types of data structures which can be applied. We briefly survey the main applicable structures now.
Probably the first general solution to search in metric spaces was presented in [10]. They propose a tree (thereafter called Burkhard-Keller tree, or bk-tree), which is suitable for discrete distance functions like \( ed() \). It is defined as follows: an arbitrary element \( a \in S \) is selected as the root, and it has a number of children. In the \( i \)-th child we recursively build the tree for all elements in \( S \) which are at distance \( i \) from \( a \). This process can be repeated until there is only one element to process, or there are no more than \( b \) elements (and we store a bucket of size \( b \)), or the tree has a given height \( h \).
To answer queries of type \((a)\), where we are given a query \(q\) and a distance \(k\), we begin at the root and enter into all children \(i\) such that \(d(a, q) - k \leq i \leq d(a, q) + k\), and proceed recursively (the other branches are discarded using the triangle inequality). If we arrive at a leaf (bucket of size one or more) we compare all its elements sequentially. Each time we perform a comparison where \(d(q, x) \leq k\), we report the element \(x\).
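As a concrete illustration, here is a minimal bk-tree sketch without bucketing (the \(b = 1\) case) for an arbitrary discrete metric `d`; the class and method names are ours, not from [10]:

```python
# A minimal Burkhard-Keller tree for a discrete metric d (no buckets).
class BKTree:
    def __init__(self, elements, d):
        self.d = d
        self.root = None              # node = (element, {distance: child})
        for x in elements:
            self._insert(x)

    def _insert(self, x):
        if self.root is None:
            self.root = (x, {})
            return
        node = self.root
        while True:
            dist = self.d(x, node[0])
            if dist == 0:             # element already stored
                return
            if dist not in node[1]:
                node[1][dist] = (x, {})
                return
            node = node[1][dist]

    def range_query(self, q, k):
        """All stored elements within distance k of q (a query of type (a))."""
        result = []
        stack = [self.root] if self.root is not None else []
        while stack:
            element, children = stack.pop()
            dq = self.d(q, element)
            if dq <= k:
                result.append(element)
            # the triangle inequality discards branches outside [dq-k, dq+k]
            for i in range(dq - k, dq + k + 1):
                if i in children:
                    stack.append(children[i])
        return result
```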
In [28], the use of more than one element per node of the tree is proposed. Those elements make it possible to eliminate more points per level, at the cost of performing more distance evaluations. The difference with a tree that uses those points successively downwards is that the query is compared against all the points of the node regardless of the results.
The advantage of the previous idea is made clear in a further development, called “Fixed-Queries Trees” or fq-trees [3]. This tree is basically a bk-tree where all the elements stored in the nodes of the same level are the same (and of course do not necessarily belong to the set stored in the subtree). The advantage of such a construction is that some comparisons between the query and the nodes are saved along the backtracking that occurs in the tree: if we visit many nodes of the same level, we do not need to perform more than one comparison. This comes at the expense of somewhat taller trees. They show that their approach is superior to bk-trees. They also propose a variant called “Fixed-Height fq-trees”, where all the leaves are at the same depth \(h\), regardless of the bucket size. This makes some leaves deeper than necessary, but it pays off because we may have already performed the comparison between the query and an intermediate node, therefore eliminating for free the need to compare the leaf.
An analysis of the performance of fq-trees is presented in [3], which disregarding some complications can be applied to bk-trees as well. We present the results in the Appendix. We also present an analysis of fixed-height fq-trees which is new.
An algorithm which is close to all the presented ideas, but which surprisingly performs better by an order of magnitude, is [33]. They select a point \(a \in S\) at random and measure \(d = d(a, q)\), eliminating all elements \(x\) of \(S\) which do not satisfy \(d - k \leq d(x, a) \leq d + k\). This is repeated until few enough elements remain in the set. Although very similar to bk-trees, the key difference is that the second element to compare against \(q\) is selected from the remaining set, instead of from the whole set as in bk-trees. This means that this algorithm is more likely to compare the query against a centroid of the remaining set (i.e. an element whose distance distribution against the rest favors smaller values). This matters because the distance distribution tends to be very centered (which is bad for all range search algorithms) and the selection of a centroid distributes the distances better.
The problem with the algorithm of [33] is that it needs \(O(n^2)\) space and build time. In this sense it is close to [25]. This is unacceptably high for all but very small databases.
Some approaches designed for continuous distance functions [31, 37, 8, 9, 12, 24] are not covered in this brief review. The reason is that these structures do not use all the information obtained from the comparisons, since this cannot be done in continuous spaces. It can, however, be done (and it is done) in discrete spaces and this fact makes the reviewed structures superior to these ones, although they would not be directly applicable in continuous spaces.
4 Speeding Up the On-line Search
Except for filtration algorithms, all the on-line approximate search algorithms traverse the text character by character. They store a context, which is the state of the search. For each new character read they modify their context. Whenever their context indicates a match they report it. For instance, if the search is done with a deterministic finite automaton as in [21], the context is simply the current state of the automaton. When run over a vocabulary, their processing is very similar, except that the context is initialized for each new word to process.
If the vocabulary is stored in lexicographical order (which is useful to binary search on it for exact retrieval), each word will share a prefix with the previous word. The larger the vocabulary, the longer the shared prefixes. This property has been used in [22, 7, 20], for instance, to compress the vocabulary (since the prefix shared with the previous word need not be stored). However, direct access is complicated in those compression schemes. Figure 2 shows an example.
We propose to use that property in a different form. We store the complete words, as well as an additional byte which tells the length of the prefix shared with the previous word. The search algorithm will not change, except that it will store all the contexts that it traversed from the beginning of the word. That is, it will keep a stack of contexts, and each time a new character is read, the current context is pushed onto the stack before being modified according to the new character. When the word is finally traversed, we have all the traversed contexts in the stack.
If the next word shares a prefix of length $\ell$ with the word just processed, we do not need to reprocess the first $\ell$ characters. We just take the $\ell$-th context of the stack instead of the initial one and process the string from the $(\ell + 1)$-th character on.
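A minimal sketch of this idea, taking as context a row of the classical dynamic-programming matrix for edit distance (a context for which, as discussed at the end of this section, the technique remains attractive); the function names are illustrative:

```python
# Context stack over a sorted vocabulary: the context after i characters is
# the i-th DP row, so the prefix shared with the previous word is reused.
def shared_prefix_len(a, b):
    i, n = 0, min(len(a), len(b))
    while i < n and a[i] == b[i]:
        i += 1
    return i

def search_sorted_vocabulary(words, pattern, k):
    """Report the sorted words within edit distance k of pattern."""
    m = len(pattern)
    stack = [list(range(m + 1))]      # DP row for the empty prefix
    prev, matches = "", []
    for word in words:
        l = shared_prefix_len(prev, word)
        del stack[l + 1:]             # keep the contexts of the shared prefix
        for c in word[l:]:            # process only the new suffix
            last = stack[-1]
            row = [last[0] + 1]
            for j in range(1, m + 1):
                row.append(min(row[j - 1] + 1,                       # insertion
                               last[j] + 1,                          # deletion
                               last[j - 1] + (c != pattern[j - 1]))) # substitution
            stack.append(row)
        if stack[-1][m] <= k:
            matches.append(word)
        prev = word
    return matches
```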
This has the additional overhead of storing the contexts instead of just replacing them, which makes the strategy work better for algorithms where the context is very small. On the other hand, since we always search words (which are rarely longer than 10 letters), we also need algorithms which are especially efficient for short words. Fortunately, both requirements match, since the fastest algorithms for short patterns are [4, 21], which use a very small context (we exclude filtration algorithms because the technique is not applicable to them).
The first algorithm simulates, using bit-parallelism, the behavior of a non-deterministic finite automaton that searches the pattern allowing errors. It uses just one computer word whenever $(m - k)(k + 2) \leq w$, where $m$ is the length of the pattern, $k$ is the number of errors and $w$ is the number of bits in the computer word. For instance, it can search with $m$ up to 9 in a 32-bit architecture. Although the original work shows how to use many computer words for longer patterns, in our case this will occur very infrequently, and when it occurs the pattern will be only one or two letters longer. We therefore prefer to prune longer patterns and to verify, after a match, whether there is indeed an occurrence of the complete pattern.
The second algorithm converts the automaton to deterministic form, building only the states which are actually reached in the text traversal. It is shown in [21] to be very efficient on short patterns.
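To give the flavor of such bit-parallel contexts, here is a hedged sketch of the simpler row-per-error scheme of Wu and Manber, which keeps $k + 1$ machine words as its context; note this is a relative of, not the single-word encoding of, [4]:

```python
# Row-per-error bit-parallel approximate search (Wu-Manber style).
# R[0..k] are the k+1 words of "context"; bit j of R[i] means that the
# pattern prefix of length j+1 matches a text suffix with <= i errors.
def bitparallel_search(text, pattern, k):
    """Yield end positions in text of matches with <= k errors."""
    m = len(pattern)
    B = {}
    for j, c in enumerate(pattern):           # bit j set iff pattern[j] == c
        B[c] = B.get(c, 0) | (1 << j)
    R = [(1 << i) - 1 for i in range(k + 1)]  # i lowest bits set in row i
    all_bits, match_bit = (1 << m) - 1, 1 << (m - 1)
    for pos, c in enumerate(text):
        mask = B.get(c, 0)
        old = R[0]
        R[0] = ((R[0] << 1) | 1) & mask
        for i in range(1, k + 1):
            new = ((((R[i] << 1) & mask)      # match
                    | old                     # insertion
                    | ((old | R[i - 1]) << 1) # substitution / deletion
                    | 1) & all_bits)
            old, R[i] = R[i], new
        if R[k] & match_bit:
            yield pos                         # an occurrence ends here
```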
To analyze the expected improvement, we notice that the number of letters that will be effectively traversed by the optimized algorithm is exactly the number of nodes of a trie [16] built over all the words of the vocabulary. This is because, if we consider all the prefixes of all words, we work only once on each different prefix, and each node of a trie represents a different prefix. The original algorithm, in contrast, works on every character of every word.
Not all the characters of all words are present in a trie built from the vocabulary, since once the prefix of a word is unique the trie is not further expanded but the word is stored in a leaf. The parts of the words which are not represented in the trie are worked on in all cases. The difference is in the letters represented in the trie: while the optimized algorithm works once per internal node, the amount of work of the original algorithm is proportional to the external path length, which is the sum of the depth of all leaves (i.e. the sum of the lengths of all words, up to where they are represented in the trie).
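The comparison is easy to check directly: the distinct prefixes of the vocabulary correspond to the trie nodes (the work of the optimized algorithm), while the total character count is the work of the original one. A tiny sketch:

```python
# Work of the optimized algorithm (distinct prefixes, i.e. trie nodes)
# versus work of the original algorithm (all characters of all words).
def prefix_work_vs_total(words):
    distinct_prefixes = set()
    for w in words:
        for i in range(1, len(w) + 1):
            distinct_prefixes.add(w[:i])
    return len(distinct_prefixes), sum(len(w) for w in words)
```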
In [30], some asymptotic statistics are computed on a suffix trie, for large $n$, using a Markovian model (which is quite good for natural language). Statistics for random tries are equivalent to suffix tries over a random text, except for $o(1)$ terms [26]. We take here the simpler case of independent character generation (i.e. a Markovian model with no memory). In the general case only the constant factor of the results changes, not the order. The reader is referred to [30] for more details.
Suppose our alphabet is composed of a finite or infinite number of symbols, call $q_i$ the probability of the $i$-th symbol, and call $H = \sum q_i \log(1/q_i)$ the entropy of the language. Then the external path length is $E_n = (n \log n)/H = O(n \log n)$, while the number of internal nodes is $S_n = n/H = O(n)$ (this last result is taken from [18], for random tries). Therefore, we process $n(\log n - 1)/H$ fewer characters. Except for the parts of the strings stored in the leaves, we work $O(n)$ instead of $O(n \log n)$. Unfortunately, the part stored in the leaves is important, and in practice it is proportional to the part stored in the trie.
Finally, notice that our proposal is similar to storing a trie with the vocabulary and running the algorithms recursively on the trie to factor out repetitions. However, our technique is faster and has much less memory overhead.
We found experimentally, however, that the technique we are proposing here is of no use against the fastest non-filtering algorithms [4, 21]. This is because these algorithms are extremely efficient and the amount of repetition in the prefixes is not large enough to compensate for the accesses to the stack of contexts (the stack cannot be kept in registers). The extra accesses to the stack cancel the advantage gained from processing fewer letters.
However, we believe that this idea can still have use for more complex edit distances, where the fastest algorithms cannot be applied and we must resort to the classical $O(mn)$ algorithm. This study is part of our future work.
5 The Vocabulary as a Metric Space
Traversing the whole vocabulary on-line is like comparing the query against the whole database in a metric space. Our proposal in this section is to organize the vocabulary so as to avoid the complete on-line traversal. This organization is based on the fact that we want, from a set of words, those which are at edit distance at most \( k \) from a given query. The edit distance \( ed() \) used satisfies the axioms which make it a metric, in particular a discrete metric.
The proposal is therefore, instead of storing the vocabulary as a sequence of words, to organize it as a metric space using one of the available techniques. The distance function to use is \( ed() \), which is computed by dynamic programming in time \( O(m_1 m_2) \), where \( m_1 \) and \( m_2 \) are the lengths of the two words to compare. Although this comparison is more expensive than a step of many efficient on-line algorithms, it will be carried out only a few times to get the answer. On the other hand, the dynamic programming algorithm is very flexible for adding new editing operations or changing their cost, while the most efficient on-line algorithms are not that flexible.
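For completeness, a minimal dynamic-programming implementation of \( ed() \) with the stated \( O(m_1 m_2) \) cost; adding editing operations or changing their cost only means editing the `min` terms:

```python
# Unit-cost edit distance by dynamic programming, keeping one row at a time.
def ed(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(cur[j - 1] + 1,             # insertion
                           prev[j] + 1,                # deletion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[len(b)]
```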
Figure 3 shows our proposed organization. The vocabulary is stored as a contiguous text (with separators among words) where the words are sorted. This allows exact or prefix retrieval by binary search, or another structure can be built onto it. The search structure to allow errors goes on top of that array and allows approximate or exact retrieval.
An important difference between the general assumptions and our case is that the distance function is not so costly to compute as to make all other costs negligible. For instance, the space overhead and non-locality of accesses incurred by the new search structures could eliminate the advantage of comparing the query against fewer words in the vocabulary. Hence, we do not consider simply the number of comparisons but the complete CPU times of the algorithms, and compare them against the CPU times of the best sequential search algorithms run over the complete vocabulary. Moreover, the efficiency in all cases depends on the number of errors allowed (all the algorithms worsen if more errors are allowed). Finally, we have to consider the extra space incurred, because the vocabulary is already large.
It is interesting to notice that any structure to search in a metric space can be used for exact searching, since we just search allowing zero errors (i.e. distance zero). Although not as efficient as data structures designed specifically for exact retrieval (such as hashing or binary search), the search times may be so low that the reduced efficiency is not as important as the fact that we do not need an additional structure for exact search (such as a hash table).
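With the sketches above, such an exact lookup is simply a zero-tolerance range query:

```python
# Exact search as a range query with k = 0, reusing the earlier sketches;
# vocabulary_words is a placeholder for the stored vocabulary.
tree = BKTree(vocabulary_words, ed)
hits = tree.range_query("flower", 0)   # non-empty iff the word is stored
```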
6 Experimental Results
We show experimentally the performance obtained with our metric space techniques against online algorithms. The results are preliminary and must be tested on larger setups. We ran our experiments on a Sun SparcClassic with 16 Mb of RAM, running SunOS 4.1.3.
We tested three different structures: bk-trees (BKT), fq-trees (FQT) and fq-trees of fixed height (FQH). For the first two we tested buckets of size 1, 10 and 20, while for the last one we tested fixed heights of 5, 10 and 15. As explained before, other structures for metric spaces are not well suited to this case (we verified this fact experimentally with GNATs and gh-trees). We used a Spanish dictionary composed of more than 80,000 words (which is still modest compared to the 500,000 words of the TREC collection, which will be used in future work). The set was randomly permuted and separated into 8 incremental subsets of size 10,000 to 80,000.
Our first experiment deals with space and time overhead of the data structures that implement the search in a metric space, and its suitability for exact searching. Figure 4 shows the results. As it can be seen, build times are linear for FQH and slightly superlinear ($O(n \log n)$ in fact) for BKT and FQT. The overhead to build them is normally below a minute, which is a small percentage of the time normally taken to build an index for a text database whose vocabulary is of 80,000 words.
If we consider extra space, we see that BKT and FQT pose a fixed space overhead, of 100% or less (with respect to the size of the vocabulary with no more data), with the exception of FQT for $b = 1$ which is 200%. As an index normally has another 100% overhead over the plain vocabulary to store pointers to the index, we can consider that the extra overhead is in fact closer to 50-100%. This is not negligible but acceptable. The FQH indices pose a fixed extra space, whose overhead tends to zero as the vocabulary grows. However, these percentages are quite large for reasonably-sized dictionaries, except for small heights.
Finally, we show that the work to do for exact searching involves a few distance evaluations (16 or less) with very low growth rate (logarithmic). This shows that the structure can be also used for exact search. The exception is FQH ($h = 5$), since these structures are $O(n)$ time for fixed $h$, and this is noticed especially for small $h$.
We show in Figure 5 the query performance of the indices to search with one error. As can be seen, no more than 10% of the dictionary is traversed (the percentage is decreasing since the number of comparisons is sublinear except for FQH). The user times correspond quite well to the number of comparisons. We show the percentage of user times using the structures versus the best online algorithm for this case [6]. As can be seen, for the maximum dictionary size we reach 40% of the online time for many metric structures (this percentage will improve for BKT and FQT in larger dictionaries). Of those structures, we believe that FQT and BKT with $b = 1$ are the best choices, since they are sublinear and have a reasonable overhead (in contrast to FQH). For larger dictionaries, FQH with $h = 10$ could also be a good choice.
Figure 6 shows the result with two errors. This time the online algorithm selected was [4] and the metric space algorithms do not improve the online search. The reason is that the offline algorithms are much more sensitive to the error level than the online algorithm used. This shows that our scheme is only useful to search with one error.
We also tested the search for the nearest neighbor, and the results are very similar to a search with $k$ equal to the distance to that nearest neighbor.
7 Conclusions
We proposed a new method to organize the vocabulary of inverted files in order to support approximate searching on the indexed text collection. Most present methods rely on a sequential search over the vocabulary words using a classical online algorithm. We propose instead to organize the vocabulary as a metric space (taking advantage of the fact that the edit distance that models the approximate search is indeed a metric).
We show in our preliminary experiments that the best data structures for this task are Burkhard-Keller trees or Fixed-Queries trees, using no buckets. Those structures allow, with almost negligible construction time and reasonable space overhead (50%-100% extra over typical space taken by the vocabulary, which is already very small), to search close to 5% of the dictionary for one error and 25% for two errors. This cuts down the times of the best online algorithms to 40% for one error, although for two errors the online algorithms (though traversing the whole dictionary) are faster.
For larger dictionaries, Fixed-Height fq-trees of height 10 could also be a good choice.
We also determined that other structures not aimed at discrete spaces are not well suited for this task, their performance being far inferior to that of the ones we presented. We also determined that a proposed idea to improve online search algorithms on a sorted vocabulary by skipping common prefixes, although theoretically appealing, is of no interest in practice. A study of this idea for the case of a more expensive algorithm (e.g. to compute a more complex distance function) is of interest, however.
Future work also involves repeating all the experiments on a larger machine and on a larger vocabulary, to obtain figures adequate to very large text databases. The dictionary used had 80,000 words, which corresponds to a text of less than 100 megabytes. The 2 gigabytes TREC collection has a vocabulary of 500,000 words and we plan to use that vocabulary.
References
**Appendix. Analysis of Fixed-Height FQ-trees**
We call $p_i$ the probability that two random elements from $U$ are at distance $i$. Hence, $\sum_{i>0} p_i = 1$, and $p_{-i} = 0$ for $i > 0$. In [3] the fq-trees are analyzed under the simplifying assumption that the $p_i$ distribution does not change when we enter into a subtree (their analysis is later experimentally verified). They show that the number of distance evaluations done to search an element with tolerance $k$ (in our application, allowing $k$ errors) on an fq-tree of bucket size $b$ is
$$P_k(n) = O(n^\alpha)$$
where $0 < \alpha < 1$ is the solution of
$$\sum_{i>0} \gamma_i(k)p_i^\alpha = 1$$
where $\gamma_i(k) = \sum_{j=i-k}^{i+k} p_j$. This $P_k$ result is the sum of the comparisons done per level of the tree (a logarithmic term) plus those done at the leaves of the tree, which are $O(n^\alpha)$.
The CPU cost depends also on the number of traversed nodes $N_k(n)$, which is also shown to be $O(n^\alpha)$ (the constant is different). Finally, the number of distance evaluations for an exact search is $O(b + \log n)$.
Under the same simplifying assumption the analysis applies to bk-trees too. The main difference is that the number of comparisons is for this case the same as the number of nodes traversed plus the number of leaf elements compared, which also adds up $O(n^\alpha)$ (although the constant is higher). The distribution of the tree is different but this difference is overridden by the simplifying assumptions anyway.
We analyze now fq-trees of fixed height. The analysis is simpler than for fq-trees. Let $F^h_k(n)$ be the number of elements not yet filtered by a proximity search of distance up to $k$ after applying $h$ fixed queries. Then, the expected number of comparisons for a proximity query is
$$P^h_k(n) = h + F^h_k(n)$$
Let $\beta_k$ be the probability of not filtering an element when doing the proximity search at distance $k$. If an element is at distance $i$ to a query, it is not filtered with probability $\sum_{j=i-k}^{i+k} p_j$. The element is at distance $i$ with probability $p_i$, so
$$\beta_k = \sum_{i>0} \sum_{j=i-k}^{i+k} p_i p_j$$
Note that $\beta_k$ converges to 1 when $k$ increases. So, the expected numbers of elements not filtered at two consecutive levels are related by $F^h_k(n) = \beta_k F^{h-1}_k(n)$. Clearly, $F^0_k(n) = n$, so $F^h_k(n) = \beta_k^h n$. Because $F^h_k(n)$ decreases when $h$ grows, the optimal $h$ is obtained when $P^h_k(n) \leq P^{h+1}_k(n)$. That is, when
$$h + \beta_k^h n \leq h + 1 + \beta_k^{h+1} n$$
Solving (the inequality reduces to $\beta_k^h n (1 - \beta_k) \leq 1$), we obtain the optimal $h$ for a given $k$:
$$h_k = \frac{\log(n(1 - \beta_k))}{\log(1/\beta_k)}$$
Replacing this $h$ in $P^h_k(n)$ we get
$$P_k(n) = \frac{\log(n(1 - \beta_k))}{\log(1/\beta_k)} + \frac{1}{1 - \beta_k}$$
That is, $P_k(n)$ is logarithmic for the optimal $h_k$ (and linear for a fixed $h$). This is asymptotically better than the $O(n^\alpha)$ results for fq-trees and bk-trees. Nevertheless, the constant factor in the log term grows exponentially with $k$, so this is good for small to medium $k$.
To obtain this logarithmic behavior, the fixed height must increase as the number of elements grows (i.e. $h_k = O(\log n)$). Unfortunately the optimal height depends on the search tolerance $k$. However, the logarithmic cost can be maintained even for non-optimal $h$, provided we use $h = \delta \log n$ with $\delta \geq 1/\log(1/\beta_k)$ (i.e. we overestimate the optimal height).
On the other hand, the number of nodes visited is bigger than in fq-trees. In fact, using a recurrence similar to the one for fq-trees, it is possible to show that the number of nodes visited is $O(h_k n^\alpha)$ for $\alpha < 1$ which could easily be larger than $n$ even for small $k$. So, these trees are good when the cost of comparing two elements is very high, like comparing two genetic sequences, polygons or graphs.
A related problem is the size of the data structure. While normal fq-trees or bk-trees are $O(n)$ size, fixed-height fq-trees can in principle be superlinear. In fact, we could not reach the optimal $h_k$ in our experiments because of space limitations.
Figure 4: Comparison of the data structures. From top to bottom and left to right, number of distance evaluations to build the structures, user times to build the structures, extra space taken by the structures as a percentage of the size of the vocabulary (FQH $h = 15$ is close to 700) and number of distance evaluations for exact search (FQH $h = 5$ grows linearly). The $x$ axis is the number of words in the dictionary, in multiples of 10,000.
Figure 5: Comparison allowing one error. The first row shows the number of comparisons (on the left, absolute number; on the right, percentage over the whole dictionary). The second row shows user times for the queries (on the left, seconds; on the right, percentage over the best online algorithms). The $x$ axis is the number of words in the dictionary, in multiples of 10,000.
Figure 6: Comparison allowing two errors. The first row shows the number of comparisons (on the left, absolute number; on the right, percentage over the whole dictionary). The second row shows user times for the queries (on the left, seconds; on the right, percentage over the best online algorithms). The $x$ axis is the number of words in the dictionary, in multiples of 10,000.
Towards a Two-Dimensional Framework for User Models
P.T. de Vrieze¹, P. van Bommel¹, J. Klok², and Th. van der Weide¹
¹ University of Nijmegen
² Océ Research & Development
Abstract. The focus of this paper is user modeling in the context of personalization of information systems. Such personalization is essential to give users the feeling that the system is easily accessible. The way this adaptive personalization works depends strongly on the adaptation model that is chosen.
We introduce a generic two-dimensional classification framework for user modeling systems. This enables us to clarify existing as well as new applications in the area of user modeling. In order to illustrate our framework we evaluate push and pull based user modeling.
1 Introduction
The research area of user modeling seeks to enhance human-computer interaction by adapting the system to the user. This topic has already gained attention from various authors, see [1], [2], [3], [4]. User modeling involves the use of incremental behaviour analysis for acquiring user models. It also involves adaptation of the system behaviour to the user model. For a background on system adaptation we refer to [3], [5], [6].
The key part of a user modeling system is the user model. In order to know what a user model should look like it is necessary to know the adaptation methods that are going to be employed. The methods that do this are described in the adaptation model. This is a general model that describes how the user models need to be created, maintained and used.
We distinguish two kinds of adaptation models: a push adaptation model and a pull adaptation model. Those models are based on the direction of inference in the system. Further, it is possible to combine both models into a hybrid adaptation model that combines aspects of both. Examples of hybrid systems can be found in the literature.
While publications have described the use of both kinds of models and combinations of them, they have not explicitly evaluated the advantages and disadvantages of those models. We believe that this is important to be able to design user modeling systems better.
In this paper we analyse the differences between the push and pull adaptation models. For that it is important to first define what a user modeling system actually is, and which parts of a system can be seen as part of the adaptation system. For that reason we give an overview of user modeling systems in section 2. After that we introduce a list of demands that a user modeling system should satisfy. This list is then used in sections 5, 6, and 7 to evaluate the push, pull and hybrid adaptation models. Finally, in section 8 we evaluate our framework and state possible points of further research.
2 Overview of User Modeling Systems
A user modeling system is a system that shows adaptive behaviour concerning its interaction with the user. To explain the difference between conventional systems, i.e. interactive systems that do not employ user modeling (see figure 1(a)), and user modeling systems (see figure 1(b)), we first need to describe conventional systems in a suitable way, then describe user modeling systems, and compare them. In the next two sections we describe both conventional and user modeling systems.
Conventional interactive systems (see figure 1(a)) can be seen as state machines that interact with a user. This interaction is handled by a user interface. Each user action can induce a state change, after which new user actions are possible.
In designing a user interface several choices have to be made concerning the looks and behaviour of the interface. Many of these choices are implicit or given by default choices from guidelines. For the sake of being able to compare a conventional system with a user modeling system we assume that the choices are explicit. We call those choices interface properties. The interface properties determine both the behaviour and looks of the user interface.
In a conventional system user actions induce events. These events trigger system actions and interface changes. These actions and interface changes can differ based on the interface properties.
In a system based on user modeling (see figure 1(b)), the behaviour of the various handlers may be affected by user properties in addition to the handler-specific properties. See e.g. [4] and [7] for systems that show such a change of behaviour. Those user properties are supplied by the adaptation system. The user properties can be seen as answers to questions asked by the system about a specific aspect of the user. As the adaptation system can be seen as the authority on the user, the questions should be posed in such a way that all inference happens inside the adaptation system.
As a consequence of the user properties influencing the handlers, the user interface now takes the user model into account, since its behaviour is determined by the user interface handler. The same goes for the action handler.
The user properties are provided by the adaptation handler. The adaptation handler generates these properties based on events fed to it by the event handler. The main question of user modeling is how to go from these events to the user properties.
3 Further Analysis of User Modeling Systems
To evaluate user modeling systems it is very useful to have a clear method for comparing them. For this purpose we have developed a two-dimensional classification framework. Our framework looks at all kinds of user modeling systems and is not made by classification of existing systems. In this it differs significantly from the framework in [8].
Figure 2 presents the proposed framework. Along the horizontal axis is the inference process. It goes from the event model to the user model, and from the user model to the system concept model. The event model consists of the actual events generated by the system, the user model of the most system-independent user properties, and the system concept model of all the user questions that can be asked by the system.
For certain user properties many derivation steps are necessary, and for others only a few. For this reason we model the progress in that process, not the steps. Further, we define the model that is least system specific to be in the middle. For that reason all systems will have their highest point in the middle.
On the vertical axis we model system independence. At the start of the adaptation process, there are events generated by the system. These events are maximally system dependent. An example of such an event could be: “The user fills box 123 with a purple background”. We call the model here the event model.
For adaptation purposes the events generated by the system are not that relevant. An adaptation system wants to use specific cases to infer knowledge of the general case. This inference process goes in a number of steps. At one point a model is inferred that is most general. An example of knowledge that can be inferred here is: “The user's favourite colour is purple”. This is part of what we call the user model.
At a point where the user model is known, the system needs to know how this model fits into the questions a user modeling system might have. A user modeling system wants to know the answer on a question like: “What background color should a new box have?” In the adaptation phase of the system, the adaptation system will try to get system dependent answers based on the general knowledge from the user model. The model of answers to system questions is called the system concept model. The system concept model is where the user properties live.
We can use the framework of figure 2 to determine two properties of systems. Firstly, we can look at the height of the triangle to determine how system specific an adaptation system is. For example, in figure 3 we see the systems S2 and S4; S2 is more system independent than S4. This could mean that S2 can more easily be extended to provide more or different adaptation. The second property we can distinguish is where in the inference process a persistent model is stored. This is an important measure as the process is different before and after storage. Before storage a push process needs to be used to create the model. Push here means that the arrival of an event generates a waterfall of subsequent events that lead to updating the persistent model. We call this push adaptation. We will discuss the advantages and disadvantages of push based systems in section 5.
After storage we need to use a pull strategy to perform adaptation. This starts with the system requesting the value of a certain property from the adaptation system. For determining the value of this property the adaptation system might want to use the values of other properties that might also need to be calculated. This goes on until the persistent model is used. We call this pull adaptation.
Fig. 3. Use of the two-dimensional classification framework
As an example of the use of the framework we look at figure 3, which shows six systems with different properties. System S1 is almost a purely pull-based system, as its persistent model is created very early in the inference process, while S5 can be classified as a hybrid system and S6 as a rule-based system. The other systems are all different kinds of hybrid systems. Note that S5 is almost in the middle, but a system completely in the middle would be rather unrealistic.
Based on the locations of the systems in figure 3 we can say things about the systems, and especially their relations with each other. As an example looking at systems S3 and S5 we can say that system S3 has a bias on pull modeling compared to S5 and that S3 is more system dependent than S5. This can be used to say things about these systems like: “the persistent model of S3 is probably relatively bigger than the persistent model of S5”, “It is probably more easy to extend the adaptation system of S5 than to extend that of S3”, and “The persistent model of S5 is less system dependent than that of S3”.
4 Properties of a User Modeling System
In the framework from section 3 we saw that there is push adaptation and pull adaptation. In the coming sections we want to analyse the advantages and disadvantages of these adaptation strategies. To make an analysis we have identified a number of key properties of user modelling systems. Although some of these properties are not easily measured, we still believe they are important.
– Adaptability. The user should be able to manually adapt his model to a certain extent.
– Speed. The users’ perception of the system’s speed should not decrease.
– Extensibility. The system should be extensible while retaining the existing knowledge about its users.
– Model size. The model size should not grow too large.
– Analysis possibilities. The chosen kind of adaptation model should allow for all kinds of analysis techniques.
– Privacy. The system should be designed in such a way as to guarantee the highest possible level of privacy for the users.
Some of these properties are more important than others. It mainly depends on the application. We will not further discuss privacy as it depends mostly upon the application and very little on the adaptation model.
5 Push Adaptation Models
Push adaptation models are adaptation models that let events propagate into the values of a user model. Many systems that use push adaptation models use a rule-based model as employed in [9]. This paper describes the adaptation system of the AHA! system, a research system for creating adaptive hypermedia. These rule-based models are based on Active Database technology and as such inherit limitations from database systems.

There are several issues with such event-condition-action (ECA) rules. There is the possibility of endless recursion. A technique also needs to be chosen for achieving confluence: it should not be possible that equal starting models and equal events lead to different final user models.
One advantage of push adaptation is that the contents of the user model are well aggregated, so they can be easily understood. Another advantage is that the relative size of the user model stays small and does not change during regular use of the system. This does, however, impede basing the values of newly introduced attributes upon already observed user behaviour.
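A toy sketch may help fix the idea of push adaptation: an ECA-style rule aggregates each event into the stored user model the moment it arrives. All names and events here are invented for illustration:

```python
# Push adaptation: events propagate immediately into aggregated model values.
user_model = {"favourite_colour": None, "colour_counts": {}}

def on_colour_used(event):
    """ECA rule: on a colour event, update the aggregate and derived value."""
    counts = user_model["colour_counts"]
    counts[event["colour"]] = counts.get(event["colour"], 0) + 1
    user_model["favourite_colour"] = max(counts, key=counts.get)

for e in [{"colour": "purple"}, {"colour": "red"}, {"colour": "purple"}]:
    on_colour_used(e)                     # only the aggregate is kept

print(user_model["favourite_colour"])     # -> purple
```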
In this section we evaluate push-based adaptation models based on the points from section 4.
– **Adaptability.** Because the user model stores end values, it will be fairly easy for users to adapt the model to their wishes, as the results of their changes are obvious and local. There could be too many possibilities for changes, though.
– **Speed.** Provided that the number of rules stays within limits, there are no serious speed issues with push adaptation models.
– **Extensibility.** Push adaptation models are similar to, and often based on, database theory. They share one of its problems: database systems are not good at data model change, and the same holds for rule-based adaptation models. When the adaptation model changes, values for new properties need to be calculated, which can be expensive in terms of time.
– **Model size.** A push adaptation model has a user model of limited size. This is because events are aggregated into the user model at the moment they happen.
– **Analysis possibilities.** The fact that event aggregation in rule-based adaptation models happens at the moment the events occur makes it hard or even impossible to perform time-based analysis on user actions. Aging (weighing recent events more heavily than older ones) is also hard to implement.
From this point-by-point overview we can see that push adaptation models are especially good in the areas of model size and complexity. The weakest point lies in the extensibility of the model.
Push adaptation models are very popular within the domain of educational systems. Those systems can be characterised by the fact that the user properties that need to be modelled are often (static/discrete/...). Push adaptation models are used in other systems too, though. Examples of push adaptation models can be found in: [9], [10], [11], [8].
6 Pull Adaptation Models
Pull adaptation models perform adaptation from a different direction than push models. In the extreme case, a pull adaptation model records all events in the user model. High-level attributes are then derived based on lower-level attributes and querying of the event record.

Fig. 5. A pull adaptation model
The pull model is based on calculation at the moment of the request. As such extension of the adaptation model is a lot easier than with push models.
One problem with the functional model, though, is that the recorded data has very little value on its own. For adaptation purposes one would prefer to know concepts of user behaviour, not individual events. Push adaptation makes sure that concept generation needs to be done only once; certain concept generation rules might be quite complex and would take a long time to recalculate on every use. To allow this in the pull model, caching could be very helpful.
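A toy counterpart for pull adaptation, assuming the persistent model is a timestamped event record and properties are functions evaluated (and cached) at request time; again, all names are invented for illustration:

```python
# Pull adaptation: store raw events, derive properties on demand.
from collections import Counter

event_log = [(1, "purple"), (2, "red"), (3, "purple")]  # (timestamp, colour)
_cache = {}

def favourite_colour(since=0):
    """Answer the property by querying the event record at request time."""
    key = ("favourite_colour", since)
    if key not in _cache:
        counts = Counter(c for t, c in event_log if t >= since)
        _cache[key] = counts.most_common(1)[0][0] if counts else None
    return _cache[key]

print(favourite_colour())          # -> purple
print(favourite_colour(since=3))   # time-based analysis is easy: -> purple
```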
- **Adaptability.** Pull models have problems with adaptability. This is caused by the fact that the user model stores huge amounts of abstract facts. One cannot expect even experts to be able to make changes with predictable results in such a user model. An exception is that excluding time periods is easy in pull models: all events have a timestamp, and removal of facts just leads to different results of the functions.
- **Speed.** As user models that store events can get very big there is certainly the need to use extensive caching of intermediate results. The language used to query the user model could provide tools for incremental queries, where old results get enhanced with newer facts. Also the set of matching events can be stored to be used as a base for the query at a later time.
- **Extensibility.** The pull adaptation model scores very well on the point of extensibility. As abstract events are stored, there will be many cases where new user attributes can be derived from behaviour recorded before the attribute was introduced.
- **Model size.** Model size is a disadvantage of the pull adaptation model. With a little loss of model quality, though, old events could be aggregated into smaller parts or even discarded. If the number of users of the system is not very high, we do not believe model size is a big problem.
- **Analysis possibilities.** The pull adaptation model allows for more analysis possibilities. As all data in the user model is time-stamped, time-based analysis and aging are easily performed. There are no analysis possibilities in the push model that are not available in a pull model.
Pull-based adaptation models are currently not common. They are especially useful in cases where combinations of events need to be analysed to retrieve the goals of a user. A pull-based adaptation model is used, for example, in [12]. That article studies the interaction of users with a word processor; this interaction is used to make recommendations to the user on doing things more efficiently. Another example of pull models are attentive systems, which need to determine whether a user can be disturbed. These systems are highly dynamic and thus do not fit well with the static nature of the push model. Examples of these systems can be found in [13]. Other pull systems can be found in: [7], [14] and [15].
7 Hybrid Adaptation Models
Both adaptation models have their advantages and disadvantages. The push model, for example, might need workarounds for properties like age (dynamic properties that change every second). The pull model is not very good at storing static user properties, and can be very space inefficient.
Looking at the two phases of the user modeling process we can see that while the model use phase is especially suited for a pull approach, the modeling phase is more directed towards a push approach. We can exploit this with a hybrid adaptation model, which can combine the advantages of both pure models. Basically, the push model has a place in the user modelling phase and the pull model in the adaptation phase.
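A toy hybrid sketch combines both directions: events are pushed into system-independent user properties, while system-specific questions are answered by pull at query time (names invented for illustration):

```python
# Hybrid adaptation: push in the modeling phase, pull in the adaptation phase.
user_model = {"favourite_colour": None, "_counts": {}}

def push_event(colour):                  # modeling phase (push)
    c = user_model["_counts"]
    c[colour] = c.get(colour, 0) + 1
    user_model["favourite_colour"] = max(c, key=c.get)

def new_box_background():                # adaptation phase (pull)
    """System concept: what background colour should a new box have?"""
    return user_model["favourite_colour"] or "white"

push_event("purple")
print(new_box_background())              # -> purple
```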
- **Adaptability.** By storing system-independent user properties, the hybrid system can offer the user clear high-level properties to change, rather than abstract events with unclear results (pull) or many system-specific properties with overly localised results (push). This could mean that the adaptability of a hybrid system is better than both the rule-based and functional approaches.
This adaptability advantage could vanish if the rule-based and functional models offer adaptability of intermediate concepts that are at the same position as the user properties of the hybrid model.
- **Speed.** Hybrid adaptation models should relieve many of the possible speed problems of the functional model, as they can reduce the complexity of the event store. They also avoid the rule explosion that comes with a big interrelated push model.
- **Extensibility.** The modeling process goes from very system specific events to less system dependent concepts. Those system independent concepts can be building blocks for extension. System dependent events cannot really do that. So there is no real loss in extensibility when using a hybrid model where concepts are stored that are less system dependent.
- **Model size.** In the hybrid model the model size can be significantly lower than in the pull model, since not single events but higher-level concepts are stored.
- **Analysis possibilities.** As hybrid adaptation models allow for different adaptation strategies for different properties, they can retain most of the analysis possibilities that function-based adaptation models have. At the same time hybrid adaptation models can take advantage of properties of rule-based adaptation models where the analysis possibilities offered by a function-based approach is not necessary.
Hybrid adaptation models are more common than one would expect. They can often be found in systems where no special effort was put into the adaptation model. One area where they are almost unavoidable is that of recommender systems. These systems tend to be focused on document–user matching techniques. Many of these systems build a single “user model” out of the event history of the user–system interaction (push). Those user models are then used at query time to rank different recommendations (pull). Examples of recommender systems can be found in: [8], [16]
8 Conclusion
In this paper we have introduced a framework for classifying user modeling systems. With this framework we have shown that there are two basic categories of adaptation: rule-based adaptation and function-based adaptation. We have pointed out several examples of such systems.
Besides the rule-based and function-based systems there is also the possibility of hybrid systems. We believe these hybrid systems can solve the problems of both pure approaches and combine their strong points.
We also pointed out that user modeling systems can have differing system dependence. This system dependence measure can be an indication of ease of extensibility of the system.
References
Mobile Application for Visualisation of External Signal on PDA device in Real Time
Radim Štěrba
Department of Measurement and Control,
Faculty of Electrical Engineering and Computer Science
VSB Technical University of Ostrava,
Ostrava-Poruba, Czech Republic
radim.sterba.st@vsb.cz
Abstract— The work is focused on creating a mobile application that is able to capture and visualize an external audio signal. The aim was to create an application that can serve as a building block in the development of a digital oscilloscope. The application is written for the .NET Compact Framework platform and Windows Mobile; the entire application is written in C#. Today's digital oscilloscopes are fairly large, heavy and largely dependent on external electrical power; they are adapted to work in the laboratory. The motivation for this work was to use a handheld computer to replace the digital oscilloscope at least partially. The result is an application able to accurately visualize an external audio signal up to frequencies around 3000 Hz. This limitation is due to the parameters of the Pocket PC (microphone, A/D converter, sound card and PDA processing power). The built-in microphone is used to capture the external audio signal.
Keywords – Windows Mobile; .NET Compact Framework; C#; PDA; digital oscilloscope
I. INTRODUCTION
Mobile electronics brings us user comfort and simplifies our lives. PDAs, mobile phones and various communicators are now a normal part of our lives. Thanks to great technological developments they are very powerful today: they have powerful processors with high clock frequencies, system memory of hundreds of megabytes, and large colour displays with high resolutions. On the communication side, PDAs are mostly equipped with USB ports, and they can communicate wirelessly thanks to IrDA, Bluetooth, WiFi, GSM, GPRS and UMTS.
Today's PDAs can play videos and music, allow web browsing, and can locate and navigate via an integrated GPS receiver. With their powerful processors and large operating memory they can run very demanding applications. But it has not always been that way: PDAs were originally meant to help with task planning, time organization and contact management. Besides multimedia content, PDAs can also run office applications (Microsoft Word, Microsoft Excel and others). Picture 1 shows what a PDA may look like.

Picture 1: Example of PDA. [2]
The whole problem of visualization of an external audio signal can be divided into three blocks:
- Reading the external audio signal
- Processing the audio signal
- Plotting the signal on the PDA screen
The first block is the reading of the external audio signal (sound); it works with the PDA's built-in microphone and sound card. The external audio signal is read by the microphone and converted into a discrete signal. The second block processes the discrete signal into a form suitable as input data for drawing on the screen. The third block renders the signal on the screen of the PDA device. The rendering is handled by the PDAGraph component, which is described below.
The quality of the visualization depends on the quality of the hardware. Built-in PDA microphones are designed primarily to record human speech, whose frequency range is roughly 300 Hz to 4000 Hz, so we cannot expect high-quality visualization of very low and very high frequencies of the input audio signal. Another limiting factor may be the PDA's hardware power, especially the processor clock speed and the size of the RAM.
II. APPLICATION PDASCOPE
The PDAScope application is designed for PDA devices running the Windows Mobile operating system. It is written in the C# programming language and uses the .NET Compact Framework platform, which is designed for mobile devices. The application was developed and tested in the Microsoft Visual Studio 2008 environment. Windows Mobile 5.0 or higher with .NET Compact Framework 3.5 or higher are the minimum requirements for this application. A preview of the application running in the emulator is shown in Picture 2.
Picture 3 shows a block diagram of the PDAScope application. The first block represents signal scanning, i.e. the part that deals with the hardware, in this case the sound card and the built-in microphone. The microphone records sound and converts it into an electrical signal. The audio signal is processed by an algorithm that works with the Microsoft classes for sound processing. Rendering is provided by the PDAGraph component, which was created within this project and is described below.
Picture 3: Block diagram of the application.
The first block of the diagram represents the part of the program that scans the external signal from the built-in microphone. Communication with the microphone is provided by the WaveIn.cs class [3]. In this block the external analog signal is converted into discrete values ranging from 0 to 255. The level of the analog signal is represented by these discrete values: the analog zero (no signal at the microphone output) is represented by the value 128, values from 0 to 127 represent negative deflections, and values from 129 to 255 represent positive deflections of the analog signal.
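This 0 to 255 mapping is the usual convention for 8-bit unsigned PCM. As a minimal sketch (the helper name and signature are ours, not taken from WaveIn.cs), recovering signed deflections from the raw samples could look like this:

```csharp
// Minimal sketch: convert 8-bit unsigned PCM samples (0..255) into signed
// deflections around the analog zero at 128, as described in the text.
static int[] ToDeflections(byte[] rawSamples)
{
    int[] deflections = new int[rawSamples.Length];
    for (int i = 0; i < rawSamples.Length; i++)
    {
        deflections[i] = rawSamples[i] - 128; // <0 negative, >0 positive deflection
    }
    return deflections;
}
```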
The audio signal processing block is represented by the PDAScopeWaveIn.cs and Record.cs classes. The PDAScopeWaveIn.cs class generates a field of samples from the raw data. A copy of this field is created in Form1.cs, and the application continues to work with the copy; it serves as the input for rendering and other calculations.
The last part renders the scanned waveform on the PDA screen. Rendering is provided by the user component described below, which offers two methods for this purpose: AddGraph and DeleteGraph. During each redraw, the field of samples is passed as a parameter to the AddGraph method and depicted. The depicted waveform is left on the display for a short while (20 ms) and is then removed with the DeleteGraph method. This cycle is repeated at a high redraw rate.
III. SCANNING THE ANALOG SIGNAL
The microphone is connected to the PDA through the sound card. The input signal is, of course, analog, and it must be converted to a digital signal by an A/D converter. This conversion is done by sampling. Sampling is the process of discretizing a signal in its time domain: the current value of the signal (the sample) is recorded at given points in time. These samples are shown as red circles in Picture 4. The picture shows that sampling loses many details of the original signal; even when we increase the sampling rate, we still get only a set of discrete points. Sampling is governed by the Shannon sampling theorem, which states that an accurate reconstruction of a continuous, band-limited signal from its samples is possible if the signal is sampled at a frequency at least twice the maximum frequency of the reconstructed signal. In practice, the sampling frequency is chosen as twice the maximum transmitted frequency plus a small reserve [4].
A human is able to perceive frequencies from 20 Hz to 20 kHz; therefore the sampling frequency should be at least 40 kHz. In practice, a sampling frequency of 44100 Hz is usually selected. In the PDAScope application the recording quality is set to 8 bits per sample and the sampling frequency to 22050 Hz. This is configured in the WaveIn.cs class and can be changed, but the PDA's hardware limits, such as its computing power and display capabilities, must be respected.
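For illustration, a recording format matching these settings could be described with a WAVEFORMATEX-style structure. This is a hedged sketch: the actual field and type names in WaveIn.cs may differ, and only the values (8 bits per sample, 22050 Hz, mono) come from the text.

```csharp
// Sketch of a PCM recording format matching the settings described above.
// The structure mirrors the Win32 WAVEFORMATEX layout; WaveIn.cs may name
// these fields differently.
struct WaveFormat
{
    public short FormatTag;      // 1 = WAVE_FORMAT_PCM
    public short Channels;       // 1 = mono (built-in PDA microphone)
    public int SamplesPerSec;    // sampling frequency in Hz
    public int AvgBytesPerSec;   // SamplesPerSec * BlockAlign
    public short BlockAlign;     // Channels * BitsPerSample / 8
    public short BitsPerSample;  // recording quality in bits per sample
    public short ExtraSize;      // 0 for plain PCM
}

static WaveFormat PdaScopeFormat()
{
    WaveFormat f;
    f.FormatTag = 1;             // PCM
    f.Channels = 1;              // mono
    f.SamplesPerSec = 22050;     // half of CD quality; Nyquist limit ~11 kHz
    f.BitsPerSample = 8;         // 8 bits per sample, as set in PDAScope
    f.BlockAlign = 1;            // 1 channel * 8 bits / 8 = 1 byte per frame
    f.AvgBytesPerSec = 22050;    // 22050 frames/s * 1 byte per frame
    f.ExtraSize = 0;
    return f;
}
```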
There are many types of microphones, each based on a different physical principle and therefore with different properties. Currently, the most common type is the electrostatic microphone, also known as a condenser microphone. It is characterized by high input impedance, a balanced input characteristic and high sensitivity. Electrostatic microphones are among the best in terms of quality and are often used for professional recording. Handheld computers do not carry microphones of such professional recording quality, but their built-in microphones are good enough for capturing, for example, the spoken word.
A condenser microphone works as follows: acoustic oscillations vibrate the membrane, which forms one of the electrodes of a capacitor connected to an electrical circuit. The capacitance changes in rhythm with the changing position of the membrane. This change is converted into an electrical signal either by powering the microphone capsule from a very soft source of polarizing voltage and sensing the voltage with a high-input-impedance preamplifier, or by using the capsule's capacitance as part of a high-frequency oscillator, which is detuned by the capacitance changes; the frequency-modulated signal is then demodulated in a following circuit [6].
IV. COMPONENT PDAGRAPH
The PDAGraph component is used for rendering the measured data. It was written specifically for this digital oscilloscope application, is implemented in C#, and exposes six basic methods for working with the component, all described below. For each waveform to be visualized, the component receives the color of the curve, the thickness of the curve, and the waveform itself, represented as a two-dimensional array containing the x, y coordinates of the points from which the waveform is composed.
The component is written so that it can adapt to the screen of each PDA. It is therefore always necessary to know the width and height of the panel in which the waveform is rendered. These dimensions can be obtained with the GetPanelWidth and GetPanelHeight methods, which always return an integer size in pixels.
When the application starts, two vertical scales are rendered on the right and left sides. This rendering is handled by the Panel2_Paint and Panel3_Paint methods, both described below.
AddGraph Method:
AddGraph is the method that directly renders the chart. It has three arguments: an array of points, a line width and a line color. The first argument is the array of points, namely the x, y coordinates of the points. The second argument is the line width, an integer. The last argument is the line color, of type Color. The method is called as PDA_Graph.AddGraph(points, 3, Color.Blue), where points is the array of points, 3 is the width (the trace will be 3 pixels wide) and Color.Blue is the trace color (in this case blue). The AddGraph method is called on every screen redraw, together with the DeleteGraph method. A redraw cycle starts by calling AddGraph, which renders the waveform; the thread responsible for drawing is then put to sleep for 20 ms, so that the waveform remains visible, and the waveform is immediately deleted with DeleteGraph. This is repeated constantly, achieving a continuous rendering.
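Putting these pieces together, the redraw cycle could be sketched as follows. PDA_Graph, AddGraph, DeleteGraph (described next) and the 20 ms pause come from the text; the loop scaffolding, the running flag and the Point[] type for the coordinates are our assumptions.

```csharp
using System.Drawing;
using System.Threading;

// Sketch of the redraw cycle described above, as it might appear in Form1.cs.
void RenderLoop(Point[] points)
{
    while (running)
    {
        PDA_Graph.AddGraph(points, 3, Color.Blue); // draw a 3 px wide blue trace
        Thread.Sleep(20);                          // leave it visible for 20 ms
        PDA_Graph.DeleteGraph();                   // clear before the next frame
    }
}
```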
DeleteGraph Method:
This method deletes the entire contents of the chart, removing the picture of the waveform. It is called on each redraw. The method has no arguments, so it is enough to call PDA_Graph.DeleteGraph().
Panel2_Paint Method:
The Panel2_Paint method draws the vertical scale on the left side of the component; the scale is used for better orientation. This method is called immediately at start-up, so the vertical scale is rendered as soon as the application starts.
Panel3_Paint Method:
This method is similar to Panel2_Paint; Panel3_Paint draws the vertical scale on the right side of the component.
GetPanelWidth Method:
This method returns the width of the panel in which the waveform is rendered. Each PDA has a different screen size and, above all, a different resolution, so the size of the component used for rendering must be queried at run time. Both GetPanelWidth and GetPanelHeight are parameterless and always return the panel size in pixels as an integer.
GetPanelHeight Method:
The GetPanelHeight method returns the height of the panel in which the captured signal is rendered. As described above, each PDA has a different screen size, so the rendering parameters must be adjusted dynamically.
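As an illustration of how these two methods are typically used, the following hypothetical helper (not part of PDAGraph, introduced here for illustration only) scales the 0 to 255 samples onto the panel before they are passed to AddGraph:

```csharp
using System.Drawing;

// Hypothetical helper: map raw 8-bit samples onto the rendering panel,
// using the dimensions reported by GetPanelWidth/GetPanelHeight.
Point[] ScaleToPanel(byte[] samples)
{
    int w = PDA_Graph.GetPanelWidth();   // panel width in pixels
    int h = PDA_Graph.GetPanelHeight();  // panel height in pixels
    Point[] pts = new Point[samples.Length];
    for (int i = 0; i < samples.Length; i++)
    {
        int x = i * w / samples.Length;     // spread samples across the width
        int y = h - samples[i] * h / 255;   // map 0..255 bottom-to-top
        pts[i] = new Point(x, y);
    }
    return pts;
}
```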
A preview of the component is shown in Picture 5. The most important part is the white area where the captured waveforms are rendered; there are vertical scales on the sides. The colors in which a waveform can be rendered are changeable in the settings; blue is the default.

V. THREADS IN THE APPLICATION
The application is written with threads in mind. Specifically, there are two threads in the application: one responsible for recording the external audio signal and one for rendering to the PDA screen.
The synchronization object syncObj of type EventWaitHandle, defined in the Form1.cs class, ensures the synchronization of the threads. Context switching between the threads is manual, given by the setting EventResetMode.ManualReset. Recording happens first, provided by the thread called recTh in the Record.cs class. Record.cs contains the important constant MAX_RECORD, which determines how long sound is recorded from the built-in microphone; it is set to 30 ms. After the recording is initialized, the availability of the hardware device (the built-in microphone) is checked, and if the microphone is available the recording starts. The recTh thread is suspended during the recording, and the recording is then stopped after the time given by MAX_RECORD. After checking the length of the field where the data is stored, the context switches to the graphTh thread via Form1.syncObj.Set(). The graphTh thread runs in the Form1.cs class and always waits until recTh stops recording; the waiting is provided by Form1.syncObj.WaitOne(). The data is then first copied into a local field under a lock, to avoid concurrent access to the memory and to keep the application thread-safe; from then on only the local data field is used. Next, the maximum and average values are calculated, and the data samples are rendered on the PDA display in the chosen color. The waveform stays on the display only for a specific time, chosen as MAX_RECORD - 10, i.e. 20 ms. Then the waveform is deleted, control is handed back to the recTh thread via Form1.syncObj.Reset(), and the whole cycle repeats.
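The handshake between the two threads could be sketched as follows. syncObj, EventResetMode.ManualReset, MAX_RECORD, the Set/WaitOne/Reset calls and the lock come from the text; RecordFromMicrophone, DrawSamples, sharedBuffer and bufferLock are hypothetical placeholders.

```csharp
using System.Threading;

// Hedged sketch of the recTh/graphTh handshake described above.
static readonly EventWaitHandle syncObj =
    new EventWaitHandle(false, EventResetMode.ManualReset);
const int MAX_RECORD = 30;                // ms of audio captured per cycle

void RecordLoop()                         // body of the recTh thread
{
    while (true)
    {
        RecordFromMicrophone(MAX_RECORD); // capture ~30 ms of samples
        syncObj.Set();                    // wake graphTh for rendering
    }
}

void RenderLoop()                         // body of the graphTh thread
{
    while (true)
    {
        syncObj.WaitOne();                // wait for a finished recording
        byte[] local;
        lock (bufferLock)                 // copy under lock for thread safety
        {
            local = (byte[])sharedBuffer.Clone();
        }
        DrawSamples(local);               // render; also compute max/average
        Thread.Sleep(MAX_RECORD - 10);    // keep the trace visible ~20 ms
        syncObj.Reset();                  // hand control back to recTh
    }
}
```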
VI. TESTING THE APPLICATION
The functionality of the application was tested in Microsoft Visual Studio 2008 and mainly on a real device, a Fujitsu-Siemens N560. An emulator does not always behave exactly like a real device, and since this application accesses the hardware, it is always better to test on a real PDA.
<table>
<thead>
<tr>
<th>Producer</th>
<th>Fujitsu-Siemens</th>
</tr>
</thead>
<tbody>
<tr><td>Operating system</td><td>Windows Mobile 5, Premium Edition</td></tr>
<tr><td>Processor type</td><td>Intel PXA270; XScale</td></tr>
<tr><td>Processor frequency</td><td>624 MHz</td></tr>
<tr><td>ROM memory</td><td>64 MB</td></tr>
<tr><td>RAM memory</td><td>128 MB</td></tr>
<tr><td>Display</td><td>3.5"; TFT; 65,536 colors</td></tr>
<tr><td>Display resolution</td><td>640 x 480 (VGA)</td></tr>
<tr><td>Memory card slot</td><td>yes; SD</td></tr>
<tr><td>Microphone</td><td>yes; mono</td></tr>
<tr><td>Speaker</td><td>yes; mono</td></tr>
<tr><td>USB</td><td>yes; USB 1.1</td></tr>
<tr><td>Wi-Fi</td><td>yes</td></tr>
<tr><td>Bluetooth</td><td>yes; Bluetooth 1.2</td></tr>
<tr><td>GSM</td><td>no</td></tr>
<tr><td>GPS</td><td>yes</td></tr>
<tr><td>Number of GPS channels</td><td>20</td></tr>
<tr><td>GPS chip</td><td>SiRF Star III</td></tr>
<tr><td>Battery</td><td>Li-Ion; 1200 mAh</td></tr>
<tr><td>Weight</td><td>180 g</td></tr>
<tr><td>Dimensions</td><td>115 x 71 x 14 mm</td></tr>
</tbody>
</table>
Table 1: Parameters of the PDA Fujitsu-Siemens N560.
Table 1 shows the parameters of the PDA on which the PDAScope application was tested. The equipment is several years old and therefore cannot be compared with today's latest PDAs. The very small RAM is probably the most limiting parameter.
The first phase of testing consisted of visualizing a familiar signal, such as human speech. This test verified that the PDAScope application can indeed visualize an external audio signal in some form. The testing method is illustrated in Picture 6.
A print screen of the PDA display during the testing is shown below. The signal depicted in Picture 7 corresponds to the human voice.

Picture 6: Principles of the first phase of testing the PDAScope application.
The second phase of testing was aimed at signals whose waveform over time is known; a sine function is an example. A mobile phone was used as the signal source (signal generator): an audio file in MP3 format containing a sine wave of a given frequency was played from the phone. A frequency of 1000 Hz was chosen in this case.
The application always visualized the sine wave with a small error at the left edge of the screen, i.e. at the beginning of each scanning cycle. The cause of this distortion was not found during the entire period of development and testing. The distortion is shown in Picture 9, which shows a sine wave at a frequency of 1000 Hz.

Picture 7: Print screen of the display during the first phase of testing.

Picture 8: Principles of the second phase of testing the PDAScope application.
Several other frequencies were tested to find out up to which frequency the application can accurately visualize a sinusoidal signal. The testing proceeded by increasing the frequency in steps of 1000 Hz; sinusoidal signals of 1000 Hz, 2000 Hz, 3000 Hz, 4000 Hz and 5000 Hz were tested. At 4000 Hz the signal was already almost unreadable. It follows from the test that the application can visualize signals up to a frequency of about 3000 Hz. The distortion described in this chapter is no longer identifiable when visualizing such high frequencies. A print screen of the display while visualizing a 3000 Hz sinusoidal signal is shown below in Picture 10.

Picture 9: Print screen of the display during the second phase of testing.
Picture 10: Visualization of the signal with a frequency of 3000 Hz.
Picture 11 shows the final test, which concerns the Measure button. When this button is pressed, the maximum and average values of the displayed signal are calculated and shown in decibels.
VII. CONCLUSIONS
Through this work, a digital oscilloscope application able to visualize an external audio signal was realized. Together with the application, the PDAGraph component for rendering graphs on mobile devices was created; a benefit of the component is its applicability in other projects.
Testing showed that the maximum frequency the application was able to truly visualize was about 3000 Hz. The PDA Fujitsu-Siemens N560 was used for the testing.
REFERENCES
[1] Personal Digital Assistant; http://cs.wikipedia.org/wiki/Personal_Digital_Assistant (Date of quotation 02. 05. 2011)
[2] Example of PDA; http://blogs.totalpda.co.uk/2009/06/work-and-play-with-the-fujitsusiemens-pocket-loox-n560/ (Date of quotation 02. 05. 2011)
[3] Recording and Playing Sound with the Waveform Audio Interface; http://msdn.microsoft.com/en-us/library/aa446573.aspx (Date of quotation 03. 05. 2011)
[4] Vzorkování (Sampling); http://cs.wikipedia.org/wiki/Vzorkov%C3%A1n%C3%AD (Date of quotation 02. 05. 2011)
[5] Example of signal sampling; http://upload.wikimedia.org/wikipedia/commons/a/a0/Vzorkov%C3%A1n%C3%AD.png (Date of quotation 02. 05. 2011)
[6] Mikrofon (Microphone); http://cs.wikipedia.org/wiki/Mikrofon (Date of quotation 02. 05. 2011)